00:00:00.001 Started by upstream project "autotest-nightly" build number 3921 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3296 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.143 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.143 The recommended git tool is: git 00:00:00.144 using credential 00000000-0000-0000-0000-000000000002 00:00:00.145 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.188 Fetching changes from the remote Git repository 00:00:00.190 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.240 Using shallow fetch with depth 1 00:00:00.241 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.241 > git --version # timeout=10 00:00:00.278 > git --version # 'git version 2.39.2' 00:00:00.278 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.295 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.295 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.426 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.435 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.444 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:06.444 > git config core.sparsecheckout # timeout=10 00:00:06.456 > git read-tree -mu HEAD # timeout=10 00:00:06.472 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5 00:00:06.488 Commit message: "packer: Add bios builder" 00:00:06.488 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10 00:00:06.572 [Pipeline] Start of Pipeline 00:00:06.583 [Pipeline] library 00:00:06.584 Loading library shm_lib@master 00:00:06.584 Library shm_lib@master is cached. Copying from home. 00:00:06.597 [Pipeline] node 00:00:06.618 Running on VM-host-SM4 in /var/jenkins/workspace/ubuntu22-vg-autotest_2 00:00:06.619 [Pipeline] { 00:00:06.628 [Pipeline] catchError 00:00:06.629 [Pipeline] { 00:00:06.638 [Pipeline] wrap 00:00:06.645 [Pipeline] { 00:00:06.650 [Pipeline] stage 00:00:06.652 [Pipeline] { (Prologue) 00:00:06.668 [Pipeline] echo 00:00:06.669 Node: VM-host-SM4 00:00:06.676 [Pipeline] cleanWs 00:00:06.686 [WS-CLEANUP] Deleting project workspace... 00:00:06.686 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.693 [WS-CLEANUP] done 00:00:06.861 [Pipeline] setCustomBuildProperty 00:00:06.945 [Pipeline] httpRequest 00:00:06.961 [Pipeline] echo 00:00:06.963 Sorcerer 10.211.164.101 is alive 00:00:06.971 [Pipeline] httpRequest 00:00:06.976 HttpMethod: GET 00:00:06.976 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:06.977 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:06.978 Response Code: HTTP/1.1 200 OK 00:00:06.978 Success: Status code 200 is in the accepted range: 200,404 00:00:06.979 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest_2/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:08.023 [Pipeline] sh 00:00:08.303 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:08.318 [Pipeline] httpRequest 00:00:08.354 [Pipeline] echo 00:00:08.356 Sorcerer 10.211.164.101 is alive 00:00:08.363 [Pipeline] httpRequest 00:00:08.367 HttpMethod: GET 00:00:08.367 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:08.368 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:08.390 Response Code: HTTP/1.1 200 OK 00:00:08.391 Success: Status code 200 is in the accepted range: 200,404 00:00:08.391 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest_2/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:01:02.538 [Pipeline] sh 00:01:02.822 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:01:05.367 [Pipeline] sh 00:01:05.647 + git -C spdk log --oneline -n5 00:01:05.648 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:01:05.648 fc2398dfa raid: clear base bdev configure_cb after executing 00:01:05.648 5558f3f50 raid: complete bdev_raid_create after sb is written 00:01:05.648 d005e023b raid: fix empty slot not updated in sb after resize 00:01:05.648 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:01:05.666 [Pipeline] writeFile 00:01:05.682 [Pipeline] sh 00:01:05.967 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:05.979 [Pipeline] sh 00:01:06.261 + cat autorun-spdk.conf 00:01:06.261 SPDK_TEST_UNITTEST=1 00:01:06.261 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:06.261 SPDK_TEST_NVME=1 00:01:06.261 SPDK_TEST_BLOCKDEV=1 00:01:06.261 SPDK_RUN_ASAN=1 00:01:06.261 SPDK_RUN_UBSAN=1 00:01:06.261 SPDK_TEST_RAID5=1 00:01:06.261 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:06.268 RUN_NIGHTLY=1 00:01:06.270 [Pipeline] } 00:01:06.286 [Pipeline] // stage 00:01:06.301 [Pipeline] stage 00:01:06.303 [Pipeline] { (Run VM) 00:01:06.317 [Pipeline] sh 00:01:06.600 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:06.600 + echo 'Start stage prepare_nvme.sh' 00:01:06.600 Start stage prepare_nvme.sh 00:01:06.600 + [[ -n 4 ]] 00:01:06.600 + disk_prefix=ex4 00:01:06.600 + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest_2 ]] 00:01:06.600 + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest_2/autorun-spdk.conf ]] 00:01:06.600 + source /var/jenkins/workspace/ubuntu22-vg-autotest_2/autorun-spdk.conf 00:01:06.600 ++ SPDK_TEST_UNITTEST=1 00:01:06.600 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:06.600 ++ SPDK_TEST_NVME=1 00:01:06.600 ++ SPDK_TEST_BLOCKDEV=1 00:01:06.600 ++ SPDK_RUN_ASAN=1 00:01:06.600 ++ SPDK_RUN_UBSAN=1 00:01:06.600 ++ SPDK_TEST_RAID5=1 00:01:06.600 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:06.600 ++ RUN_NIGHTLY=1 00:01:06.600 + cd /var/jenkins/workspace/ubuntu22-vg-autotest_2 00:01:06.600 + nvme_files=() 00:01:06.600 + declare -A nvme_files 00:01:06.600 + backend_dir=/var/lib/libvirt/images/backends 00:01:06.600 + nvme_files['nvme.img']=5G 00:01:06.600 + nvme_files['nvme-cmb.img']=5G 00:01:06.600 + nvme_files['nvme-multi0.img']=4G 00:01:06.600 + nvme_files['nvme-multi1.img']=4G 00:01:06.600 + nvme_files['nvme-multi2.img']=4G 00:01:06.600 + nvme_files['nvme-openstack.img']=8G 00:01:06.600 + nvme_files['nvme-zns.img']=5G 00:01:06.600 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:06.600 + (( SPDK_TEST_FTL == 1 )) 00:01:06.600 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:06.600 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:06.600 + for nvme in "${!nvme_files[@]}" 00:01:06.600 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:06.600 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:06.600 + for nvme in "${!nvme_files[@]}" 00:01:06.600 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:06.600 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:06.600 + for nvme in "${!nvme_files[@]}" 00:01:06.600 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:06.860 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:06.860 + for nvme in "${!nvme_files[@]}" 00:01:06.860 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:06.860 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:06.860 + for nvme in "${!nvme_files[@]}" 00:01:06.860 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:06.860 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:06.860 + for nvme in "${!nvme_files[@]}" 00:01:06.860 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:07.118 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:07.118 + for nvme in "${!nvme_files[@]}" 00:01:07.118 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:07.118 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:07.118 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:07.377 + echo 'End stage prepare_nvme.sh' 00:01:07.377 End stage prepare_nvme.sh 00:01:07.388 [Pipeline] sh 00:01:07.670 + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:07.670 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -H -a -v -f ubuntu2204 00:01:07.670 00:01:07.670 DIR=/var/jenkins/workspace/ubuntu22-vg-autotest_2/spdk/scripts/vagrant 00:01:07.670 SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest_2/spdk 00:01:07.670 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest_2 00:01:07.670 HELP=0 00:01:07.670 DRY_RUN=0 00:01:07.670 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img, 00:01:07.670 NVME_DISKS_TYPE=nvme, 00:01:07.670 NVME_AUTO_CREATE=0 00:01:07.670 NVME_DISKS_NAMESPACES=, 00:01:07.670 NVME_CMB=, 00:01:07.670 NVME_PMR=, 00:01:07.670 NVME_ZNS=, 00:01:07.670 NVME_MS=, 00:01:07.670 NVME_FDP=, 00:01:07.670 SPDK_VAGRANT_DISTRO=ubuntu2204 00:01:07.670 SPDK_VAGRANT_VMCPU=10 00:01:07.670 SPDK_VAGRANT_VMRAM=12288 00:01:07.670 SPDK_VAGRANT_PROVIDER=libvirt 00:01:07.670 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:07.670 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:07.670 SPDK_OPENSTACK_NETWORK=0 
00:01:07.670 VAGRANT_PACKAGE_BOX=0 00:01:07.670 VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:07.670 FORCE_DISTRO=true 00:01:07.670 VAGRANT_BOX_VERSION= 00:01:07.670 EXTRA_VAGRANTFILES= 00:01:07.670 NIC_MODEL=e1000 00:01:07.670 00:01:07.671 mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt' 00:01:07.671 /var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest_2 00:01:10.206 Bringing machine 'default' up with 'libvirt' provider... 00:01:10.773 ==> default: Creating image (snapshot of base box volume). 00:01:10.773 ==> default: Creating domain with the following settings... 00:01:10.773 ==> default: -- Name: ubuntu2204-22.04-1711172311-2200_default_1721932091_f3a1cedc15e85be8c59d 00:01:10.773 ==> default: -- Domain type: kvm 00:01:10.773 ==> default: -- Cpus: 10 00:01:10.773 ==> default: -- Feature: acpi 00:01:10.773 ==> default: -- Feature: apic 00:01:10.773 ==> default: -- Feature: pae 00:01:10.773 ==> default: -- Memory: 12288M 00:01:10.773 ==> default: -- Memory Backing: hugepages: 00:01:10.773 ==> default: -- Management MAC: 00:01:10.773 ==> default: -- Loader: 00:01:10.773 ==> default: -- Nvram: 00:01:10.773 ==> default: -- Base box: spdk/ubuntu2204 00:01:10.773 ==> default: -- Storage pool: default 00:01:10.773 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1721932091_f3a1cedc15e85be8c59d.img (20G) 00:01:10.773 ==> default: -- Volume Cache: default 00:01:10.773 ==> default: -- Kernel: 00:01:10.773 ==> default: -- Initrd: 00:01:10.773 ==> default: -- Graphics Type: vnc 00:01:10.773 ==> default: -- Graphics Port: -1 00:01:10.773 ==> default: -- Graphics IP: 127.0.0.1 00:01:10.773 ==> default: -- Graphics Password: Not defined 00:01:10.773 ==> default: -- Video Type: cirrus 00:01:10.773 ==> default: -- Video VRAM: 9216 00:01:10.773 ==> default: -- Sound Type: 00:01:10.773 ==> default: -- Keymap: en-us 00:01:10.773 ==> default: -- TPM Path: 00:01:10.773 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:10.773 ==> default: -- Command line args: 00:01:10.773 ==> default: -> value=-device, 00:01:10.773 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:10.773 ==> default: -> value=-drive, 00:01:10.774 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:01:10.774 ==> default: -> value=-device, 00:01:10.774 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:11.032 ==> default: Creating shared folders metadata... 00:01:11.032 ==> default: Starting domain. 00:01:12.411 ==> default: Waiting for domain to get an IP address... 00:01:24.631 ==> default: Waiting for SSH to become available... 00:01:24.631 ==> default: Configuring and enabling network interfaces... 00:01:29.909 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:35.179 ==> default: Mounting SSHFS shared folder... 00:01:36.559 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output 00:01:36.559 ==> default: Checking Mount.. 00:01:37.129 ==> default: Folder Successfully Mounted! 00:01:37.129 ==> default: Running provisioner: file... 00:01:37.698 default: ~/.gitconfig => .gitconfig 00:01:37.958 00:01:37.958 SUCCESS! 
00:01:37.958 00:01:37.958 cd to /var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt and type "vagrant ssh" to use. 00:01:37.958 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:37.958 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt" to destroy all trace of vm. 00:01:37.958 00:01:37.968 [Pipeline] } 00:01:37.986 [Pipeline] // stage 00:01:37.996 [Pipeline] dir 00:01:37.996 Running in /var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt 00:01:37.998 [Pipeline] { 00:01:38.013 [Pipeline] catchError 00:01:38.015 [Pipeline] { 00:01:38.029 [Pipeline] sh 00:01:38.310 + vagrant ssh-config --host vagrant 00:01:38.310 + sed -ne /^Host/,$p 00:01:38.310 + tee ssh_conf 00:01:41.599 Host vagrant 00:01:41.599 HostName 192.168.121.164 00:01:41.599 User vagrant 00:01:41.599 Port 22 00:01:41.599 UserKnownHostsFile /dev/null 00:01:41.599 StrictHostKeyChecking no 00:01:41.599 PasswordAuthentication no 00:01:41.599 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204 00:01:41.599 IdentitiesOnly yes 00:01:41.599 LogLevel FATAL 00:01:41.599 ForwardAgent yes 00:01:41.599 ForwardX11 yes 00:01:41.599 00:01:41.614 [Pipeline] withEnv 00:01:41.617 [Pipeline] { 00:01:41.636 [Pipeline] sh 00:01:41.919 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:41.919 source /etc/os-release 00:01:41.919 [[ -e /image.version ]] && img=$(< /image.version) 00:01:41.919 # Minimal, systemd-like check. 00:01:41.919 if [[ -e /.dockerenv ]]; then 00:01:41.919 # Clear garbage from the node's name: 00:01:41.919 # agt-er_autotest_547-896 -> autotest_547-896 00:01:41.919 # $HOSTNAME is the actual container id 00:01:41.919 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:41.919 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:41.919 # We can assume this is a mount from a host where container is running, 00:01:41.919 # so fetch its hostname to easily identify the target swarm worker. 00:01:41.919 container="$(< /etc/hostname) ($agent)" 00:01:41.919 else 00:01:41.919 # Fallback 00:01:41.919 container=$agent 00:01:41.919 fi 00:01:41.919 fi 00:01:41.919 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:41.919 00:01:42.192 [Pipeline] } 00:01:42.210 [Pipeline] // withEnv 00:01:42.219 [Pipeline] setCustomBuildProperty 00:01:42.234 [Pipeline] stage 00:01:42.236 [Pipeline] { (Tests) 00:01:42.254 [Pipeline] sh 00:01:42.537 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:42.811 [Pipeline] sh 00:01:43.093 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:43.365 [Pipeline] timeout 00:01:43.365 Timeout set to expire in 1 hr 30 min 00:01:43.367 [Pipeline] { 00:01:43.384 [Pipeline] sh 00:01:43.664 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:44.232 HEAD is now at 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
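The ssh_conf handling in the block above reduces to the sketch below: strip everything before the first "Host" stanza from Vagrant's SSH settings so later steps can reach the guest with stock ssh/scp. Commands and paths are taken from the log itself; anything beyond them is illustrative and assumes the same workspace layout.

  cd /var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt
  # Keep only the ssh_config fragment starting at "Host" and save a copy as ssh_conf.
  vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' | tee ssh_conf
  # Reuse the saved config with plain scp/ssh instead of "vagrant ssh":
  scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
  ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'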
00:01:44.244 [Pipeline] sh 00:01:44.551 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:44.826 [Pipeline] sh 00:01:45.107 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:45.384 [Pipeline] sh 00:01:45.666 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu22-vg-autotest ./autoruner.sh spdk_repo 00:01:45.926 ++ readlink -f spdk_repo 00:01:45.926 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:45.926 + [[ -n /home/vagrant/spdk_repo ]] 00:01:45.926 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:45.926 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:45.926 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:45.926 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:45.926 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:45.926 + [[ ubuntu22-vg-autotest == pkgdep-* ]] 00:01:45.926 + cd /home/vagrant/spdk_repo 00:01:45.926 + source /etc/os-release 00:01:45.926 ++ PRETTY_NAME='Ubuntu 22.04.4 LTS' 00:01:45.926 ++ NAME=Ubuntu 00:01:45.926 ++ VERSION_ID=22.04 00:01:45.926 ++ VERSION='22.04.4 LTS (Jammy Jellyfish)' 00:01:45.926 ++ VERSION_CODENAME=jammy 00:01:45.926 ++ ID=ubuntu 00:01:45.926 ++ ID_LIKE=debian 00:01:45.926 ++ HOME_URL=https://www.ubuntu.com/ 00:01:45.926 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:01:45.926 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:01:45.926 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:01:45.926 ++ UBUNTU_CODENAME=jammy 00:01:45.926 + uname -a 00:01:45.926 Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:01:45.926 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:46.185 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:01:46.185 Hugepages 00:01:46.185 node hugesize free / total 00:01:46.185 node0 1048576kB 0 / 0 00:01:46.185 node0 2048kB 0 / 0 00:01:46.185 00:01:46.185 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:46.185 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:46.445 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:46.445 + rm -f /tmp/spdk-ld-path 00:01:46.445 + source autorun-spdk.conf 00:01:46.445 ++ SPDK_TEST_UNITTEST=1 00:01:46.445 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.445 ++ SPDK_TEST_NVME=1 00:01:46.445 ++ SPDK_TEST_BLOCKDEV=1 00:01:46.445 ++ SPDK_RUN_ASAN=1 00:01:46.445 ++ SPDK_RUN_UBSAN=1 00:01:46.445 ++ SPDK_TEST_RAID5=1 00:01:46.445 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:46.445 ++ RUN_NIGHTLY=1 00:01:46.445 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:46.445 + [[ -n '' ]] 00:01:46.445 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:46.445 + for M in /var/spdk/build-*-manifest.txt 00:01:46.445 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:46.445 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:46.445 + for M in /var/spdk/build-*-manifest.txt 00:01:46.445 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:46.445 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:46.445 ++ uname 00:01:46.445 + [[ Linux == \L\i\n\u\x ]] 00:01:46.445 + sudo dmesg -T 00:01:46.445 + sudo dmesg --clear 00:01:46.445 + dmesg_pid=2151 00:01:46.445 + [[ Ubuntu == FreeBSD ]] 00:01:46.445 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:46.445 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:46.445 + sudo 
dmesg -Tw 00:01:46.445 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:46.445 + [[ -x /usr/src/fio-static/fio ]] 00:01:46.445 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:46.445 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:46.445 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:46.445 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:01:46.445 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:46.445 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:46.445 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:46.445 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:46.445 Test configuration: 00:01:46.445 SPDK_TEST_UNITTEST=1 00:01:46.445 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.445 SPDK_TEST_NVME=1 00:01:46.445 SPDK_TEST_BLOCKDEV=1 00:01:46.445 SPDK_RUN_ASAN=1 00:01:46.445 SPDK_RUN_UBSAN=1 00:01:46.445 SPDK_TEST_RAID5=1 00:01:46.445 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:46.445 RUN_NIGHTLY=1 18:28:46 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:46.445 18:28:46 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:46.445 18:28:46 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:46.445 18:28:46 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:46.445 18:28:46 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:46.445 18:28:46 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:46.445 18:28:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:46.445 18:28:46 -- paths/export.sh@5 -- $ export PATH 00:01:46.445 18:28:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:46.445 18:28:46 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:46.445 18:28:46 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:46.705 18:28:46 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721932126.XXXXXX 00:01:46.705 18:28:46 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721932126.KynoIg 00:01:46.705 18:28:46 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:46.705 18:28:46 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:46.705 18:28:46 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:46.705 18:28:46 -- 
common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:46.705 18:28:46 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:46.705 18:28:46 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:46.705 18:28:46 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:46.705 18:28:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:46.705 18:28:46 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:01:46.705 18:28:46 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:46.705 18:28:46 -- pm/common@17 -- $ local monitor 00:01:46.705 18:28:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:46.705 18:28:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:46.705 18:28:46 -- pm/common@25 -- $ sleep 1 00:01:46.705 18:28:46 -- pm/common@21 -- $ date +%s 00:01:46.705 18:28:46 -- pm/common@21 -- $ date +%s 00:01:46.705 18:28:46 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721932126 00:01:46.705 18:28:46 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721932126 00:01:46.705 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721932126_collect-vmstat.pm.log 00:01:46.705 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721932126_collect-cpu-load.pm.log 00:01:47.643 18:28:47 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:47.643 18:28:47 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:47.643 18:28:47 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:47.643 18:28:47 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:47.643 18:28:47 -- spdk/autobuild.sh@16 -- $ date -u 00:01:47.643 Thu Jul 25 18:28:47 UTC 2024 00:01:47.643 18:28:47 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:47.643 v24.09-pre-321-g704257090 00:01:47.643 18:28:47 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:47.643 18:28:47 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:47.643 18:28:47 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:47.643 18:28:47 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:47.644 18:28:47 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.644 ************************************ 00:01:47.644 START TEST asan 00:01:47.644 ************************************ 00:01:47.644 using asan 00:01:47.644 18:28:47 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:01:47.644 00:01:47.644 real 0m0.000s 00:01:47.644 user 0m0.000s 00:01:47.644 sys 0m0.000s 00:01:47.644 18:28:47 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:47.644 18:28:47 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:47.644 ************************************ 00:01:47.644 END TEST asan 00:01:47.644 ************************************ 00:01:47.644 18:28:47 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:47.644 18:28:47 -- spdk/autobuild.sh@24 -- 
$ run_test ubsan echo 'using ubsan' 00:01:47.644 18:28:47 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:47.644 18:28:47 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:47.644 18:28:47 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.644 ************************************ 00:01:47.644 START TEST ubsan 00:01:47.644 ************************************ 00:01:47.644 using ubsan 00:01:47.644 18:28:47 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:47.644 00:01:47.644 real 0m0.000s 00:01:47.644 user 0m0.000s 00:01:47.644 sys 0m0.000s 00:01:47.644 ************************************ 00:01:47.644 END TEST ubsan 00:01:47.644 18:28:47 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:47.644 18:28:47 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:47.644 ************************************ 00:01:47.903 18:28:47 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:47.903 18:28:47 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:47.903 18:28:47 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:47.903 18:28:47 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:47.903 18:28:47 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:47.903 18:28:47 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:01:47.903 18:28:47 -- spdk/autobuild.sh@58 -- $ unittest_build 00:01:47.903 18:28:47 -- common/autobuild_common.sh@423 -- $ run_test unittest_build _unittest_build 00:01:47.903 18:28:47 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:01:47.903 18:28:47 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:47.903 18:28:47 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.903 ************************************ 00:01:47.903 START TEST unittest_build 00:01:47.903 ************************************ 00:01:47.903 18:28:48 unittest_build -- common/autotest_common.sh@1125 -- $ _unittest_build 00:01:47.903 18:28:48 unittest_build -- common/autobuild_common.sh@414 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared 00:01:47.903 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:47.903 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:48.473 Using 'verbs' RDMA provider 00:02:07.505 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:19.713 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:19.972 Creating mk/config.mk...done. 00:02:19.972 Creating mk/cc.flags.mk...done. 00:02:19.972 Type 'make' to build. 00:02:19.972 18:29:20 unittest_build -- common/autobuild_common.sh@415 -- $ make -j10 00:02:20.231 make[1]: Nothing to be done for 'all'. 
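The unittest build that starts here boils down to the configure-and-make sequence below. Flags, paths, and the job count are copied verbatim from the log; this is a condensed sketch of the step, not the autobuild script itself.

  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan \
      --enable-asan --enable-coverage --with-raid5f --without-shared
  make -j10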
00:02:35.108 The Meson build system 00:02:35.108 Version: 1.4.0 00:02:35.108 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:35.108 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:35.108 Build type: native build 00:02:35.108 Program cat found: YES (/usr/bin/cat) 00:02:35.108 Project name: DPDK 00:02:35.108 Project version: 24.03.0 00:02:35.108 C compiler for the host machine: cc (gcc 11.4.0 "cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0") 00:02:35.108 C linker for the host machine: cc ld.bfd 2.38 00:02:35.108 Host machine cpu family: x86_64 00:02:35.108 Host machine cpu: x86_64 00:02:35.108 Message: ## Building in Developer Mode ## 00:02:35.108 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:35.108 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:35.108 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:35.108 Program python3 found: YES (/usr/bin/python3) 00:02:35.108 Program cat found: YES (/usr/bin/cat) 00:02:35.108 Compiler for C supports arguments -march=native: YES 00:02:35.108 Checking for size of "void *" : 8 00:02:35.108 Checking for size of "void *" : 8 (cached) 00:02:35.108 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:35.108 Library m found: YES 00:02:35.108 Library numa found: YES 00:02:35.108 Has header "numaif.h" : YES 00:02:35.108 Library fdt found: NO 00:02:35.108 Library execinfo found: NO 00:02:35.108 Has header "execinfo.h" : YES 00:02:35.108 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2 00:02:35.108 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:35.108 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:35.108 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:35.108 Run-time dependency openssl found: YES 3.0.2 00:02:35.108 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:35.108 Library pcap found: NO 00:02:35.108 Compiler for C supports arguments -Wcast-qual: YES 00:02:35.108 Compiler for C supports arguments -Wdeprecated: YES 00:02:35.108 Compiler for C supports arguments -Wformat: YES 00:02:35.108 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:35.108 Compiler for C supports arguments -Wformat-security: YES 00:02:35.109 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:35.109 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:35.109 Compiler for C supports arguments -Wnested-externs: YES 00:02:35.109 Compiler for C supports arguments -Wold-style-definition: YES 00:02:35.109 Compiler for C supports arguments -Wpointer-arith: YES 00:02:35.109 Compiler for C supports arguments -Wsign-compare: YES 00:02:35.109 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:35.109 Compiler for C supports arguments -Wundef: YES 00:02:35.109 Compiler for C supports arguments -Wwrite-strings: YES 00:02:35.109 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:35.109 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:35.109 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:35.109 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:35.109 Program objdump found: YES (/usr/bin/objdump) 00:02:35.109 Compiler for C supports arguments -mavx512f: YES 00:02:35.109 Checking if "AVX512 checking" compiles: YES 00:02:35.109 Fetching value of define "__SSE4_2__" : 1 00:02:35.109 Fetching value of define "__AES__" : 1 
00:02:35.109 Fetching value of define "__AVX__" : 1 00:02:35.109 Fetching value of define "__AVX2__" : 1 00:02:35.109 Fetching value of define "__AVX512BW__" : 1 00:02:35.109 Fetching value of define "__AVX512CD__" : 1 00:02:35.109 Fetching value of define "__AVX512DQ__" : 1 00:02:35.109 Fetching value of define "__AVX512F__" : 1 00:02:35.109 Fetching value of define "__AVX512VL__" : 1 00:02:35.109 Fetching value of define "__PCLMUL__" : 1 00:02:35.109 Fetching value of define "__RDRND__" : 1 00:02:35.109 Fetching value of define "__RDSEED__" : 1 00:02:35.109 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:35.109 Fetching value of define "__znver1__" : (undefined) 00:02:35.109 Fetching value of define "__znver2__" : (undefined) 00:02:35.109 Fetching value of define "__znver3__" : (undefined) 00:02:35.109 Fetching value of define "__znver4__" : (undefined) 00:02:35.109 Library asan found: YES 00:02:35.109 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:35.109 Message: lib/log: Defining dependency "log" 00:02:35.109 Message: lib/kvargs: Defining dependency "kvargs" 00:02:35.109 Message: lib/telemetry: Defining dependency "telemetry" 00:02:35.109 Library rt found: YES 00:02:35.109 Checking for function "getentropy" : NO 00:02:35.109 Message: lib/eal: Defining dependency "eal" 00:02:35.109 Message: lib/ring: Defining dependency "ring" 00:02:35.109 Message: lib/rcu: Defining dependency "rcu" 00:02:35.109 Message: lib/mempool: Defining dependency "mempool" 00:02:35.109 Message: lib/mbuf: Defining dependency "mbuf" 00:02:35.109 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:35.109 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:35.109 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:35.109 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:35.109 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:35.109 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:35.109 Compiler for C supports arguments -mpclmul: YES 00:02:35.109 Compiler for C supports arguments -maes: YES 00:02:35.109 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:35.109 Compiler for C supports arguments -mavx512bw: YES 00:02:35.109 Compiler for C supports arguments -mavx512dq: YES 00:02:35.109 Compiler for C supports arguments -mavx512vl: YES 00:02:35.109 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:35.109 Compiler for C supports arguments -mavx2: YES 00:02:35.109 Compiler for C supports arguments -mavx: YES 00:02:35.109 Message: lib/net: Defining dependency "net" 00:02:35.109 Message: lib/meter: Defining dependency "meter" 00:02:35.109 Message: lib/ethdev: Defining dependency "ethdev" 00:02:35.109 Message: lib/pci: Defining dependency "pci" 00:02:35.109 Message: lib/cmdline: Defining dependency "cmdline" 00:02:35.109 Message: lib/hash: Defining dependency "hash" 00:02:35.109 Message: lib/timer: Defining dependency "timer" 00:02:35.109 Message: lib/compressdev: Defining dependency "compressdev" 00:02:35.109 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:35.109 Message: lib/dmadev: Defining dependency "dmadev" 00:02:35.109 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:35.109 Message: lib/power: Defining dependency "power" 00:02:35.109 Message: lib/reorder: Defining dependency "reorder" 00:02:35.109 Message: lib/security: Defining dependency "security" 00:02:35.109 Has header "linux/userfaultfd.h" : YES 00:02:35.109 Has header "linux/vduse.h" : YES 00:02:35.109 Message: lib/vhost: 
Defining dependency "vhost" 00:02:35.109 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:35.109 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:35.109 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:35.109 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:35.109 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:35.109 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:35.109 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:35.109 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:35.109 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:35.109 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:35.109 Program doxygen found: YES (/usr/bin/doxygen) 00:02:35.109 Configuring doxy-api-html.conf using configuration 00:02:35.109 Configuring doxy-api-man.conf using configuration 00:02:35.109 Program mandb found: YES (/usr/bin/mandb) 00:02:35.109 Program sphinx-build found: NO 00:02:35.109 Configuring rte_build_config.h using configuration 00:02:35.109 Message: 00:02:35.109 ================= 00:02:35.109 Applications Enabled 00:02:35.109 ================= 00:02:35.109 00:02:35.109 apps: 00:02:35.109 00:02:35.109 00:02:35.109 Message: 00:02:35.109 ================= 00:02:35.109 Libraries Enabled 00:02:35.109 ================= 00:02:35.109 00:02:35.109 libs: 00:02:35.109 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:35.109 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:35.109 cryptodev, dmadev, power, reorder, security, vhost, 00:02:35.109 00:02:35.109 Message: 00:02:35.109 =============== 00:02:35.109 Drivers Enabled 00:02:35.109 =============== 00:02:35.109 00:02:35.109 common: 00:02:35.109 00:02:35.109 bus: 00:02:35.109 pci, vdev, 00:02:35.109 mempool: 00:02:35.109 ring, 00:02:35.109 dma: 00:02:35.109 00:02:35.109 net: 00:02:35.109 00:02:35.109 crypto: 00:02:35.109 00:02:35.109 compress: 00:02:35.109 00:02:35.109 vdpa: 00:02:35.109 00:02:35.109 00:02:35.109 Message: 00:02:35.109 ================= 00:02:35.109 Content Skipped 00:02:35.109 ================= 00:02:35.109 00:02:35.109 apps: 00:02:35.109 dumpcap: explicitly disabled via build config 00:02:35.109 graph: explicitly disabled via build config 00:02:35.109 pdump: explicitly disabled via build config 00:02:35.109 proc-info: explicitly disabled via build config 00:02:35.109 test-acl: explicitly disabled via build config 00:02:35.109 test-bbdev: explicitly disabled via build config 00:02:35.109 test-cmdline: explicitly disabled via build config 00:02:35.109 test-compress-perf: explicitly disabled via build config 00:02:35.109 test-crypto-perf: explicitly disabled via build config 00:02:35.109 test-dma-perf: explicitly disabled via build config 00:02:35.109 test-eventdev: explicitly disabled via build config 00:02:35.109 test-fib: explicitly disabled via build config 00:02:35.109 test-flow-perf: explicitly disabled via build config 00:02:35.109 test-gpudev: explicitly disabled via build config 00:02:35.109 test-mldev: explicitly disabled via build config 00:02:35.109 test-pipeline: explicitly disabled via build config 00:02:35.109 test-pmd: explicitly disabled via build config 00:02:35.109 test-regex: explicitly disabled via build config 00:02:35.109 test-sad: explicitly disabled via build config 00:02:35.109 test-security-perf: explicitly disabled via build config 
00:02:35.109 00:02:35.109 libs: 00:02:35.109 argparse: explicitly disabled via build config 00:02:35.109 metrics: explicitly disabled via build config 00:02:35.109 acl: explicitly disabled via build config 00:02:35.109 bbdev: explicitly disabled via build config 00:02:35.109 bitratestats: explicitly disabled via build config 00:02:35.109 bpf: explicitly disabled via build config 00:02:35.109 cfgfile: explicitly disabled via build config 00:02:35.109 distributor: explicitly disabled via build config 00:02:35.109 efd: explicitly disabled via build config 00:02:35.109 eventdev: explicitly disabled via build config 00:02:35.109 dispatcher: explicitly disabled via build config 00:02:35.109 gpudev: explicitly disabled via build config 00:02:35.109 gro: explicitly disabled via build config 00:02:35.109 gso: explicitly disabled via build config 00:02:35.109 ip_frag: explicitly disabled via build config 00:02:35.109 jobstats: explicitly disabled via build config 00:02:35.109 latencystats: explicitly disabled via build config 00:02:35.109 lpm: explicitly disabled via build config 00:02:35.109 member: explicitly disabled via build config 00:02:35.109 pcapng: explicitly disabled via build config 00:02:35.109 rawdev: explicitly disabled via build config 00:02:35.109 regexdev: explicitly disabled via build config 00:02:35.109 mldev: explicitly disabled via build config 00:02:35.109 rib: explicitly disabled via build config 00:02:35.109 sched: explicitly disabled via build config 00:02:35.109 stack: explicitly disabled via build config 00:02:35.109 ipsec: explicitly disabled via build config 00:02:35.109 pdcp: explicitly disabled via build config 00:02:35.109 fib: explicitly disabled via build config 00:02:35.109 port: explicitly disabled via build config 00:02:35.109 pdump: explicitly disabled via build config 00:02:35.109 table: explicitly disabled via build config 00:02:35.109 pipeline: explicitly disabled via build config 00:02:35.109 graph: explicitly disabled via build config 00:02:35.110 node: explicitly disabled via build config 00:02:35.110 00:02:35.110 drivers: 00:02:35.110 common/cpt: not in enabled drivers build config 00:02:35.110 common/dpaax: not in enabled drivers build config 00:02:35.110 common/iavf: not in enabled drivers build config 00:02:35.110 common/idpf: not in enabled drivers build config 00:02:35.110 common/ionic: not in enabled drivers build config 00:02:35.110 common/mvep: not in enabled drivers build config 00:02:35.110 common/octeontx: not in enabled drivers build config 00:02:35.110 bus/auxiliary: not in enabled drivers build config 00:02:35.110 bus/cdx: not in enabled drivers build config 00:02:35.110 bus/dpaa: not in enabled drivers build config 00:02:35.110 bus/fslmc: not in enabled drivers build config 00:02:35.110 bus/ifpga: not in enabled drivers build config 00:02:35.110 bus/platform: not in enabled drivers build config 00:02:35.110 bus/uacce: not in enabled drivers build config 00:02:35.110 bus/vmbus: not in enabled drivers build config 00:02:35.110 common/cnxk: not in enabled drivers build config 00:02:35.110 common/mlx5: not in enabled drivers build config 00:02:35.110 common/nfp: not in enabled drivers build config 00:02:35.110 common/nitrox: not in enabled drivers build config 00:02:35.110 common/qat: not in enabled drivers build config 00:02:35.110 common/sfc_efx: not in enabled drivers build config 00:02:35.110 mempool/bucket: not in enabled drivers build config 00:02:35.110 mempool/cnxk: not in enabled drivers build config 00:02:35.110 mempool/dpaa: not in 
enabled drivers build config 00:02:35.110 mempool/dpaa2: not in enabled drivers build config 00:02:35.110 mempool/octeontx: not in enabled drivers build config 00:02:35.110 mempool/stack: not in enabled drivers build config 00:02:35.110 dma/cnxk: not in enabled drivers build config 00:02:35.110 dma/dpaa: not in enabled drivers build config 00:02:35.110 dma/dpaa2: not in enabled drivers build config 00:02:35.110 dma/hisilicon: not in enabled drivers build config 00:02:35.110 dma/idxd: not in enabled drivers build config 00:02:35.110 dma/ioat: not in enabled drivers build config 00:02:35.110 dma/skeleton: not in enabled drivers build config 00:02:35.110 net/af_packet: not in enabled drivers build config 00:02:35.110 net/af_xdp: not in enabled drivers build config 00:02:35.110 net/ark: not in enabled drivers build config 00:02:35.110 net/atlantic: not in enabled drivers build config 00:02:35.110 net/avp: not in enabled drivers build config 00:02:35.110 net/axgbe: not in enabled drivers build config 00:02:35.110 net/bnx2x: not in enabled drivers build config 00:02:35.110 net/bnxt: not in enabled drivers build config 00:02:35.110 net/bonding: not in enabled drivers build config 00:02:35.110 net/cnxk: not in enabled drivers build config 00:02:35.110 net/cpfl: not in enabled drivers build config 00:02:35.110 net/cxgbe: not in enabled drivers build config 00:02:35.110 net/dpaa: not in enabled drivers build config 00:02:35.110 net/dpaa2: not in enabled drivers build config 00:02:35.110 net/e1000: not in enabled drivers build config 00:02:35.110 net/ena: not in enabled drivers build config 00:02:35.110 net/enetc: not in enabled drivers build config 00:02:35.110 net/enetfec: not in enabled drivers build config 00:02:35.110 net/enic: not in enabled drivers build config 00:02:35.110 net/failsafe: not in enabled drivers build config 00:02:35.110 net/fm10k: not in enabled drivers build config 00:02:35.110 net/gve: not in enabled drivers build config 00:02:35.110 net/hinic: not in enabled drivers build config 00:02:35.110 net/hns3: not in enabled drivers build config 00:02:35.110 net/i40e: not in enabled drivers build config 00:02:35.110 net/iavf: not in enabled drivers build config 00:02:35.110 net/ice: not in enabled drivers build config 00:02:35.110 net/idpf: not in enabled drivers build config 00:02:35.110 net/igc: not in enabled drivers build config 00:02:35.110 net/ionic: not in enabled drivers build config 00:02:35.110 net/ipn3ke: not in enabled drivers build config 00:02:35.110 net/ixgbe: not in enabled drivers build config 00:02:35.110 net/mana: not in enabled drivers build config 00:02:35.110 net/memif: not in enabled drivers build config 00:02:35.110 net/mlx4: not in enabled drivers build config 00:02:35.110 net/mlx5: not in enabled drivers build config 00:02:35.110 net/mvneta: not in enabled drivers build config 00:02:35.110 net/mvpp2: not in enabled drivers build config 00:02:35.110 net/netvsc: not in enabled drivers build config 00:02:35.110 net/nfb: not in enabled drivers build config 00:02:35.110 net/nfp: not in enabled drivers build config 00:02:35.110 net/ngbe: not in enabled drivers build config 00:02:35.110 net/null: not in enabled drivers build config 00:02:35.110 net/octeontx: not in enabled drivers build config 00:02:35.110 net/octeon_ep: not in enabled drivers build config 00:02:35.110 net/pcap: not in enabled drivers build config 00:02:35.110 net/pfe: not in enabled drivers build config 00:02:35.110 net/qede: not in enabled drivers build config 00:02:35.110 net/ring: not in 
enabled drivers build config 00:02:35.110 net/sfc: not in enabled drivers build config 00:02:35.110 net/softnic: not in enabled drivers build config 00:02:35.110 net/tap: not in enabled drivers build config 00:02:35.110 net/thunderx: not in enabled drivers build config 00:02:35.110 net/txgbe: not in enabled drivers build config 00:02:35.110 net/vdev_netvsc: not in enabled drivers build config 00:02:35.110 net/vhost: not in enabled drivers build config 00:02:35.110 net/virtio: not in enabled drivers build config 00:02:35.110 net/vmxnet3: not in enabled drivers build config 00:02:35.110 raw/*: missing internal dependency, "rawdev" 00:02:35.110 crypto/armv8: not in enabled drivers build config 00:02:35.110 crypto/bcmfs: not in enabled drivers build config 00:02:35.110 crypto/caam_jr: not in enabled drivers build config 00:02:35.110 crypto/ccp: not in enabled drivers build config 00:02:35.110 crypto/cnxk: not in enabled drivers build config 00:02:35.110 crypto/dpaa_sec: not in enabled drivers build config 00:02:35.110 crypto/dpaa2_sec: not in enabled drivers build config 00:02:35.110 crypto/ipsec_mb: not in enabled drivers build config 00:02:35.110 crypto/mlx5: not in enabled drivers build config 00:02:35.110 crypto/mvsam: not in enabled drivers build config 00:02:35.110 crypto/nitrox: not in enabled drivers build config 00:02:35.110 crypto/null: not in enabled drivers build config 00:02:35.110 crypto/octeontx: not in enabled drivers build config 00:02:35.110 crypto/openssl: not in enabled drivers build config 00:02:35.110 crypto/scheduler: not in enabled drivers build config 00:02:35.110 crypto/uadk: not in enabled drivers build config 00:02:35.110 crypto/virtio: not in enabled drivers build config 00:02:35.110 compress/isal: not in enabled drivers build config 00:02:35.110 compress/mlx5: not in enabled drivers build config 00:02:35.110 compress/nitrox: not in enabled drivers build config 00:02:35.110 compress/octeontx: not in enabled drivers build config 00:02:35.110 compress/zlib: not in enabled drivers build config 00:02:35.110 regex/*: missing internal dependency, "regexdev" 00:02:35.110 ml/*: missing internal dependency, "mldev" 00:02:35.110 vdpa/ifc: not in enabled drivers build config 00:02:35.110 vdpa/mlx5: not in enabled drivers build config 00:02:35.110 vdpa/nfp: not in enabled drivers build config 00:02:35.110 vdpa/sfc: not in enabled drivers build config 00:02:35.110 event/*: missing internal dependency, "eventdev" 00:02:35.110 baseband/*: missing internal dependency, "bbdev" 00:02:35.110 gpu/*: missing internal dependency, "gpudev" 00:02:35.110 00:02:35.110 00:02:35.110 Build targets in project: 85 00:02:35.110 00:02:35.110 DPDK 24.03.0 00:02:35.110 00:02:35.110 User defined options 00:02:35.110 buildtype : debug 00:02:35.110 default_library : static 00:02:35.110 libdir : lib 00:02:35.110 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:35.110 b_sanitize : address 00:02:35.110 c_args : -Wno-stringop-overflow -fcommon -fPIC -Werror 00:02:35.110 c_link_args : 00:02:35.110 cpu_instruction_set: native 00:02:35.110 disable_apps : test-eventdev,test-compress-perf,pdump,test-crypto-perf,test-pmd,test-flow-perf,test-acl,test-sad,graph,proc-info,test-bbdev,test-mldev,test-gpudev,test-fib,test-cmdline,test-security-perf,dumpcap,test-pipeline,test,test-regex,test-dma-perf 00:02:35.110 disable_libs : 
node,lpm,acl,pdump,cfgfile,efd,latencystats,distributor,bbdev,eventdev,port,bitratestats,pdcp,bpf,argparse,graph,member,mldev,stack,pcapng,gro,fib,table,regexdev,dispatcher,sched,ipsec,metrics,gso,jobstats,pipeline,rib,ip_frag,rawdev,gpudev 00:02:35.110 enable_docs : false 00:02:35.110 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:35.110 enable_kmods : false 00:02:35.110 max_lcores : 128 00:02:35.110 tests : false 00:02:35.110 00:02:35.110 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:35.110 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:35.110 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:35.110 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:35.110 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:35.110 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:35.110 [5/268] Linking static target lib/librte_log.a 00:02:35.110 [6/268] Linking static target lib/librte_kvargs.a 00:02:35.110 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:35.110 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:35.110 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:35.110 [10/268] Linking static target lib/librte_telemetry.a 00:02:35.110 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:35.110 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:35.110 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:35.110 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:35.110 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:35.110 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:35.110 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:35.110 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:35.110 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:35.110 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:35.110 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:35.111 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:35.111 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:35.111 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:35.111 [25/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.111 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:35.111 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:35.111 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:35.111 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:35.111 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:35.111 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:35.111 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:35.111 [33/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:35.111 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:35.111 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:35.111 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:35.111 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:35.111 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:35.111 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:35.111 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:35.111 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:35.111 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:35.111 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:35.111 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:35.111 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:35.111 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:35.111 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:35.111 [48/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.111 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:35.111 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:35.370 [51/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.370 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:35.370 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:35.370 [54/268] Linking target lib/librte_log.so.24.1 00:02:35.370 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:35.629 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:35.629 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:35.629 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:35.629 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:35.629 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:35.629 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:35.629 [62/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:35.629 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:35.629 [64/268] Linking target lib/librte_kvargs.so.24.1 00:02:35.888 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:35.888 [66/268] Linking target lib/librte_telemetry.so.24.1 00:02:35.888 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:35.888 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:35.888 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:35.888 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:35.888 [71/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:35.888 [72/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:35.888 [73/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:36.146 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:36.146 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:36.146 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:36.146 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:36.147 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:36.147 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:36.147 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:36.405 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:36.405 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:36.405 [83/268] Linking static target lib/librte_ring.a 00:02:36.405 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:36.405 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:36.405 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:36.405 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:36.664 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:36.664 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:36.664 [90/268] Linking static target lib/librte_eal.a 00:02:36.664 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:36.664 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:36.664 [93/268] Linking static target lib/librte_mempool.a 00:02:36.923 [94/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:36.923 [95/268] Linking static target lib/librte_rcu.a 00:02:36.923 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:36.923 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:36.923 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:36.923 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:37.183 [100/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:37.183 [101/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:37.183 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:37.183 [103/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.183 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:37.441 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:37.441 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:37.441 [107/268] Linking static target lib/librte_mbuf.a 00:02:37.442 [108/268] Linking static target lib/librte_net.a 00:02:37.442 [109/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:37.442 [110/268] Linking static target lib/librte_meter.a 00:02:37.442 [111/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.442 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:37.700 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:37.700 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:37.958 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:37.958 [116/268] 
Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.958 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:37.958 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:38.216 [119/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.216 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:38.475 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:38.475 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:38.733 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:38.733 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:38.733 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:38.733 [126/268] Linking static target lib/librte_pci.a 00:02:38.991 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:38.991 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:38.991 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:38.991 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:38.991 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:38.991 [132/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.991 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:38.991 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:38.991 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.991 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:39.249 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:39.249 [138/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.249 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:39.249 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:39.249 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:39.249 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:39.249 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:39.249 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:39.249 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:39.249 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:39.249 [147/268] Linking static target lib/librte_cmdline.a 00:02:39.249 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:39.506 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:39.506 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:39.506 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:39.506 [152/268] Linking static target lib/librte_timer.a 00:02:39.506 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:39.506 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:39.506 [155/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:39.765 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:39.765 [157/268] Linking static target lib/librte_ethdev.a 00:02:39.765 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:39.765 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:39.765 [160/268] Linking static target lib/librte_compressdev.a 00:02:40.023 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:40.023 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.023 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:40.023 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:40.023 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:40.023 [166/268] Linking static target lib/librte_dmadev.a 00:02:40.023 [167/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:40.023 [168/268] Linking static target lib/librte_hash.a 00:02:40.023 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:40.282 [170/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:40.282 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:40.282 [172/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:40.282 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:40.282 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.541 [175/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.541 [176/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.541 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:40.541 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:40.541 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:40.541 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:40.541 [181/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.799 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:40.799 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:40.799 [184/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:40.799 [185/268] Linking static target lib/librte_power.a 00:02:40.799 [186/268] Linking static target lib/librte_cryptodev.a 00:02:41.058 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:41.058 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:41.058 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:41.058 [190/268] Linking static target lib/librte_reorder.a 00:02:41.058 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:41.058 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:41.058 [193/268] Linking static target lib/librte_security.a 00:02:41.316 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.317 [195/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:41.575 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.575 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.575 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:41.575 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:41.575 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:41.575 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:41.855 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:41.855 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:41.855 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:41.855 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:41.855 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:42.113 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:42.113 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:42.113 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:42.113 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:42.371 [211/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:42.371 [212/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:42.371 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:42.371 [214/268] Linking static target drivers/librte_bus_vdev.a 00:02:42.371 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:42.371 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:42.371 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:42.371 [218/268] Linking static target drivers/librte_bus_pci.a 00:02:42.371 [219/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.371 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:42.371 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:42.630 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.630 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:42.630 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:42.630 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:42.630 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:42.889 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.792 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:47.324 [229/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.324 [230/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.583 [231/268] Linking target lib/librte_eal.so.24.1 00:02:47.583 [232/268] Generating 
symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:47.842 [233/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:47.842 [234/268] Linking target lib/librte_dmadev.so.24.1 00:02:47.842 [235/268] Linking target lib/librte_meter.so.24.1 00:02:47.842 [236/268] Linking target lib/librte_ring.so.24.1 00:02:47.842 [237/268] Linking target lib/librte_pci.so.24.1 00:02:47.842 [238/268] Linking target lib/librte_timer.so.24.1 00:02:47.842 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:47.842 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:47.842 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:47.842 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:47.842 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:47.842 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:47.842 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:47.842 [246/268] Linking target lib/librte_mempool.so.24.1 00:02:48.101 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:48.101 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:48.101 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:48.101 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:48.359 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:48.359 [252/268] Linking target lib/librte_compressdev.so.24.1 00:02:48.359 [253/268] Linking target lib/librte_net.so.24.1 00:02:48.359 [254/268] Linking target lib/librte_reorder.so.24.1 00:02:48.359 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:48.359 [256/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:48.359 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:48.618 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:48.618 [259/268] Linking static target lib/librte_vhost.a 00:02:48.618 [260/268] Linking target lib/librte_cmdline.so.24.1 00:02:48.618 [261/268] Linking target lib/librte_security.so.24.1 00:02:48.618 [262/268] Linking target lib/librte_hash.so.24.1 00:02:48.618 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:48.618 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:48.876 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:48.876 [266/268] Linking target lib/librte_power.so.24.1 00:02:50.779 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.037 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:51.037 INFO: autodetecting backend as ninja 00:02:51.037 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:51.973 CC lib/ut_mock/mock.o 00:02:52.232 CC lib/ut/ut.o 00:02:52.232 CC lib/log/log.o 00:02:52.232 CC lib/log/log_flags.o 00:02:52.232 CC lib/log/log_deprecated.o 00:02:52.232 LIB libspdk_ut.a 00:02:52.232 LIB libspdk_ut_mock.a 00:02:52.232 LIB libspdk_log.a 00:02:52.490 CC lib/ioat/ioat.o 00:02:52.490 CC lib/dma/dma.o 00:02:52.490 CXX lib/trace_parser/trace.o 00:02:52.490 CC lib/util/base64.o 00:02:52.490 CC lib/util/bit_array.o 00:02:52.490 CC 
lib/util/cpuset.o 00:02:52.490 CC lib/util/crc16.o 00:02:52.490 CC lib/util/crc32.o 00:02:52.490 CC lib/util/crc32c.o 00:02:52.749 CC lib/vfio_user/host/vfio_user_pci.o 00:02:52.749 CC lib/vfio_user/host/vfio_user.o 00:02:52.749 CC lib/util/crc32_ieee.o 00:02:52.749 CC lib/util/crc64.o 00:02:52.749 CC lib/util/dif.o 00:02:52.749 CC lib/util/fd.o 00:02:52.749 LIB libspdk_dma.a 00:02:53.017 CC lib/util/file.o 00:02:53.017 CC lib/util/fd_group.o 00:02:53.017 CC lib/util/hexlify.o 00:02:53.017 CC lib/util/iov.o 00:02:53.017 LIB libspdk_ioat.a 00:02:53.017 CC lib/util/math.o 00:02:53.017 CC lib/util/net.o 00:02:53.017 CC lib/util/pipe.o 00:02:53.017 LIB libspdk_vfio_user.a 00:02:53.017 CC lib/util/strerror_tls.o 00:02:53.017 CC lib/util/string.o 00:02:53.017 CC lib/util/uuid.o 00:02:53.017 CC lib/util/xor.o 00:02:53.017 CC lib/util/zipf.o 00:02:53.281 LIB libspdk_util.a 00:02:53.902 CC lib/rdma_provider/common.o 00:02:53.902 CC lib/env_dpdk/env.o 00:02:53.902 CC lib/env_dpdk/memory.o 00:02:53.902 CC lib/idxd/idxd.o 00:02:53.902 CC lib/json/json_parse.o 00:02:53.902 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:53.902 CC lib/vmd/vmd.o 00:02:53.902 CC lib/conf/conf.o 00:02:53.902 LIB libspdk_trace_parser.a 00:02:53.902 CC lib/rdma_utils/rdma_utils.o 00:02:53.902 CC lib/vmd/led.o 00:02:53.902 CC lib/env_dpdk/pci.o 00:02:53.902 LIB libspdk_rdma_provider.a 00:02:53.902 CC lib/json/json_util.o 00:02:53.902 LIB libspdk_conf.a 00:02:53.902 CC lib/idxd/idxd_user.o 00:02:53.902 CC lib/env_dpdk/init.o 00:02:53.902 LIB libspdk_rdma_utils.a 00:02:53.902 CC lib/env_dpdk/threads.o 00:02:54.162 CC lib/env_dpdk/pci_ioat.o 00:02:54.162 CC lib/json/json_write.o 00:02:54.162 CC lib/env_dpdk/pci_virtio.o 00:02:54.162 CC lib/env_dpdk/pci_vmd.o 00:02:54.162 CC lib/env_dpdk/pci_idxd.o 00:02:54.162 CC lib/env_dpdk/pci_event.o 00:02:54.162 CC lib/env_dpdk/sigbus_handler.o 00:02:54.162 LIB libspdk_idxd.a 00:02:54.162 CC lib/env_dpdk/pci_dpdk.o 00:02:54.162 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:54.421 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:54.421 LIB libspdk_vmd.a 00:02:54.421 LIB libspdk_json.a 00:02:54.680 CC lib/jsonrpc/jsonrpc_server.o 00:02:54.680 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:54.680 CC lib/jsonrpc/jsonrpc_client.o 00:02:54.680 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:54.940 LIB libspdk_jsonrpc.a 00:02:54.940 LIB libspdk_env_dpdk.a 00:02:55.199 CC lib/rpc/rpc.o 00:02:55.458 LIB libspdk_rpc.a 00:02:55.718 CC lib/trace/trace_flags.o 00:02:55.718 CC lib/trace/trace.o 00:02:55.718 CC lib/trace/trace_rpc.o 00:02:55.718 CC lib/notify/notify.o 00:02:55.718 CC lib/notify/notify_rpc.o 00:02:55.718 CC lib/keyring/keyring.o 00:02:55.718 CC lib/keyring/keyring_rpc.o 00:02:55.976 LIB libspdk_notify.a 00:02:55.976 LIB libspdk_keyring.a 00:02:55.976 LIB libspdk_trace.a 00:02:56.543 CC lib/thread/thread.o 00:02:56.543 CC lib/thread/iobuf.o 00:02:56.543 CC lib/sock/sock_rpc.o 00:02:56.543 CC lib/sock/sock.o 00:02:57.110 LIB libspdk_sock.a 00:02:57.368 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:57.368 CC lib/nvme/nvme_ctrlr.o 00:02:57.368 CC lib/nvme/nvme_fabric.o 00:02:57.368 CC lib/nvme/nvme_ns_cmd.o 00:02:57.368 CC lib/nvme/nvme_pcie_common.o 00:02:57.368 CC lib/nvme/nvme_pcie.o 00:02:57.368 CC lib/nvme/nvme_ns.o 00:02:57.368 CC lib/nvme/nvme_qpair.o 00:02:57.368 CC lib/nvme/nvme.o 00:02:57.934 CC lib/nvme/nvme_quirks.o 00:02:57.934 CC lib/nvme/nvme_transport.o 00:02:57.934 CC lib/nvme/nvme_discovery.o 00:02:57.934 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:58.192 LIB libspdk_thread.a 00:02:58.192 CC 
lib/nvme/nvme_ns_ocssd_cmd.o 00:02:58.192 CC lib/nvme/nvme_tcp.o 00:02:58.192 CC lib/nvme/nvme_opal.o 00:02:58.451 CC lib/accel/accel.o 00:02:58.451 CC lib/blob/blobstore.o 00:02:58.451 CC lib/blob/request.o 00:02:58.451 CC lib/init/json_config.o 00:02:58.451 CC lib/init/subsystem.o 00:02:58.451 CC lib/init/subsystem_rpc.o 00:02:58.709 CC lib/init/rpc.o 00:02:58.709 CC lib/virtio/virtio.o 00:02:58.709 CC lib/virtio/virtio_vhost_user.o 00:02:58.709 CC lib/virtio/virtio_vfio_user.o 00:02:58.709 CC lib/virtio/virtio_pci.o 00:02:58.709 CC lib/blob/zeroes.o 00:02:58.709 LIB libspdk_init.a 00:02:58.967 CC lib/blob/blob_bs_dev.o 00:02:58.967 CC lib/accel/accel_rpc.o 00:02:58.967 CC lib/accel/accel_sw.o 00:02:58.967 CC lib/nvme/nvme_io_msg.o 00:02:58.967 LIB libspdk_virtio.a 00:02:58.967 CC lib/nvme/nvme_poll_group.o 00:02:58.967 CC lib/event/app.o 00:02:59.225 CC lib/event/reactor.o 00:02:59.225 CC lib/event/log_rpc.o 00:02:59.225 CC lib/event/app_rpc.o 00:02:59.225 CC lib/event/scheduler_static.o 00:02:59.483 LIB libspdk_accel.a 00:02:59.483 CC lib/nvme/nvme_zns.o 00:02:59.483 CC lib/nvme/nvme_stubs.o 00:02:59.483 CC lib/nvme/nvme_auth.o 00:02:59.483 CC lib/nvme/nvme_cuse.o 00:02:59.483 CC lib/nvme/nvme_rdma.o 00:02:59.483 LIB libspdk_event.a 00:02:59.741 CC lib/bdev/bdev.o 00:02:59.741 CC lib/bdev/bdev_rpc.o 00:02:59.741 CC lib/bdev/part.o 00:02:59.741 CC lib/bdev/bdev_zone.o 00:02:59.741 CC lib/bdev/scsi_nvme.o 00:03:00.676 LIB libspdk_nvme.a 00:03:01.612 LIB libspdk_blob.a 00:03:01.871 CC lib/blobfs/blobfs.o 00:03:01.871 CC lib/blobfs/tree.o 00:03:02.130 CC lib/lvol/lvol.o 00:03:02.130 LIB libspdk_bdev.a 00:03:02.389 CC lib/nvmf/ctrlr.o 00:03:02.389 CC lib/nvmf/ctrlr_bdev.o 00:03:02.389 CC lib/scsi/lun.o 00:03:02.389 CC lib/scsi/dev.o 00:03:02.389 CC lib/nvmf/ctrlr_discovery.o 00:03:02.389 CC lib/scsi/port.o 00:03:02.389 CC lib/nbd/nbd.o 00:03:02.389 CC lib/ftl/ftl_core.o 00:03:02.647 CC lib/ftl/ftl_init.o 00:03:02.647 CC lib/ftl/ftl_layout.o 00:03:02.647 CC lib/scsi/scsi.o 00:03:02.906 LIB libspdk_lvol.a 00:03:02.906 LIB libspdk_blobfs.a 00:03:02.906 CC lib/nbd/nbd_rpc.o 00:03:02.906 CC lib/scsi/scsi_bdev.o 00:03:02.906 CC lib/nvmf/subsystem.o 00:03:02.906 CC lib/nvmf/nvmf.o 00:03:02.906 CC lib/scsi/scsi_pr.o 00:03:02.906 CC lib/ftl/ftl_debug.o 00:03:02.906 CC lib/ftl/ftl_io.o 00:03:03.163 CC lib/nvmf/nvmf_rpc.o 00:03:03.163 LIB libspdk_nbd.a 00:03:03.163 CC lib/nvmf/transport.o 00:03:03.163 CC lib/scsi/scsi_rpc.o 00:03:03.163 CC lib/scsi/task.o 00:03:03.163 CC lib/nvmf/tcp.o 00:03:03.163 CC lib/ftl/ftl_sb.o 00:03:03.420 CC lib/nvmf/stubs.o 00:03:03.420 CC lib/nvmf/mdns_server.o 00:03:03.420 LIB libspdk_scsi.a 00:03:03.420 CC lib/ftl/ftl_l2p.o 00:03:03.420 CC lib/nvmf/rdma.o 00:03:03.678 CC lib/ftl/ftl_l2p_flat.o 00:03:03.678 CC lib/nvmf/auth.o 00:03:03.678 CC lib/ftl/ftl_nv_cache.o 00:03:03.678 CC lib/ftl/ftl_band.o 00:03:03.678 CC lib/ftl/ftl_band_ops.o 00:03:03.935 CC lib/vhost/vhost.o 00:03:03.935 CC lib/iscsi/conn.o 00:03:04.201 CC lib/ftl/ftl_writer.o 00:03:04.201 CC lib/ftl/ftl_rq.o 00:03:04.201 CC lib/ftl/ftl_reloc.o 00:03:04.201 CC lib/ftl/ftl_l2p_cache.o 00:03:04.201 CC lib/iscsi/init_grp.o 00:03:04.201 CC lib/iscsi/iscsi.o 00:03:04.475 CC lib/ftl/ftl_p2l.o 00:03:04.475 CC lib/vhost/vhost_rpc.o 00:03:04.475 CC lib/ftl/mngt/ftl_mngt.o 00:03:04.475 CC lib/iscsi/md5.o 00:03:04.475 CC lib/iscsi/param.o 00:03:04.734 CC lib/iscsi/portal_grp.o 00:03:04.734 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:04.734 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:04.734 CC lib/iscsi/tgt_node.o 00:03:04.734 CC 
lib/iscsi/iscsi_subsystem.o 00:03:04.734 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:04.993 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:04.993 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:04.993 CC lib/iscsi/iscsi_rpc.o 00:03:04.993 CC lib/vhost/vhost_scsi.o 00:03:04.993 CC lib/iscsi/task.o 00:03:04.993 CC lib/vhost/vhost_blk.o 00:03:04.993 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:04.993 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:05.251 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:05.251 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:05.251 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:05.251 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:05.251 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:05.251 CC lib/vhost/rte_vhost_user.o 00:03:05.251 CC lib/ftl/utils/ftl_conf.o 00:03:05.251 CC lib/ftl/utils/ftl_md.o 00:03:05.510 CC lib/ftl/utils/ftl_mempool.o 00:03:05.510 CC lib/ftl/utils/ftl_bitmap.o 00:03:05.510 CC lib/ftl/utils/ftl_property.o 00:03:05.510 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:05.510 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:05.510 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:05.768 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:05.768 LIB libspdk_iscsi.a 00:03:05.768 LIB libspdk_nvmf.a 00:03:05.768 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:05.768 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:05.768 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:05.768 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:05.768 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:05.768 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:05.768 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:05.768 CC lib/ftl/base/ftl_base_dev.o 00:03:05.768 CC lib/ftl/base/ftl_base_bdev.o 00:03:06.026 CC lib/ftl/ftl_trace.o 00:03:06.027 LIB libspdk_vhost.a 00:03:06.027 LIB libspdk_ftl.a 00:03:06.594 CC module/env_dpdk/env_dpdk_rpc.o 00:03:06.594 CC module/sock/posix/posix.o 00:03:06.594 CC module/keyring/file/keyring.o 00:03:06.594 CC module/accel/dsa/accel_dsa.o 00:03:06.594 CC module/accel/error/accel_error.o 00:03:06.594 CC module/accel/iaa/accel_iaa.o 00:03:06.594 CC module/keyring/linux/keyring.o 00:03:06.594 CC module/blob/bdev/blob_bdev.o 00:03:06.852 CC module/accel/ioat/accel_ioat.o 00:03:06.852 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:06.852 LIB libspdk_env_dpdk_rpc.a 00:03:06.852 CC module/keyring/linux/keyring_rpc.o 00:03:06.852 CC module/keyring/file/keyring_rpc.o 00:03:06.852 CC module/accel/error/accel_error_rpc.o 00:03:06.852 LIB libspdk_scheduler_dynamic.a 00:03:06.852 CC module/accel/iaa/accel_iaa_rpc.o 00:03:06.852 LIB libspdk_keyring_linux.a 00:03:06.852 CC module/accel/dsa/accel_dsa_rpc.o 00:03:06.852 LIB libspdk_keyring_file.a 00:03:06.852 CC module/accel/ioat/accel_ioat_rpc.o 00:03:06.852 LIB libspdk_blob_bdev.a 00:03:07.110 LIB libspdk_accel_error.a 00:03:07.110 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:07.110 LIB libspdk_accel_iaa.a 00:03:07.110 LIB libspdk_accel_dsa.a 00:03:07.110 LIB libspdk_accel_ioat.a 00:03:07.110 CC module/scheduler/gscheduler/gscheduler.o 00:03:07.110 LIB libspdk_scheduler_dpdk_governor.a 00:03:07.110 CC module/blobfs/bdev/blobfs_bdev.o 00:03:07.110 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:07.110 CC module/bdev/delay/vbdev_delay.o 00:03:07.110 CC module/bdev/gpt/gpt.o 00:03:07.110 CC module/bdev/error/vbdev_error.o 00:03:07.369 CC module/bdev/lvol/vbdev_lvol.o 00:03:07.369 CC module/bdev/malloc/bdev_malloc.o 00:03:07.369 CC module/bdev/null/bdev_null.o 00:03:07.369 LIB libspdk_scheduler_gscheduler.a 00:03:07.369 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:07.369 CC module/bdev/gpt/vbdev_gpt.o 00:03:07.369 LIB libspdk_blobfs_bdev.a 
00:03:07.369 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:07.369 CC module/bdev/null/bdev_null_rpc.o 00:03:07.369 LIB libspdk_sock_posix.a 00:03:07.369 CC module/bdev/error/vbdev_error_rpc.o 00:03:07.369 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:07.627 LIB libspdk_bdev_delay.a 00:03:07.627 LIB libspdk_bdev_null.a 00:03:07.627 LIB libspdk_bdev_error.a 00:03:07.627 LIB libspdk_bdev_malloc.a 00:03:07.627 LIB libspdk_bdev_gpt.a 00:03:07.627 CC module/bdev/passthru/vbdev_passthru.o 00:03:07.627 CC module/bdev/nvme/bdev_nvme.o 00:03:07.627 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:07.627 CC module/bdev/nvme/nvme_rpc.o 00:03:07.627 CC module/bdev/raid/bdev_raid.o 00:03:07.627 CC module/bdev/raid/bdev_raid_rpc.o 00:03:07.627 CC module/bdev/raid/bdev_raid_sb.o 00:03:07.885 CC module/bdev/split/vbdev_split.o 00:03:07.885 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:07.885 LIB libspdk_bdev_lvol.a 00:03:07.885 CC module/bdev/split/vbdev_split_rpc.o 00:03:07.885 CC module/bdev/nvme/bdev_mdns_client.o 00:03:07.885 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:07.885 CC module/bdev/raid/raid0.o 00:03:07.885 CC module/bdev/raid/raid1.o 00:03:08.143 CC module/bdev/raid/concat.o 00:03:08.143 LIB libspdk_bdev_split.a 00:03:08.143 LIB libspdk_bdev_passthru.a 00:03:08.143 CC module/bdev/aio/bdev_aio.o 00:03:08.143 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:08.143 CC module/bdev/aio/bdev_aio_rpc.o 00:03:08.143 CC module/bdev/raid/raid5f.o 00:03:08.143 CC module/bdev/nvme/vbdev_opal.o 00:03:08.143 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:08.143 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:08.143 LIB libspdk_bdev_zone_block.a 00:03:08.402 CC module/bdev/iscsi/bdev_iscsi.o 00:03:08.402 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:08.402 LIB libspdk_bdev_aio.a 00:03:08.402 CC module/bdev/ftl/bdev_ftl.o 00:03:08.402 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:08.402 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:08.402 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:08.402 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:08.661 LIB libspdk_bdev_raid.a 00:03:08.661 LIB libspdk_bdev_ftl.a 00:03:08.661 LIB libspdk_bdev_iscsi.a 00:03:08.920 LIB libspdk_bdev_virtio.a 00:03:09.857 LIB libspdk_bdev_nvme.a 00:03:10.425 CC module/event/subsystems/iobuf/iobuf.o 00:03:10.425 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:10.425 CC module/event/subsystems/vmd/vmd.o 00:03:10.426 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:10.426 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:10.426 CC module/event/subsystems/sock/sock.o 00:03:10.426 CC module/event/subsystems/keyring/keyring.o 00:03:10.426 CC module/event/subsystems/scheduler/scheduler.o 00:03:10.426 LIB libspdk_event_keyring.a 00:03:10.426 LIB libspdk_event_scheduler.a 00:03:10.426 LIB libspdk_event_vmd.a 00:03:10.426 LIB libspdk_event_vhost_blk.a 00:03:10.426 LIB libspdk_event_sock.a 00:03:10.426 LIB libspdk_event_iobuf.a 00:03:10.684 CC module/event/subsystems/accel/accel.o 00:03:10.943 LIB libspdk_event_accel.a 00:03:11.202 CC module/event/subsystems/bdev/bdev.o 00:03:11.461 LIB libspdk_event_bdev.a 00:03:11.720 CC module/event/subsystems/nbd/nbd.o 00:03:11.720 CC module/event/subsystems/scsi/scsi.o 00:03:11.720 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:11.720 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:11.979 LIB libspdk_event_nbd.a 00:03:11.979 LIB libspdk_event_scsi.a 00:03:11.979 LIB libspdk_event_nvmf.a 00:03:12.239 CC module/event/subsystems/iscsi/iscsi.o 00:03:12.239 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 
00:03:12.498 LIB libspdk_event_vhost_scsi.a 00:03:12.498 LIB libspdk_event_iscsi.a 00:03:12.758 CC app/trace_record/trace_record.o 00:03:12.758 CXX app/trace/trace.o 00:03:12.758 CC app/nvmf_tgt/nvmf_main.o 00:03:12.758 CC app/iscsi_tgt/iscsi_tgt.o 00:03:12.758 CC app/spdk_tgt/spdk_tgt.o 00:03:12.758 CC examples/util/zipf/zipf.o 00:03:12.758 CC examples/ioat/perf/perf.o 00:03:13.016 CC test/thread/poller_perf/poller_perf.o 00:03:13.016 CC test/dma/test_dma/test_dma.o 00:03:13.016 CC test/app/bdev_svc/bdev_svc.o 00:03:13.016 LINK nvmf_tgt 00:03:13.016 LINK iscsi_tgt 00:03:13.016 LINK spdk_tgt 00:03:13.016 LINK zipf 00:03:13.016 LINK poller_perf 00:03:13.016 LINK spdk_trace_record 00:03:13.016 LINK ioat_perf 00:03:13.274 LINK bdev_svc 00:03:13.274 LINK spdk_trace 00:03:13.274 LINK test_dma 00:03:13.533 CC test/app/histogram_perf/histogram_perf.o 00:03:13.533 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:13.791 CC test/app/jsoncat/jsoncat.o 00:03:13.791 LINK histogram_perf 00:03:13.791 LINK jsoncat 00:03:13.791 CC examples/ioat/verify/verify.o 00:03:14.050 CC test/thread/lock/spdk_lock.o 00:03:14.050 LINK nvme_fuzz 00:03:14.050 LINK verify 00:03:14.309 CC test/app/stub/stub.o 00:03:14.309 TEST_HEADER include/spdk/accel.h 00:03:14.309 TEST_HEADER include/spdk/accel_module.h 00:03:14.309 TEST_HEADER include/spdk/assert.h 00:03:14.309 TEST_HEADER include/spdk/barrier.h 00:03:14.309 TEST_HEADER include/spdk/base64.h 00:03:14.309 TEST_HEADER include/spdk/bdev.h 00:03:14.309 TEST_HEADER include/spdk/bdev_module.h 00:03:14.309 TEST_HEADER include/spdk/bdev_zone.h 00:03:14.309 TEST_HEADER include/spdk/bit_array.h 00:03:14.309 TEST_HEADER include/spdk/bit_pool.h 00:03:14.309 TEST_HEADER include/spdk/blob.h 00:03:14.309 TEST_HEADER include/spdk/blob_bdev.h 00:03:14.309 TEST_HEADER include/spdk/blobfs.h 00:03:14.309 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:14.309 TEST_HEADER include/spdk/conf.h 00:03:14.309 TEST_HEADER include/spdk/config.h 00:03:14.309 TEST_HEADER include/spdk/cpuset.h 00:03:14.309 TEST_HEADER include/spdk/crc16.h 00:03:14.309 TEST_HEADER include/spdk/crc32.h 00:03:14.309 TEST_HEADER include/spdk/crc64.h 00:03:14.309 TEST_HEADER include/spdk/dif.h 00:03:14.309 TEST_HEADER include/spdk/dma.h 00:03:14.309 TEST_HEADER include/spdk/endian.h 00:03:14.309 TEST_HEADER include/spdk/env.h 00:03:14.309 TEST_HEADER include/spdk/env_dpdk.h 00:03:14.309 TEST_HEADER include/spdk/event.h 00:03:14.309 TEST_HEADER include/spdk/fd.h 00:03:14.309 TEST_HEADER include/spdk/fd_group.h 00:03:14.309 TEST_HEADER include/spdk/file.h 00:03:14.567 TEST_HEADER include/spdk/ftl.h 00:03:14.567 TEST_HEADER include/spdk/gpt_spec.h 00:03:14.567 TEST_HEADER include/spdk/hexlify.h 00:03:14.567 TEST_HEADER include/spdk/histogram_data.h 00:03:14.567 TEST_HEADER include/spdk/idxd.h 00:03:14.567 TEST_HEADER include/spdk/idxd_spec.h 00:03:14.567 TEST_HEADER include/spdk/init.h 00:03:14.567 TEST_HEADER include/spdk/ioat.h 00:03:14.567 TEST_HEADER include/spdk/ioat_spec.h 00:03:14.567 TEST_HEADER include/spdk/iscsi_spec.h 00:03:14.567 TEST_HEADER include/spdk/json.h 00:03:14.567 TEST_HEADER include/spdk/jsonrpc.h 00:03:14.567 TEST_HEADER include/spdk/keyring.h 00:03:14.567 TEST_HEADER include/spdk/keyring_module.h 00:03:14.567 TEST_HEADER include/spdk/likely.h 00:03:14.567 TEST_HEADER include/spdk/log.h 00:03:14.567 TEST_HEADER include/spdk/lvol.h 00:03:14.567 TEST_HEADER include/spdk/memory.h 00:03:14.567 TEST_HEADER include/spdk/mmio.h 00:03:14.567 TEST_HEADER include/spdk/nbd.h 00:03:14.567 TEST_HEADER include/spdk/net.h 
00:03:14.567 TEST_HEADER include/spdk/notify.h 00:03:14.567 TEST_HEADER include/spdk/nvme.h 00:03:14.567 TEST_HEADER include/spdk/nvme_intel.h 00:03:14.567 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:14.567 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:14.567 TEST_HEADER include/spdk/nvme_spec.h 00:03:14.567 TEST_HEADER include/spdk/nvme_zns.h 00:03:14.567 TEST_HEADER include/spdk/nvmf.h 00:03:14.567 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:14.567 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:14.567 TEST_HEADER include/spdk/nvmf_spec.h 00:03:14.567 TEST_HEADER include/spdk/nvmf_transport.h 00:03:14.567 TEST_HEADER include/spdk/opal.h 00:03:14.567 TEST_HEADER include/spdk/opal_spec.h 00:03:14.567 TEST_HEADER include/spdk/pci_ids.h 00:03:14.567 TEST_HEADER include/spdk/pipe.h 00:03:14.567 TEST_HEADER include/spdk/queue.h 00:03:14.567 TEST_HEADER include/spdk/reduce.h 00:03:14.567 TEST_HEADER include/spdk/rpc.h 00:03:14.567 TEST_HEADER include/spdk/scheduler.h 00:03:14.567 TEST_HEADER include/spdk/scsi.h 00:03:14.567 TEST_HEADER include/spdk/scsi_spec.h 00:03:14.567 TEST_HEADER include/spdk/sock.h 00:03:14.567 TEST_HEADER include/spdk/stdinc.h 00:03:14.567 TEST_HEADER include/spdk/string.h 00:03:14.567 TEST_HEADER include/spdk/thread.h 00:03:14.567 TEST_HEADER include/spdk/trace.h 00:03:14.567 TEST_HEADER include/spdk/trace_parser.h 00:03:14.567 TEST_HEADER include/spdk/tree.h 00:03:14.567 TEST_HEADER include/spdk/ublk.h 00:03:14.567 TEST_HEADER include/spdk/util.h 00:03:14.567 TEST_HEADER include/spdk/uuid.h 00:03:14.567 TEST_HEADER include/spdk/version.h 00:03:14.567 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:14.567 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:14.567 LINK stub 00:03:14.567 TEST_HEADER include/spdk/vhost.h 00:03:14.567 TEST_HEADER include/spdk/vmd.h 00:03:14.567 TEST_HEADER include/spdk/xor.h 00:03:14.567 TEST_HEADER include/spdk/zipf.h 00:03:14.567 CXX test/cpp_headers/accel.o 00:03:14.567 CXX test/cpp_headers/accel_module.o 00:03:14.825 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:14.825 CXX test/cpp_headers/assert.o 00:03:14.825 LINK interrupt_tgt 00:03:15.083 CXX test/cpp_headers/barrier.o 00:03:15.083 CXX test/cpp_headers/base64.o 00:03:15.342 CXX test/cpp_headers/bdev.o 00:03:15.342 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:15.342 CXX test/cpp_headers/bdev_module.o 00:03:15.601 CXX test/cpp_headers/bdev_zone.o 00:03:15.601 LINK spdk_lock 00:03:15.601 CXX test/cpp_headers/bit_array.o 00:03:15.860 CC test/env/mem_callbacks/mem_callbacks.o 00:03:15.860 CXX test/cpp_headers/bit_pool.o 00:03:15.860 CC test/rpc_client/rpc_client_test.o 00:03:15.860 CXX test/cpp_headers/blob.o 00:03:16.119 LINK rpc_client_test 00:03:16.119 CXX test/cpp_headers/blob_bdev.o 00:03:16.119 LINK mem_callbacks 00:03:16.119 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:16.119 CXX test/cpp_headers/blobfs.o 00:03:16.378 CXX test/cpp_headers/blobfs_bdev.o 00:03:16.378 LINK histogram_ut 00:03:16.378 CC test/env/vtophys/vtophys.o 00:03:16.378 CXX test/cpp_headers/conf.o 00:03:16.637 CC test/accel/dif/dif.o 00:03:16.637 LINK vtophys 00:03:16.637 CXX test/cpp_headers/config.o 00:03:16.637 CXX test/cpp_headers/cpuset.o 00:03:16.637 CC test/blobfs/mkfs/mkfs.o 00:03:16.637 CC test/unit/lib/log/log.c/log_ut.o 00:03:16.637 CXX test/cpp_headers/crc16.o 00:03:16.897 LINK mkfs 00:03:16.897 CXX test/cpp_headers/crc32.o 00:03:16.897 LINK dif 00:03:16.897 LINK iscsi_fuzz 00:03:16.897 LINK log_ut 00:03:16.897 CC examples/thread/thread/thread_ex.o 00:03:16.897 CC 
examples/sock/hello_world/hello_sock.o 00:03:17.156 CXX test/cpp_headers/crc64.o 00:03:17.156 CXX test/cpp_headers/dif.o 00:03:17.156 CXX test/cpp_headers/dma.o 00:03:17.156 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:17.156 LINK thread 00:03:17.156 LINK hello_sock 00:03:17.415 CC test/event/event_perf/event_perf.o 00:03:17.415 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:17.415 CXX test/cpp_headers/endian.o 00:03:17.415 LINK env_dpdk_post_init 00:03:17.415 LINK event_perf 00:03:17.415 CXX test/cpp_headers/env.o 00:03:17.674 CC app/spdk_lspci/spdk_lspci.o 00:03:17.674 CXX test/cpp_headers/env_dpdk.o 00:03:17.674 LINK spdk_lspci 00:03:17.674 CXX test/cpp_headers/event.o 00:03:17.931 LINK common_ut 00:03:17.931 CXX test/cpp_headers/fd.o 00:03:18.188 CXX test/cpp_headers/fd_group.o 00:03:18.188 CC app/spdk_nvme_perf/perf.o 00:03:18.188 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:18.188 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:18.188 CXX test/cpp_headers/file.o 00:03:18.188 CC test/event/reactor/reactor.o 00:03:18.188 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:18.447 LINK reactor 00:03:18.447 CC test/env/memory/memory_ut.o 00:03:18.447 CXX test/cpp_headers/ftl.o 00:03:18.447 LINK base64_ut 00:03:18.447 CXX test/cpp_headers/gpt_spec.o 00:03:18.705 LINK vhost_fuzz 00:03:18.705 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:18.705 CXX test/cpp_headers/hexlify.o 00:03:18.963 CXX test/cpp_headers/histogram_data.o 00:03:18.963 CC test/lvol/esnap/esnap.o 00:03:18.963 LINK spdk_nvme_perf 00:03:18.963 CXX test/cpp_headers/idxd.o 00:03:19.221 LINK bit_array_ut 00:03:19.221 CXX test/cpp_headers/idxd_spec.o 00:03:19.221 CC test/event/reactor_perf/reactor_perf.o 00:03:19.221 LINK memory_ut 00:03:19.481 CXX test/cpp_headers/init.o 00:03:19.481 LINK reactor_perf 00:03:19.481 CC app/spdk_nvme_identify/identify.o 00:03:19.481 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:19.481 CXX test/cpp_headers/ioat.o 00:03:19.481 CC test/env/pci/pci_ut.o 00:03:19.740 CXX test/cpp_headers/ioat_spec.o 00:03:19.740 LINK cpuset_ut 00:03:19.740 CXX test/cpp_headers/iscsi_spec.o 00:03:19.999 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:19.999 LINK pci_ut 00:03:19.999 CC examples/vmd/lsvmd/lsvmd.o 00:03:19.999 CXX test/cpp_headers/json.o 00:03:19.999 LINK crc16_ut 00:03:19.999 LINK lsvmd 00:03:20.258 CC test/event/app_repeat/app_repeat.o 00:03:20.258 CC test/event/scheduler/scheduler.o 00:03:20.258 CXX test/cpp_headers/jsonrpc.o 00:03:20.258 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:20.258 LINK app_repeat 00:03:20.258 CXX test/cpp_headers/keyring.o 00:03:20.258 LINK spdk_nvme_identify 00:03:20.517 LINK crc32_ieee_ut 00:03:20.517 LINK scheduler 00:03:20.517 CXX test/cpp_headers/keyring_module.o 00:03:20.517 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:20.517 CXX test/cpp_headers/likely.o 00:03:20.517 CC app/spdk_nvme_discover/discovery_aer.o 00:03:20.517 LINK crc32c_ut 00:03:20.517 CC app/spdk_top/spdk_top.o 00:03:20.831 CXX test/cpp_headers/log.o 00:03:20.831 CC app/vhost/vhost.o 00:03:20.831 LINK spdk_nvme_discover 00:03:20.831 CXX test/cpp_headers/lvol.o 00:03:20.831 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:20.831 LINK vhost 00:03:20.831 CXX test/cpp_headers/memory.o 00:03:21.089 LINK crc64_ut 00:03:21.089 CC examples/vmd/led/led.o 00:03:21.089 CXX test/cpp_headers/mmio.o 00:03:21.348 CXX test/cpp_headers/nbd.o 00:03:21.348 LINK led 00:03:21.348 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:21.348 CXX test/cpp_headers/net.o 00:03:21.348 CC 
test/unit/lib/util/file.c/file_ut.o 00:03:21.348 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:21.348 CXX test/cpp_headers/notify.o 00:03:21.607 LINK file_ut 00:03:21.607 LINK spdk_top 00:03:21.607 CXX test/cpp_headers/nvme.o 00:03:21.607 LINK iov_ut 00:03:21.607 CXX test/cpp_headers/nvme_intel.o 00:03:21.866 CC app/spdk_dd/spdk_dd.o 00:03:21.866 CXX test/cpp_headers/nvme_ocssd.o 00:03:21.866 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:22.124 CC examples/idxd/perf/perf.o 00:03:22.124 CXX test/cpp_headers/nvme_spec.o 00:03:22.124 CXX test/cpp_headers/nvme_zns.o 00:03:22.124 LINK spdk_dd 00:03:22.124 CC app/fio/nvme/fio_plugin.o 00:03:22.382 LINK dif_ut 00:03:22.641 CXX test/cpp_headers/nvmf.o 00:03:22.641 CC test/unit/lib/util/math.c/math_ut.o 00:03:22.641 LINK idxd_perf 00:03:22.641 CC test/unit/lib/util/net.c/net_ut.o 00:03:22.641 LINK math_ut 00:03:22.641 CXX test/cpp_headers/nvmf_cmd.o 00:03:22.641 LINK net_ut 00:03:22.900 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:22.900 CXX test/cpp_headers/nvmf_spec.o 00:03:22.900 LINK spdk_nvme 00:03:22.900 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:22.900 CC test/unit/lib/util/string.c/string_ut.o 00:03:23.159 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:23.159 CXX test/cpp_headers/nvmf_transport.o 00:03:23.159 CXX test/cpp_headers/opal.o 00:03:23.416 CC examples/accel/perf/accel_perf.o 00:03:23.416 LINK string_ut 00:03:23.416 CXX test/cpp_headers/opal_spec.o 00:03:23.674 LINK xor_ut 00:03:23.674 CXX test/cpp_headers/pci_ids.o 00:03:23.674 CXX test/cpp_headers/pipe.o 00:03:23.932 LINK pipe_ut 00:03:23.932 LINK accel_perf 00:03:23.932 CXX test/cpp_headers/queue.o 00:03:23.932 CC app/fio/bdev/fio_plugin.o 00:03:23.932 CXX test/cpp_headers/reduce.o 00:03:23.932 CC test/nvme/aer/aer.o 00:03:23.932 CC test/nvme/reset/reset.o 00:03:24.190 CC examples/blob/hello_world/hello_blob.o 00:03:24.190 CXX test/cpp_headers/rpc.o 00:03:24.190 LINK esnap 00:03:24.190 CXX test/cpp_headers/scheduler.o 00:03:24.190 LINK reset 00:03:24.190 LINK hello_blob 00:03:24.448 LINK aer 00:03:24.448 CXX test/cpp_headers/scsi.o 00:03:24.448 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:24.448 CXX test/cpp_headers/scsi_spec.o 00:03:24.448 LINK spdk_bdev 00:03:24.448 CC test/bdev/bdevio/bdevio.o 00:03:24.707 CXX test/cpp_headers/sock.o 00:03:24.707 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:24.707 CXX test/cpp_headers/stdinc.o 00:03:24.707 CXX test/cpp_headers/string.o 00:03:24.966 LINK bdevio 00:03:24.966 CXX test/cpp_headers/thread.o 00:03:24.966 CXX test/cpp_headers/trace.o 00:03:24.966 CC examples/blob/cli/blobcli.o 00:03:25.225 LINK dma_ut 00:03:25.225 CXX test/cpp_headers/trace_parser.o 00:03:25.225 LINK ioat_ut 00:03:25.225 CC examples/nvme/hello_world/hello_world.o 00:03:25.225 CXX test/cpp_headers/tree.o 00:03:25.225 CXX test/cpp_headers/ublk.o 00:03:25.225 CXX test/cpp_headers/util.o 00:03:25.483 CC test/nvme/sgl/sgl.o 00:03:25.483 CXX test/cpp_headers/uuid.o 00:03:25.483 LINK hello_world 00:03:25.483 LINK blobcli 00:03:25.483 CXX test/cpp_headers/version.o 00:03:25.483 CXX test/cpp_headers/vfio_user_pci.o 00:03:25.483 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:25.742 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:25.742 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:25.742 LINK sgl 00:03:25.742 CXX test/cpp_headers/vfio_user_spec.o 00:03:25.742 CXX test/cpp_headers/vhost.o 00:03:26.000 CXX test/cpp_headers/vmd.o 00:03:26.000 LINK pci_event_ut 00:03:26.258 CXX test/cpp_headers/xor.o 00:03:26.258 LINK idxd_user_ut 00:03:26.258 CXX test/cpp_headers/zipf.o 
00:03:26.517 CC examples/nvme/reconnect/reconnect.o 00:03:26.517 LINK idxd_ut 00:03:26.517 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:26.517 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:26.517 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:26.775 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:26.775 CC examples/nvme/arbitration/arbitration.o 00:03:26.775 LINK reconnect 00:03:27.034 LINK nvme_manage 00:03:27.034 CC test/nvme/e2edp/nvme_dp.o 00:03:27.034 LINK json_util_ut 00:03:27.034 LINK arbitration 00:03:27.292 LINK nvme_dp 00:03:27.292 CC test/nvme/overhead/overhead.o 00:03:27.292 LINK json_write_ut 00:03:27.550 LINK overhead 00:03:27.809 CC test/nvme/err_injection/err_injection.o 00:03:27.809 CC examples/nvme/hotplug/hotplug.o 00:03:28.067 LINK hotplug 00:03:28.067 LINK err_injection 00:03:28.067 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:28.325 LINK cmb_copy 00:03:28.325 CC examples/nvme/abort/abort.o 00:03:28.325 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:28.584 CC test/nvme/startup/startup.o 00:03:28.584 CC test/nvme/reserve/reserve.o 00:03:28.584 CC test/nvme/simple_copy/simple_copy.o 00:03:28.584 LINK json_parse_ut 00:03:28.842 LINK abort 00:03:28.842 LINK pmr_persistence 00:03:28.842 LINK startup 00:03:28.842 LINK reserve 00:03:28.842 LINK simple_copy 00:03:29.100 CC test/nvme/connect_stress/connect_stress.o 00:03:29.100 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:29.101 CC test/nvme/boot_partition/boot_partition.o 00:03:29.359 CC test/nvme/compliance/nvme_compliance.o 00:03:29.359 LINK boot_partition 00:03:29.359 LINK connect_stress 00:03:29.359 LINK jsonrpc_server_ut 00:03:29.618 LINK nvme_compliance 00:03:29.618 CC examples/bdev/hello_world/hello_bdev.o 00:03:29.877 CC examples/bdev/bdevperf/bdevperf.o 00:03:29.877 LINK hello_bdev 00:03:29.877 CC test/nvme/fused_ordering/fused_ordering.o 00:03:29.877 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:30.136 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:30.136 LINK fused_ordering 00:03:30.136 LINK doorbell_aers 00:03:30.136 CC test/nvme/cuse/cuse.o 00:03:30.394 CC test/nvme/fdp/fdp.o 00:03:30.652 LINK bdevperf 00:03:30.653 LINK fdp 00:03:30.911 LINK rpc_ut 00:03:31.170 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:31.170 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:31.429 LINK cuse 00:03:31.429 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:31.429 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:03:31.429 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:31.429 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:31.995 LINK keyring_ut 00:03:31.995 LINK notify_ut 00:03:32.563 LINK iobuf_ut 00:03:32.563 LINK posix_ut 00:03:32.822 LINK sock_ut 00:03:33.389 LINK thread_ut 00:03:33.389 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:33.389 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:33.389 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:33.389 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:33.389 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:33.389 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:33.389 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:33.389 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:33.648 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:33.648 CC examples/nvmf/nvmf/nvmf.o 00:03:33.906 LINK nvmf 00:03:34.165 LINK nvme_ns_ut 00:03:34.424 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:34.424 CC 
test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:34.424 LINK nvme_poll_group_ut 00:03:34.683 LINK nvme_ctrlr_cmd_ut 00:03:34.683 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:34.683 LINK nvme_ut 00:03:34.683 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:34.683 LINK nvme_ns_ocssd_cmd_ut 00:03:34.683 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:34.942 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:35.200 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:35.200 LINK nvme_ns_cmd_ut 00:03:35.200 LINK nvme_quirks_ut 00:03:35.200 LINK nvme_pcie_ut 00:03:35.459 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:35.459 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:35.717 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:35.717 LINK nvme_qpair_ut 00:03:35.717 LINK blob_bdev_ut 00:03:35.981 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:35.981 LINK nvme_transport_ut 00:03:35.981 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:36.249 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:36.509 LINK nvme_io_msg_ut 00:03:36.509 LINK nvme_ctrlr_ut 00:03:36.767 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:36.767 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:03:37.026 LINK nvme_fabric_ut 00:03:37.026 LINK nvme_opal_ut 00:03:37.026 LINK nvme_pcie_common_ut 00:03:37.026 LINK subsystem_ut 00:03:37.026 LINK accel_ut 00:03:37.026 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:03:37.284 LINK nvme_tcp_ut 00:03:37.543 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:37.543 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:37.543 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:37.543 LINK rpc_ut 00:03:37.543 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:37.802 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:37.802 LINK scsi_nvme_ut 00:03:37.802 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:38.060 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:38.060 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:38.060 LINK gpt_ut 00:03:38.320 LINK bdev_zone_ut 00:03:38.320 LINK nvme_cuse_ut 00:03:38.579 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:38.579 CC test/unit/lib/event/app.c/app_ut.o 00:03:38.579 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:38.838 LINK nvme_rdma_ut 00:03:39.097 LINK vbdev_lvol_ut 00:03:39.355 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:39.355 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:39.355 LINK app_ut 00:03:39.355 LINK vbdev_zone_block_ut 00:03:39.612 LINK reactor_ut 00:03:39.870 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:39.870 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:39.870 LINK bdev_raid_ut 00:03:39.870 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 00:03:40.129 LINK bdev_raid_sb_ut 00:03:40.387 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:03:40.387 LINK concat_ut 00:03:40.387 LINK raid1_ut 00:03:40.645 LINK raid0_ut 00:03:40.903 LINK part_ut 00:03:41.470 LINK raid5f_ut 00:03:41.470 LINK bdev_ut 00:03:42.405 LINK blob_ut 00:03:42.664 LINK bdev_ut 00:03:42.924 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:42.924 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:42.924 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:42.924 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:42.924 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:43.183 LINK blobfs_bdev_ut 00:03:43.183 LINK tree_ut 00:03:43.749 LINK bdev_nvme_ut 00:03:44.007 LINK 
blobfs_sync_ut 00:03:44.265 LINK blobfs_async_ut 00:03:44.265 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:03:44.265 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:03:44.265 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:44.265 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:44.265 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:03:44.265 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:44.265 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:44.523 CC test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut.o 00:03:44.523 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:03:44.781 LINK dev_ut 00:03:44.781 LINK lvol_ut 00:03:45.040 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:45.299 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:45.299 LINK ftl_bitmap_ut 00:03:45.299 LINK ftl_l2p_ut 00:03:45.299 LINK ftl_io_ut 00:03:45.557 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:03:45.557 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:45.557 LINK scsi_ut 00:03:45.816 LINK ftl_p2l_ut 00:03:45.816 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:45.816 LINK lun_ut 00:03:45.816 LINK ftl_band_ut 00:03:46.074 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:46.074 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:03:46.074 LINK ftl_mempool_ut 00:03:46.361 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:03:46.362 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:03:46.362 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:46.620 LINK subsystem_ut 00:03:46.620 LINK ftl_mngt_ut 00:03:46.880 LINK ctrlr_bdev_ut 00:03:46.880 LINK scsi_pr_ut 00:03:46.880 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:46.880 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:03:47.139 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:47.139 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:47.139 LINK scsi_bdev_ut 00:03:47.399 LINK ctrlr_ut 00:03:47.399 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:47.658 LINK ftl_sb_ut 00:03:47.658 LINK ctrlr_discovery_ut 00:03:47.658 LINK ftl_layout_upgrade_ut 00:03:47.658 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:47.917 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:47.917 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:47.917 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:03:48.176 LINK init_grp_ut 00:03:48.176 LINK nvmf_ut 00:03:48.436 LINK param_ut 00:03:48.436 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:48.436 LINK tcp_ut 00:03:48.436 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:48.695 LINK auth_ut 00:03:48.955 LINK conn_ut 00:03:49.521 LINK portal_grp_ut 00:03:49.521 LINK tgt_node_ut 00:03:50.088 LINK iscsi_ut 00:03:50.088 LINK transport_ut 00:03:50.347 LINK vhost_ut 00:03:50.606 LINK rdma_ut 00:03:50.864 00:03:50.864 real 2m3.332s 00:03:50.864 user 9m26.110s 00:03:50.864 sys 2m31.199s 00:03:50.864 18:30:51 unittest_build -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:50.864 18:30:51 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:03:50.864 ************************************ 00:03:50.864 END TEST unittest_build 00:03:50.864 ************************************ 00:03:50.864 18:30:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:50.864 18:30:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:50.864 18:30:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:50.864 18:30:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.864 18:30:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:50.864 18:30:51 -- 
pm/common@44 -- $ pid=2188 00:03:50.864 18:30:51 -- pm/common@50 -- $ kill -TERM 2188 00:03:50.864 18:30:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.864 18:30:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:50.864 18:30:51 -- pm/common@44 -- $ pid=2190 00:03:50.864 18:30:51 -- pm/common@50 -- $ kill -TERM 2190 00:03:51.123 18:30:51 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:51.123 18:30:51 -- nvmf/common.sh@7 -- # uname -s 00:03:51.123 18:30:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:51.123 18:30:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:51.123 18:30:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:51.123 18:30:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:51.123 18:30:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:51.123 18:30:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:51.123 18:30:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:51.123 18:30:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:51.123 18:30:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:51.123 18:30:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:51.123 18:30:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4da1e20b-eecc-4a6c-8d29-ab223dcea39a 00:03:51.124 18:30:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=4da1e20b-eecc-4a6c-8d29-ab223dcea39a 00:03:51.124 18:30:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:51.124 18:30:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:51.124 18:30:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:51.124 18:30:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:51.124 18:30:51 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:51.124 18:30:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:51.124 18:30:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:51.124 18:30:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:51.124 18:30:51 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:51.124 18:30:51 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:51.124 18:30:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:51.124 18:30:51 -- paths/export.sh@5 -- # export PATH 00:03:51.124 18:30:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:51.124 18:30:51 -- nvmf/common.sh@47 -- # : 0 00:03:51.124 18:30:51 -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:51.124 18:30:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:51.124 18:30:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:51.124 18:30:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:51.124 18:30:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:51.124 18:30:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:51.124 18:30:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:51.124 18:30:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:51.124 18:30:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:51.124 18:30:51 -- spdk/autotest.sh@32 -- # uname -s 00:03:51.124 18:30:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:51.124 18:30:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:03:51.124 18:30:51 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:51.124 18:30:51 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:51.124 18:30:51 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:51.124 18:30:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:51.124 18:30:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:51.124 18:30:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:03:51.124 18:30:51 -- spdk/autotest.sh@48 -- # udevadm_pid=100143 00:03:51.124 18:30:51 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:51.124 18:30:51 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:03:51.124 18:30:51 -- pm/common@17 -- # local monitor 00:03:51.124 18:30:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.124 18:30:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.124 18:30:51 -- pm/common@25 -- # sleep 1 00:03:51.124 18:30:51 -- pm/common@21 -- # date +%s 00:03:51.124 18:30:51 -- pm/common@21 -- # date +%s 00:03:51.124 18:30:51 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721932251 00:03:51.124 18:30:51 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721932251 00:03:51.124 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721932251_collect-vmstat.pm.log 00:03:51.124 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721932251_collect-cpu-load.pm.log 00:03:52.061 18:30:52 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:52.061 18:30:52 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:52.061 18:30:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:52.061 18:30:52 -- common/autotest_common.sh@10 -- # set +x 00:03:52.061 18:30:52 -- spdk/autotest.sh@59 -- # create_test_list 00:03:52.061 18:30:52 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:52.061 18:30:52 -- common/autotest_common.sh@10 -- # set +x 00:03:52.061 18:30:52 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:52.320 18:30:52 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:52.320 18:30:52 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:52.320 18:30:52 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:52.320 
18:30:52 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:52.320 18:30:52 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:52.320 18:30:52 -- common/autotest_common.sh@1455 -- # uname 00:03:52.320 18:30:52 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:52.320 18:30:52 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:52.320 18:30:52 -- common/autotest_common.sh@1475 -- # uname 00:03:52.320 18:30:52 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:52.320 18:30:52 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:52.320 18:30:52 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:52.320 18:30:52 -- spdk/autotest.sh@72 -- # hash lcov 00:03:52.320 18:30:52 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:52.320 18:30:52 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:52.320 --rc lcov_branch_coverage=1 00:03:52.320 --rc lcov_function_coverage=1 00:03:52.320 --rc genhtml_branch_coverage=1 00:03:52.320 --rc genhtml_function_coverage=1 00:03:52.320 --rc genhtml_legend=1 00:03:52.320 --rc geninfo_all_blocks=1 00:03:52.320 ' 00:03:52.320 18:30:52 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:52.320 --rc lcov_branch_coverage=1 00:03:52.320 --rc lcov_function_coverage=1 00:03:52.320 --rc genhtml_branch_coverage=1 00:03:52.320 --rc genhtml_function_coverage=1 00:03:52.320 --rc genhtml_legend=1 00:03:52.320 --rc geninfo_all_blocks=1 00:03:52.320 ' 00:03:52.320 18:30:52 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:52.320 --rc lcov_branch_coverage=1 00:03:52.320 --rc lcov_function_coverage=1 00:03:52.320 --rc genhtml_branch_coverage=1 00:03:52.320 --rc genhtml_function_coverage=1 00:03:52.320 --rc genhtml_legend=1 00:03:52.320 --rc geninfo_all_blocks=1 00:03:52.320 --no-external' 00:03:52.320 18:30:52 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:52.320 --rc lcov_branch_coverage=1 00:03:52.320 --rc lcov_function_coverage=1 00:03:52.320 --rc genhtml_branch_coverage=1 00:03:52.320 --rc genhtml_function_coverage=1 00:03:52.321 --rc genhtml_legend=1 00:03:52.321 --rc geninfo_all_blocks=1 00:03:52.321 --no-external' 00:03:52.321 18:30:52 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:52.321 lcov: LCOV version 1.15 00:03:52.321 18:30:52 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:57.594 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:57.594 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any 
data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:36.316 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:36.316 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:36.316 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:36.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:36.317 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:36.317 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:36.317 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:36.317 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:36.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:37.254 18:31:37 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:37.254 18:31:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:37.254 18:31:37 -- common/autotest_common.sh@10 -- # set +x 00:04:37.254 18:31:37 -- spdk/autotest.sh@91 -- # rm -f 00:04:37.254 18:31:37 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:37.821 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:37.821 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:37.821 18:31:38 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:37.821 18:31:38 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:37.821 18:31:38 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:37.821 18:31:38 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:37.821 18:31:38 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:37.821 18:31:38 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:37.821 18:31:38 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:37.821 18:31:38 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:37.821 18:31:38 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:37.821 18:31:38 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:37.821 18:31:38 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:37.821 18:31:38 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:37.821 18:31:38 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:37.821 18:31:38 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:37.821 18:31:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:38.080 No valid GPT data, bailing 00:04:38.080 18:31:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:38.080 18:31:38 -- scripts/common.sh@391 -- # pt= 00:04:38.080 18:31:38 -- scripts/common.sh@392 -- # return 1 00:04:38.080 18:31:38 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:38.080 1+0 records in 00:04:38.080 1+0 records out 00:04:38.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00613276 s, 171 MB/s 00:04:38.080 18:31:38 -- spdk/autotest.sh@118 -- # sync 00:04:38.080 18:31:38 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:38.080 18:31:38 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:38.080 18:31:38 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:39.986 18:31:40 -- spdk/autotest.sh@124 -- # uname -s 00:04:39.986 18:31:40 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:39.986 
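[editor's note] The pre-cleanup trace above probes each NVMe namespace, skips zoned devices, checks for a partition table with spdk-gpt.py/blkid, and zero-fills the first MiB when none is found. A minimal standalone sketch of that pattern follows; the device glob, helper structure, and the 1 MiB wipe size are illustrative assumptions, not the SPDK scripts themselves.

#!/usr/bin/env bash
# Hypothetical sketch of the pre-cleanup device check traced above:
# skip zoned namespaces, then wipe the first MiB of any NVMe block device
# that carries no recognizable partition table.
set -euo pipefail
shopt -s nullglob

for dev in /dev/nvme*n*; do
    name=$(basename "$dev")
    # Zoned namespaces are left alone, mirroring the get_zoned_devs check in the trace.
    if [[ -e /sys/block/$name/queue/zoned ]] && \
       [[ $(cat "/sys/block/$name/queue/zoned") != none ]]; then
        echo "skipping zoned device $dev"
        continue
    fi
    # If blkid reports no partition-table type, treat the disk as reusable and
    # zero the first MiB so stale metadata does not confuse later tests.
    if [[ -z $(blkid -s PTTYPE -o value "$dev" || true) ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done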
18:31:40 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:39.986 18:31:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.986 18:31:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.986 18:31:40 -- common/autotest_common.sh@10 -- # set +x 00:04:39.986 ************************************ 00:04:39.986 START TEST setup.sh 00:04:39.986 ************************************ 00:04:39.986 18:31:40 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:39.986 * Looking for test storage... 00:04:39.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:39.986 18:31:40 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:39.986 18:31:40 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:39.986 18:31:40 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:39.986 18:31:40 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.986 18:31:40 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.986 18:31:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:39.986 ************************************ 00:04:39.986 START TEST acl 00:04:39.986 ************************************ 00:04:39.986 18:31:40 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:39.986 * Looking for test storage... 00:04:39.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:39.986 18:31:40 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:39.986 18:31:40 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:39.986 18:31:40 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:39.986 18:31:40 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:39.986 18:31:40 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.986 18:31:40 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:39.986 18:31:40 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:39.986 18:31:40 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.986 18:31:40 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.986 18:31:40 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:39.986 18:31:40 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:39.986 18:31:40 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:39.986 18:31:40 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:39.986 18:31:40 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:39.986 18:31:40 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.986 18:31:40 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.553 18:31:40 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:40.553 18:31:40 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:40.553 18:31:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.553 18:31:40 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:40.553 18:31:40 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.553 18:31:40 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:41.121 18:31:41 
setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:41.121 18:31:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:41.121 18:31:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.121 Hugepages 00:04:41.121 node hugesize free / total 00:04:41.121 18:31:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:41.121 18:31:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:41.121 18:31:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.121 00:04:41.121 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:41.121 18:31:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:41.121 18:31:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:41.121 18:31:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.121 18:31:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:41.121 18:31:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:41.121 18:31:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:41.121 18:31:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.380 18:31:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:41.380 18:31:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:41.380 18:31:41 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:41.380 18:31:41 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:41.380 18:31:41 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:41.380 18:31:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.380 18:31:41 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:41.380 18:31:41 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:41.380 18:31:41 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.380 18:31:41 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.380 18:31:41 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:41.380 ************************************ 00:04:41.380 START TEST denied 00:04:41.380 ************************************ 00:04:41.380 18:31:41 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:04:41.380 18:31:41 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:41.380 18:31:41 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:41.380 18:31:41 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:41.380 18:31:41 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.381 18:31:41 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:42.759 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:42.759 18:31:43 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:42.759 18:31:43 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:42.759 18:31:43 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:42.759 18:31:43 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:42.759 18:31:43 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:42.759 18:31:43 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:42.759 18:31:43 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == 
\n\v\m\e ]] 00:04:42.759 18:31:43 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:42.759 18:31:43 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:42.759 18:31:43 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:43.325 00:04:43.325 real 0m1.957s 00:04:43.325 user 0m0.500s 00:04:43.325 sys 0m1.530s 00:04:43.325 18:31:43 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.325 18:31:43 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:43.325 ************************************ 00:04:43.325 END TEST denied 00:04:43.325 ************************************ 00:04:43.325 18:31:43 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:43.325 18:31:43 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.325 18:31:43 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.325 18:31:43 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:43.325 ************************************ 00:04:43.325 START TEST allowed 00:04:43.325 ************************************ 00:04:43.325 18:31:43 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:04:43.325 18:31:43 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:43.325 18:31:43 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:43.325 18:31:43 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:43.325 18:31:43 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.325 18:31:43 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:45.858 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.858 18:31:46 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:45.858 18:31:46 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:45.858 18:31:46 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:45.858 18:31:46 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.858 18:31:46 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:46.428 00:04:46.428 real 0m3.051s 00:04:46.428 user 0m0.491s 00:04:46.428 sys 0m2.588s 00:04:46.428 18:31:46 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.428 18:31:46 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:46.428 ************************************ 00:04:46.428 END TEST allowed 00:04:46.428 ************************************ 00:04:46.428 00:04:46.428 real 0m6.633s 00:04:46.428 user 0m1.670s 00:04:46.428 sys 0m5.159s 00:04:46.428 18:31:46 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.428 18:31:46 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:46.428 ************************************ 00:04:46.428 END TEST acl 00:04:46.428 ************************************ 00:04:46.428 18:31:46 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:46.428 18:31:46 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.428 18:31:46 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.428 18:31:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:46.428 ************************************ 00:04:46.428 START TEST hugepages 00:04:46.428 
************************************ 00:04:46.428 18:31:46 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:46.689 * Looking for test storage... 00:04:46.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 2611528 kB' 'MemAvailable: 7400412 kB' 'Buffers: 36068 kB' 'Cached: 4880604 kB' 'SwapCached: 0 kB' 'Active: 1034388 kB' 'Inactive: 4000960 kB' 'Active(anon): 1036 kB' 'Inactive(anon): 129304 kB' 'Active(file): 1033352 kB' 'Inactive(file): 3871656 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 316 kB' 'Writeback: 0 kB' 'AnonPages: 148116 kB' 'Mapped: 68360 kB' 'Shmem: 2600 kB' 'KReclaimable: 205180 kB' 'Slab: 271624 kB' 'SReclaimable: 205180 kB' 'SUnreclaim: 66444 kB' 'KernelStack: 4468 kB' 'PageTables: 3512 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4024336 kB' 'Committed_AS: 483784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.689 18:31:47 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.689 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.690 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.691 18:31:47 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.691 18:31:47 setup.sh.hugepages -- 
setup/hugepages.sh@41 -- # echo 0 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:46.691 18:31:47 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:46.691 18:31:47 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.691 18:31:47 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.691 18:31:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:46.691 ************************************ 00:04:46.691 START TEST default_setup 00:04:46.691 ************************************ 00:04:46.691 18:31:47 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:04:46.691 18:31:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:46.691 18:31:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:46.691 18:31:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:46.691 18:31:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:46.691 18:31:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:46.691 18:31:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:46.691 18:31:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:46.691 18:31:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:46.691 18:31:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:46.691 18:31:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:46.691 18:31:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:46.691 18:31:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:46.691 18:31:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:46.691 18:31:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:46.691 18:31:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:46.691 18:31:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:46.691 18:31:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:46.691 18:31:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:46.691 18:31:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:46.691 18:31:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:46.691 18:31:47 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.691 18:31:47 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:47.263 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:47.263 0000:00:10.0 (1b36 0010): nvme -> 
uio_pci_generic 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4695160 kB' 'MemAvailable: 9484092 kB' 'Buffers: 36076 kB' 'Cached: 4880672 kB' 'SwapCached: 0 kB' 'Active: 1034452 kB' 'Inactive: 4016240 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 144552 kB' 'Active(file): 1033400 kB' 'Inactive(file): 3871688 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 163208 kB' 'Mapped: 68136 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271248 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 66100 kB' 'KernelStack: 4336 kB' 'PageTables: 3396 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19564 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.648 
18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.648 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.649 
18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4694908 kB' 'MemAvailable: 9483840 kB' 'Buffers: 36076 kB' 'Cached: 4880672 kB' 'SwapCached: 0 kB' 'Active: 1034456 kB' 'Inactive: 4016456 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 144768 kB' 'Active(file): 1033400 kB' 'Inactive(file): 3871688 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 708 kB' 'Writeback: 0 kB' 'AnonPages: 163440 kB' 'Mapped: 68096 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271248 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 66100 kB' 'KernelStack: 4320 kB' 'PageTables: 3352 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19564 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.649 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.650 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
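The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" entries here are bash xtrace of get_meminfo in setup/common.sh scanning /proc/meminfo field by field until it reaches the requested key, then echoing the bare value. A minimal sketch of that pattern (illustrative only; the real helper also checks for a per-node /sys/devices/system/node/node<N>/meminfo, which this sketch omits, and the function name below is hypothetical):

  get_meminfo_sketch() {                     # hypothetical name; the traced function is get_meminfo
      local want=$1 var val _
      while IFS=': ' read -r var val _; do   # "Hugepagesize:   2048 kB" -> var=Hugepagesize, val=2048, _=kB
          [[ $var == "$want" ]] || continue  # skip every non-matching field, as in the trace above
          echo "$val"                        # e.g. 2048 for Hugepagesize earlier in this log
          return 0
      done < /proc/meminfo
      return 1
  }

The backslash-escaped patterns in the trace ("\H\u\g\e...") are just how xtrace renders the quoted right-hand side of that comparison.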
00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4695156 kB' 'MemAvailable: 9484092 kB' 'Buffers: 36076 kB' 'Cached: 4880676 kB' 'SwapCached: 0 kB' 'Active: 1034452 kB' 'Inactive: 4016084 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 144396 kB' 'Active(file): 1033404 kB' 'Inactive(file): 3871688 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 708 kB' 'Writeback: 0 kB' 'AnonPages: 163040 kB' 'Mapped: 68076 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271272 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 66124 kB' 'KernelStack: 4304 kB' 'PageTables: 3288 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val 
_ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.651 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.652 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.653 
18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
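For context, the long run of "continue" entries above is setup/common.sh's get_meminfo helper doing a linear scan of /proc/meminfo (or a node-specific meminfo) and discarding every field until it reaches the one it was asked for, HugePages_Rsvd in this pass. A compact sketch of that loop, reconstructed from the trace rather than copied from the SPDK source (the body below is illustrative), looks roughly like this:

    # Sketch of the scan shown in the trace above; names mirror the script,
    # the implementation here is a reconstruction, not the SPDK code.
    get_meminfo() {
        # usage: get_meminfo <field> [node]   e.g. get_meminfo HugePages_Rsvd 0
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        # per-node meminfo lines carry a "Node N " prefix; strip it so the
        # field name always lands in $var
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching fields just "continue"
            echo "$val"                        # value in kB, or a bare page count
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

Each "continue" entry in the log corresponds to one non-matching meminfo field, which is why a single get_meminfo call produces several dozen trace lines.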
00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:48.653 nr_hugepages=1024 00:04:48.653 resv_hugepages=0 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:48.653 surplus_hugepages=0 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:48.653 anon_hugepages=0 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4694400 kB' 'MemAvailable: 9483340 kB' 'Buffers: 36076 kB' 'Cached: 4880680 kB' 'SwapCached: 0 kB' 'Active: 1034456 kB' 'Inactive: 4016460 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 144772 kB' 'Active(file): 1033408 kB' 'Inactive(file): 3871688 kB' 'Unevictable: 29168 kB' 
'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 756 kB' 'Writeback: 0 kB' 'AnonPages: 163440 kB' 'Mapped: 68076 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271304 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 66156 kB' 'KernelStack: 4368 kB' 'PageTables: 3480 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.653 
18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.653 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.654 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.655 18:31:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4694400 kB' 'MemUsed: 7548576 kB' 'SwapCached: 0 kB' 'Active: 1034456 kB' 'Inactive: 4015900 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 144212 kB' 'Active(file): 1033408 kB' 'Inactive(file): 3871688 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 756 kB' 'Writeback: 0 kB' 'FilePages: 4916756 kB' 'Mapped: 68076 kB' 'AnonPages: 
162880 kB' 'Shmem: 2596 kB' 'KernelStack: 4420 kB' 'PageTables: 3432 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 205148 kB' 'Slab: 271304 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 66156 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.655 18:31:48 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val 
_ 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.655 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.656 18:31:48 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:48.656 node0=1024 expecting 1024 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:48.656 00:04:48.656 real 0m1.755s 00:04:48.656 user 0m0.388s 00:04:48.656 sys 0m1.383s 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.656 18:31:48 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:48.656 ************************************ 00:04:48.656 END TEST default_setup 00:04:48.656 ************************************ 00:04:48.656 18:31:48 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:48.656 18:31:48 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:48.656 18:31:48 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.656 18:31:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:48.656 ************************************ 00:04:48.656 START TEST per_node_1G_alloc 00:04:48.656 ************************************ 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # 
get_test_nr_hugepages_per_node 0 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.656 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:48.915 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:48.915 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5739052 kB' 'MemAvailable: 10527988 kB' 'Buffers: 36076 kB' 'Cached: 4880676 kB' 'SwapCached: 0 kB' 'Active: 1034472 kB' 'Inactive: 4016148 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 144468 kB' 'Active(file): 1033412 kB' 'Inactive(file): 3871680 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 163140 kB' 'Mapped: 68364 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271256 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 66108 kB' 'KernelStack: 4416 kB' 'PageTables: 3588 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 498620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.487 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.487 18:31:49 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
(The xtrace then repeats the same three entries -- a '# continue' at setup/common.sh@32 followed by "# IFS=': '" and '# read -r var val _' at setup/common.sh@31 -- for this check and for every remaining /proc/meminfo key: SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted, none of which matches the requested key.)
00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
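For readers following the trace, the loop that produces all of these pattern checks is the get_meminfo helper in setup/common.sh. The sketch below is a reconstruction from the xtrace entries above (the @16-@33 line references), not the verbatim SPDK source; the helper and variable names simply mirror what the trace prints, and the extglob strip handles the "Node N " prefix that per-node meminfo files carry.

    shopt -s extglob   # needed for the +([0-9]) pattern used below

    get_meminfo() {
        local get=$1        # key to look up, e.g. AnonHugePages or HugePages_Surp
        local node=${2:-}   # optional NUMA node id; empty means the system-wide file
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Per-node queries read the node's own meminfo file when it exists.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        # Walk each "Key: value kB" line: 'continue' on a mismatch, echo the value
        # and return once the requested key is found -- the loop the trace above
        # repeats for every key in /proc/meminfo.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

On this host the lookup prints 0 for AnonHugePages, which is the value hugepages.sh@97 stores in anon in the next entry.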
00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5739308 kB' 'MemAvailable: 10528244 kB' 'Buffers: 36076 kB' 'Cached: 4880676 kB' 'SwapCached: 0 kB' 'Active: 1034472 kB' 'Inactive: 4016072 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 144392 kB' 'Active(file): 1033412 kB' 'Inactive(file): 3871680 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 163064 kB' 'Mapped: 68144 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271256 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 66108 kB' 'KernelStack: 4400 kB' 'PageTables: 3548 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 498620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:49.488 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
(The same continue / IFS / read sequence then repeats for every key from MemAvailable through HugePages_Total -- MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped and HugePages_Total -- since none of them is HugePages_Surp.)
00:04:49.490
18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.490 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.490 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.490 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.490 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.490 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.490 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.490 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.490 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.490 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.490 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:49.490 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:49.490 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:49.490 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:49.490 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:49.490 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:49.490 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.490 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.490 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.490 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.490 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.490 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.490 18:31:49 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5739572 kB' 'MemAvailable: 10528508 kB' 'Buffers: 36076 kB' 'Cached: 4880676 kB' 'SwapCached: 0 kB' 'Active: 1034472 kB' 'Inactive: 4016332 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 144652 kB' 'Active(file): 1033412 kB' 'Inactive(file): 3871680 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 163324 kB' 'Mapped: 68144 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271256 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 66108 kB' 'KernelStack: 4400 kB' 'PageTables: 3548 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 498620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 147308 kB' 
'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB'
(As before, the key scan -- the setup/common.sh@32 pattern check and '# continue' followed by "# IFS=': '" and '# read -r var val _' at setup/common.sh@31 -- runs over MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped and FileHugePages, none of which matches HugePages_Rsvd.)
00:04:49.492 18:31:50
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:49.492 nr_hugepages=512 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:49.492 resv_hugepages=0 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:49.492 surplus_hugepages=0 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:49.492 anon_hugepages=0 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5739560 kB' 'MemAvailable: 10528496 kB' 'Buffers: 36076 kB' 'Cached: 4880676 kB' 'SwapCached: 0 kB' 'Active: 1034472 kB' 'Inactive: 4015980 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 144300 kB' 'Active(file): 1033412 kB' 'Inactive(file): 3871680 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 163180 kB' 'Mapped: 68104 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271256 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 66108 kB' 'KernelStack: 4340 kB' 'PageTables: 3492 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 498620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19596 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.492 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.493 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.493 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.493 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
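The accounting this test case is actually asserting shows up two entries back, at setup/hugepages.sh@107 and @109: the requested 512-page pool must equal what the kernel reports, with nothing left in the surplus or reserved buckets. A hedged sketch of that verification, reusing the get_meminfo sketch above, might look like the following (verify_hugepage_accounting is an illustrative name, not a function in the SPDK tree; the echoed variable names mirror the nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages lines in the log).

    verify_hugepage_accounting() {
        local expected=$1    # requested pool size; 512 pages of 2048 kB in this run
        local anon surp resv total

        anon=$(get_meminfo AnonHugePages)     # transparent hugepages in use
        surp=$(get_meminfo HugePages_Surp)    # surplus pages beyond the static pool
        resv=$(get_meminfo HugePages_Rsvd)    # reserved but not yet faulted in
        total=$(get_meminfo HugePages_Total)

        echo "nr_hugepages=$total"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"

        # Mirrors the two arithmetic checks the trace evaluates: the pool must be
        # exactly the requested size, with no surplus or reserved pages outstanding.
        (( expected == total + surp + resv )) || return 1
        (( expected == total ))
    }

In this log both checks pass (surp=0, resv=0, total=512), so the trace immediately moves on to the next get_meminfo call for HugePages_Total.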
00:04:49.493 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.493 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
(The scan for HugePages_Total continues in the same pattern over SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable and Bounce before reaching WritebackTmp.)
00:04:49.493 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.493 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.493 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.493 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.493 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.493 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.493 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.493 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.493 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.493 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.493 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.493 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.493 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.493 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Surp 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.494 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5739560 kB' 'MemUsed: 6503416 kB' 'SwapCached: 0 kB' 'Active: 1034472 kB' 'Inactive: 4016240 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 144560 kB' 'Active(file): 1033412 kB' 'Inactive(file): 3871680 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 4916752 kB' 'Mapped: 68104 kB' 'AnonPages: 162920 kB' 'Shmem: 2596 kB' 'KernelStack: 4408 kB' 'PageTables: 3492 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 205148 kB' 'Slab: 271256 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 66108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.754 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:49.755 node0=512 expecting 512 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:49.755 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:49.755 00:04:49.755 real 0m1.056s 00:04:49.755 user 0m0.278s 00:04:49.755 sys 0m0.831s 00:04:49.755 
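The long run of field-by-field comparisons above is setup/common.sh's get_meminfo helper scanning /sys/devices/system/node/node0/meminfo for one key at a time (HugePages_Total first, then HugePages_Surp) before the test checks that node0 holds the expected 512 huge pages. A minimal stand-alone sketch of that lookup, reusing the variable names visible in the trace but written as a hypothetical re-implementation rather than the verbatim SPDK helper, is:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) prefix-strip pattern below

    # get_meminfo KEY [NODE]
    # Print the value of KEY from /proc/meminfo, or from the per-node meminfo
    # file when NODE is given (sketch only, not the exact setup/common.sh code).
    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines are prefixed with "Node N "; strip that first.
        mem=("${mem[@]#Node +([0-9]) }")
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    # Against the node0 snapshot printed above this would give, e.g.:
    #   get_meminfo HugePages_Total 0   -> 512
    #   get_meminfo HugePages_Surp  0   -> 0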
18:31:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.756 18:31:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:49.756 ************************************ 00:04:49.756 END TEST per_node_1G_alloc 00:04:49.756 ************************************ 00:04:49.756 18:31:50 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:49.756 18:31:50 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.756 18:31:50 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.756 18:31:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:49.756 ************************************ 00:04:49.756 START TEST even_2G_alloc 00:04:49.756 ************************************ 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.756 18:31:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:50.014 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:50.014 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:50.951 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:50.951 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:50.951 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:50.951 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:50.951 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:50.951 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:50.951 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:50.951 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:50.951 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:50.951 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:50.951 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:50.951 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:50.951 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.951 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.951 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.951 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.951 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.951 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.951 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.951 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4693460 kB' 'MemAvailable: 9482396 kB' 'Buffers: 36076 kB' 'Cached: 4880676 kB' 'SwapCached: 0 kB' 'Active: 1034504 kB' 'Inactive: 4015940 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 144292 kB' 'Active(file): 1033444 kB' 'Inactive(file): 3871648 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 162916 kB' 'Mapped: 68088 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271656 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 66508 kB' 'KernelStack: 4372 kB' 'PageTables: 3324 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19564 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.952 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.953 18:31:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.953 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4693724 kB' 'MemAvailable: 9482664 kB' 'Buffers: 36076 kB' 'Cached: 4880680 kB' 'SwapCached: 0 kB' 'Active: 1034504 kB' 'Inactive: 4016120 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 144468 kB' 'Active(file): 1033444 kB' 'Inactive(file): 3871652 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 163112 kB' 'Mapped: 68088 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271656 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 66508 kB' 'KernelStack: 4420 kB' 'PageTables: 3432 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19564 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: setup/common.sh@31-32 reads each '<key>: <value>' pair of that snapshot with IFS=': ' and hits continue for every key from MemTotal through HugePages_Rsvd, none of which matches HugePages_Surp]
00:04:50.955 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:50.955 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:50.955 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:50.955 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
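The lookup that just completed (get_meminfo HugePages_Surp returning 0) is the pattern every one of these traces follows: read '<key>: <value>' pairs with IFS=': ', skip non-matching keys with continue, and echo the value of the requested key. A minimal standalone sketch of that pattern, assuming a simplified form that reads /proc/meminfo directly (the function name lookup_meminfo is illustrative, not SPDK's setup/common.sh helper, which reads the file into an array first):

```bash
#!/usr/bin/env bash
# Simplified sketch of the lookup pattern in the trace above (illustrative,
# not the repo's setup/common.sh): split each '<key>: <value>' line on ': ',
# skip non-matching keys with continue, print the value once the key matches.
lookup_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # mirrors the [[ ... == ... ]] / continue pairs above
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1   # requested key not present
}

lookup_meminfo HugePages_Surp   # prints 0 on the machine captured in this log
```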
00:04:50.955 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace condensed: setup/common.sh@17-31 sets get=HugePages_Rsvd, leaves node empty so mem_f stays /proc/meminfo, and reads the snapshot below into the mem array]
00:04:50.955 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4693724 kB' 'MemAvailable: 9482664 kB' 'Buffers: 36076 kB' 'Cached: 4880680 kB' 'SwapCached: 0 kB' 'Active: 1034504 kB' 'Inactive: 4016004 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 144352 kB' 'Active(file): 1033444 kB' 'Inactive(file): 3871652 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 162984 kB' 'Mapped: 68128 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271656 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 66508 kB' 'KernelStack: 4356 kB' 'PageTables: 3276 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19564 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: setup/common.sh@31-32 compares each key against HugePages_Rsvd and continues past every mismatch]
00:04:51.218 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:51.218 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:51.218 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:51.218 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:51.218 nr_hugepages=1024
00:04:51.218 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:51.218 resv_hugepages=0
00:04:51.218 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:51.218 surplus_hugepages=0
00:04:51.218 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:51.218 anon_hugepages=0
00:04:51.218 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:51.218 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:51.218 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
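With surp, resv and nr_hugepages in hand, the check traced at setup/hugepages.sh@107-110 verifies that the kernel-reported hugepage pool adds up to the requested even-2G allocation. A hedged sketch of that invariant using the values echoed above (the standalone script form is an assumption; only the variable names and the numbers come from this log):

```bash
#!/usr/bin/env bash
# Sketch of the accounting check around setup/hugepages.sh@107-110, with the
# values this log reports; the wrapper script itself is illustrative.
nr_hugepages=1024   # requested pool: 1024 pages x 2048 kB = 2 GiB (Hugetlb: 2097152 kB)
surp=0              # get_meminfo HugePages_Surp
resv=0              # get_meminfo HugePages_Rsvd
total=1024          # get_meminfo HugePages_Total

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool consistent: ${total} pages"
else
    echo "hugepage pool mismatch: ${total} != $((nr_hugepages + surp + resv))" >&2
    exit 1
fi
```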
00:04:51.218 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace condensed: setup/common.sh@17-31 sets get=HugePages_Total, node again stays empty so mem_f remains /proc/meminfo, and the snapshot below is read into the mem array]
00:04:51.218 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4693964 kB' 'MemAvailable: 9482900 kB' 'Buffers: 36076 kB' 'Cached: 4880676 kB' 'SwapCached: 0 kB' 'Active: 1034500 kB' 'Inactive: 4015724 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 144076 kB' 'Active(file): 1033444 kB' 'Inactive(file): 3871648 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 344 kB' 'Writeback: 0 kB' 'AnonPages: 162952 kB' 'Mapped: 68088 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271656 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 66508 kB' 'KernelStack: 4344 kB' 'PageTables: 3332 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: setup/common.sh@31-32 compares each key against HugePages_Total and continues past every mismatch]
00:04:51.219 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:51.219 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:51.219 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:51.219 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:51.219 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:51.219 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:51.219 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:51.219 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:51.219 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:51.219 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:51.219 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:51.219 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:51.219 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[xtrace condensed: setup/common.sh@17-29 sets get=HugePages_Surp and node=0, finds /sys/devices/system/node/node0/meminfo, switches mem_f to it, and reads the per-node snapshot below into the mem array]
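The per-node pass that starts here differs from the system-wide lookups only in which file is read: with a node argument of 0, the trace switches mem_f from /proc/meminfo to that node's sysfs meminfo. A small sketch of that selection, assuming a simplified helper (pick_meminfo_file is an illustrative name and the exact fallback behaviour is an assumption, not SPDK's code):

```bash
#!/usr/bin/env bash
# Sketch of the source-file selection traced at setup/common.sh@22-24: no node
# argument -> system-wide /proc/meminfo; a node number -> that node's sysfs
# meminfo. The helper name and exact fallback behaviour are illustrative.
pick_meminfo_file() {
    local node=$1
    local node_f=/sys/devices/system/node/node${node}/meminfo
    if [[ -n $node && -e $node_f ]]; then
        echo "$node_f"
    else
        echo /proc/meminfo
    fi
}

pick_meminfo_file      # -> /proc/meminfo (node unset, as in the lookups above)
pick_meminfo_file 0    # -> /sys/devices/system/node/node0/meminfo (this per-node pass)
```

The per-node file prefixes every line with "Node 0 ", which the traced mem=("${mem[@]#Node +([0-9]) }") step strips before the same key scan runs.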
4015928 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 144280 kB' 'Active(file): 1033444 kB' 'Inactive(file): 3871648 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 344 kB' 'Writeback: 0 kB' 'FilePages: 4916752 kB' 'Mapped: 68088 kB' 'AnonPages: 163156 kB' 'Shmem: 2596 kB' 'KernelStack: 4380 kB' 'PageTables: 3236 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 205148 kB' 'Slab: 271656 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 66508 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 
18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.220 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.221 node0=1024 expecting 1024 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:51.221 00:04:51.221 real 0m1.439s 00:04:51.221 user 0m0.362s 00:04:51.221 sys 0m1.128s 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.221 18:31:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:51.221 ************************************ 00:04:51.221 END TEST even_2G_alloc 00:04:51.221 ************************************ 00:04:51.221 18:31:51 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:51.221 18:31:51 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.221 18:31:51 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.221 18:31:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:51.221 ************************************ 00:04:51.221 START TEST odd_alloc 00:04:51.221 ************************************ 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:51.221 
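For reference, the per-field scans that dominate the trace above are all the same get_meminfo helper from setup/common.sh walking a meminfo file line by line. A minimal bash reconstruction, inferred from the xtrace output rather than copied from the source, looks like this:

  #!/usr/bin/env bash
  shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

  # get_meminfo FIELD [NODE] - print the value of FIELD (e.g. HugePages_Total),
  # read from /proc/meminfo or, when NODE is given and its sysfs file exists,
  # from /sys/devices/system/node/nodeN/meminfo.
  get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo mem

    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node N "; strip it so the field
    # names match the /proc/meminfo spelling the callers compare against.
    mem=("${mem[@]#Node +([0-9]) }")

    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
  }

  # Mirrors the even_2G_alloc check completed above: the node 0 count should
  # equal the system-wide total when all 1024 pages land on the only node.
  echo "node0=$(get_meminfo HugePages_Total 0) expecting $(get_meminfo HugePages_Total)"
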
18:31:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.221 18:31:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:51.480 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:51.738 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:51.998 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:51.998 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:51.998 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:51.998 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:51.998 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:51.998 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:51.998 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:51.998 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.261 
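The odd_alloc stage that has just started requests 2098176 kB of hugepage memory (HUGEMEM=2049, i.e. 2049 MB), which the harness turns into 1025 pages of 2048 kB (2099200 kB, visible later as Hugetlb in the meminfo dumps), and reruns scripts/setup.sh with HUGE_EVEN_ALLOC=yes so the pages are spread evenly across NUMA nodes; on this single-node VM they all land on node 0. A rough stand-alone equivalent of what the harness drives here (a sketch, not its exact code path; the variable names are the ones visible in the trace):

  # reserve ~2049 MB of 2 MB hugepages, spread evenly across nodes
  HUGEMEM=2049 HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh

  # the verification that follows in the log re-reads these counters
  grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb' /proc/meminfo
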
18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4693148 kB' 'MemAvailable: 9482088 kB' 'Buffers: 36076 kB' 'Cached: 4880680 kB' 'SwapCached: 0 kB' 'Active: 1034500 kB' 'Inactive: 4013160 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 141512 kB' 'Active(file): 1033448 kB' 'Inactive(file): 3871648 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 356 kB' 'Writeback: 0 kB' 'AnonPages: 160208 kB' 'Mapped: 67212 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271224 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 66076 kB' 'KernelStack: 4412 kB' 'PageTables: 3592 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 490508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19500 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.261 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4693560 kB' 'MemAvailable: 9482496 kB' 'Buffers: 36076 kB' 'Cached: 4880676 kB' 'SwapCached: 0 kB' 
'Active: 1034504 kB' 'Inactive: 4012916 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 141272 kB' 'Active(file): 1033448 kB' 'Inactive(file): 3871644 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 356 kB' 'Writeback: 0 kB' 'AnonPages: 159860 kB' 'Mapped: 67212 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 270960 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 65812 kB' 'KernelStack: 4316 kB' 'PageTables: 3168 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 490508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19500 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.262 18:31:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.262 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.263 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4693588 kB' 'MemAvailable: 9482524 kB' 'Buffers: 36076 kB' 'Cached: 4880676 kB' 'SwapCached: 0 kB' 'Active: 1034496 kB' 'Inactive: 4012860 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141216 kB' 'Active(file): 1033448 kB' 'Inactive(file): 3871644 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 356 kB' 'Writeback: 0 kB' 'AnonPages: 159820 kB' 'Mapped: 67196 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 270992 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 65844 kB' 'KernelStack: 4304 kB' 'PageTables: 3292 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 490508 kB' 'VmallocTotal: 34359738367 
kB' 'VmallocUsed: 19516 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.264 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.265 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.266 18:31:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:52.266 nr_hugepages=1025 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:52.266 resv_hugepages=0 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:52.266 surplus_hugepages=0 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:52.266 anon_hugepages=0 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4694344 kB' 'MemAvailable: 9483280 kB' 'Buffers: 36076 kB' 'Cached: 4880676 kB' 'SwapCached: 0 kB' 'Active: 1034496 kB' 'Inactive: 4012832 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141188 kB' 'Active(file): 1033448 kB' 'Inactive(file): 3871644 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 356 kB' 'Writeback: 0 kB' 'AnonPages: 160052 kB' 'Mapped: 67196 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 270992 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 65844 kB' 'KernelStack: 4288 kB' 'PageTables: 3244 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 490508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.266 18:31:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.266 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4694364 kB' 'MemUsed: 7548612 kB' 'SwapCached: 0 kB' 'Active: 1034496 kB' 'Inactive: 4012832 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141188 kB' 'Active(file): 1033448 kB' 'Inactive(file): 3871644 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 356 kB' 'Writeback: 0 kB' 'FilePages: 4916752 kB' 'Mapped: 67196 kB' 'AnonPages: 159792 kB' 'Shmem: 2596 kB' 'KernelStack: 4356 kB' 'PageTables: 3504 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 205148 kB' 'Slab: 270992 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 65844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.267 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.268 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.269 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.269 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.269 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.269 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.269 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.269 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.269 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.269 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.269 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.269 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.269 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.269 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.269 18:31:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:52.269 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.269 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.269 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.269 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.269 node0=1025 expecting 1025 00:04:52.269 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:52.269 18:31:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:52.269 00:04:52.269 real 0m1.049s 00:04:52.269 user 0m0.326s 00:04:52.269 sys 0m0.779s 00:04:52.269 18:31:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.269 18:31:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:52.269 ************************************ 00:04:52.269 END TEST odd_alloc 00:04:52.269 ************************************ 00:04:52.269 18:31:52 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:52.269 18:31:52 
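The odd_alloc trace above is setup/common.sh's get_meminfo loop run three times (HugePages_Surp, HugePages_Rsvd, HugePages_Total): each /proc/meminfo line is split with IFS=': ', every key other than the requested one falls through to the `continue` branch, and the matching value is echoed back so hugepages.sh can confirm the odd-sized allocation of 1025 pages is fully accounted for (surp=0, resv=0, total=1025) before printing "node0=1025 expecting 1025". A minimal standalone sketch of that read-and-check pattern follows; it is a simplified illustration, not the repo's setup/common.sh (the real helper uses mapfile plus an extglob expansion to strip the "Node <n>" prefix from per-node meminfo files), and get_meminfo_sketch is a hypothetical name.

#!/usr/bin/env bash
# Simplified sketch of a get_meminfo-style reader (hypothetical helper name).
get_meminfo_sketch() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node stats live under sysfs when a node index is supplied.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == Node ]]; then
            # Per-node files prefix each line with "Node <n> "; drop it.
            IFS=': ' read -r var val _ <<<"$_"
        fi
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done <"$mem_f"
    return 1
}

# Accounting check mirroring the arithmetic visible in the trace above.
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
total=$(get_meminfo_sketch HugePages_Total)
(( total == 1025 + surp + resv )) && echo "node0=$total expecting 1025"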
setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.269 18:31:52 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.269 18:31:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:52.269 ************************************ 00:04:52.269 START TEST custom_alloc 00:04:52.269 ************************************ 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:52.269 18:31:52 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.269 18:31:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:52.837 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:52.837 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:53.100 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:53.100 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:53.100 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:53.100 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:53.100 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- 
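
The custom_alloc setup above resolves a 1 GiB request into 512 two-megabyte pages, records the per-node split in HUGENODE, and then calls scripts/setup.sh so the reservation is actually applied. Stripped of the tracing, the invocation this log is exercising looks roughly like the following (path and page count copied from the trace; would need to run as root on a real system):

    HUGENODE='nodes_hp[0]=512' /home/vagrant/spdk_repo/spdk/scripts/setup.sh
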
setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5745624 kB' 'MemAvailable: 10534564 kB' 'Buffers: 36076 kB' 'Cached: 4880680 kB' 'SwapCached: 0 kB' 'Active: 1034504 kB' 'Inactive: 4013476 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 141828 kB' 'Active(file): 1033448 kB' 'Inactive(file): 3871648 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 360 kB' 'Writeback: 0 kB' 'AnonPages: 160648 kB' 'Mapped: 67484 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271048 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 65900 kB' 'KernelStack: 4424 kB' 'PageTables: 3592 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 490508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19484 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.101 18:31:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.101 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 
18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.102 
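
At this point the script has established anon=0 and repeats the same lookup for HugePages_Surp. The pattern behind each of these queries is a straight scan of /proc/meminfo: read key/value pairs with IFS set to colon-plus-space and stop at the requested key. A simplified, system-wide-only sketch (the real setup/common.sh also accepts a node argument and strips the "Node N" prefix from the per-node meminfo file, which is omitted here):

    get_meminfo() {
        local get=$1 var val _
        # Scan "Key: value [kB]" pairs until the requested key matches.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 on the machine in this log
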
18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5745612 kB' 'MemAvailable: 10534552 kB' 'Buffers: 36076 kB' 'Cached: 4880680 kB' 'SwapCached: 0 kB' 'Active: 1034504 kB' 'Inactive: 4013076 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 141428 kB' 'Active(file): 1033448 kB' 'Inactive(file): 3871648 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 160300 kB' 'Mapped: 67224 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271048 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 65900 kB' 'KernelStack: 4312 kB' 'PageTables: 3420 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 490508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19500 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.102 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.103 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # 
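
The snapshot just parsed is internally consistent: 512 hugepages of 2048 kB each account for exactly the Hugetlb figure it reports. A quick check using only the values printed in the trace:

    pages=512; page_kb=2048
    echo $(( pages * page_kb ))   # 1048576 kB, matching the Hugetlb: field
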
mem_f=/proc/meminfo 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5745844 kB' 'MemAvailable: 10534784 kB' 'Buffers: 36076 kB' 'Cached: 4880680 kB' 'SwapCached: 0 kB' 'Active: 1034504 kB' 'Inactive: 4013092 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 141444 kB' 'Active(file): 1033448 kB' 'Inactive(file): 3871648 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 160072 kB' 'Mapped: 67224 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271128 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 65980 kB' 'KernelStack: 4332 kB' 'PageTables: 3120 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 490508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19500 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.104 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.105 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:53.106 nr_hugepages=512 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:53.106 resv_hugepages=0 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:53.106 surplus_hugepages=0 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:53.106 anon_hugepages=0 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5745620 kB' 'MemAvailable: 10534560 kB' 'Buffers: 36076 kB' 'Cached: 4880680 kB' 'SwapCached: 0 kB' 'Active: 1034492 kB' 'Inactive: 4012996 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 141348 kB' 'Active(file): 1033448 kB' 'Inactive(file): 3871648 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 356 kB' 'Writeback: 0 kB' 'AnonPages: 159968 kB' 'Mapped: 67196 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271176 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 66028 kB' 'KernelStack: 4292 kB' 'PageTables: 3304 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 490508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19516 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.106 18:31:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.106 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 
18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 
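(Annotation) Once HugePages_Total comes back as 512, hugepages.sh@110 re-checks the kernel's bookkeeping (the total must equal requested pages plus surplus plus reserved), and get_nodes at @112 discovers NUMA nodes by globbing sysfs; on this single-node VM that yields no_nodes=1 with 512 pages expected on node 0. Roughly, reusing the names from the trace (a sketch, not the script itself):

    nr_hugepages=512 surp=0 resv=0
    (( 512 == nr_hugepages + surp + resv )) || echo "hugepage accounting is off"

    shopt -s extglob                               # needed for the +([0-9]) glob below
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512              # expected pages per node
    done
    echo "no_nodes=${#nodes_sys[@]}"               # -> no_nodes=1 here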
00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.107 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5745368 kB' 'MemUsed: 6497608 kB' 'SwapCached: 0 kB' 'Active: 1034496 kB' 'Inactive: 4013220 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141572 kB' 'Active(file): 1033448 kB' 'Inactive(file): 3871648 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 356 kB' 'Writeback: 0 kB' 'FilePages: 4916756 kB' 'Mapped: 67196 kB' 'AnonPages: 160216 kB' 'Shmem: 2596 kB' 'KernelStack: 4336 kB' 'PageTables: 3392 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 205148 kB' 'Slab: 271176 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 66028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.108 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
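(Annotation) For the HugePages_Surp lookup on node 0 running above, setup/common.sh@23-24 switched from /proc/meminfo to /sys/devices/system/node/node0/meminfo, and @29 strips the "Node 0 " prefix sysfs puts in front of every field so the same key/value parsing can be reused. A sketch of just that prefix handling (the parameter expansion is the one shown in the trace):

    node=0
    mem_f=/sys/devices/system/node/node${node}/meminfo
    shopt -s extglob
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")            # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    printf '%s\n' "${mem[@]}" | grep -E '^HugePages_'   # Total/Free/Surp for this node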
00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:53.109 node0=512 expecting 512 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:53.109 00:04:53.109 real 0m0.857s 00:04:53.109 user 0m0.289s 00:04:53.109 sys 0m0.623s 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.109 18:31:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:53.109 ************************************ 00:04:53.109 END TEST custom_alloc 00:04:53.109 ************************************ 00:04:53.109 18:31:53 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:53.109 18:31:53 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.109 18:31:53 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.109 18:31:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:53.368 ************************************ 00:04:53.368 START TEST no_shrink_alloc 00:04:53.368 ************************************ 00:04:53.368 18:31:53 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:53.368 18:31:53 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:53.368 18:31:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:53.368 18:31:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:53.368 18:31:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:53.368 18:31:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:53.368 18:31:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:53.368 18:31:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:53.368 18:31:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:53.368 18:31:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:53.368 18:31:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:53.368 18:31:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:53.368 18:31:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:53.368 18:31:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:53.368 18:31:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:53.368 18:31:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:53.368 18:31:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:53.368 18:31:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:53.368 18:31:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:53.368 18:31:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:53.368 18:31:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:53.368 18:31:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.368 18:31:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:53.626 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:53.626 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@17 -- # local get=AnonHugePages 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4698508 kB' 'MemAvailable: 9487448 kB' 'Buffers: 36084 kB' 'Cached: 4880672 kB' 'SwapCached: 0 kB' 'Active: 1034524 kB' 'Inactive: 4013220 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 141596 kB' 'Active(file): 1033472 kB' 'Inactive(file): 3871624 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 92 kB' 'Writeback: 0 kB' 'AnonPages: 160204 kB' 'Mapped: 67224 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271268 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 66120 kB' 'KernelStack: 4240 kB' 'PageTables: 3112 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 490508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19468 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.564 18:31:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.564 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.565 18:31:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
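(Annotation) The no_shrink_alloc test running above was set up by get_test_nr_hugepages 2097152 0 (hugepages.sh@195): with the default 2048 kB hugepage size reported in the meminfo snapshots, a 2097152 kB request works out to 1024 pages pinned to node 0, which is why @57 sets nr_hugepages=1024 and the snapshot shows HugePages_Total: 1024 and Hugetlb: 2097152 kB. The arithmetic, as a sketch using the helper sketched earlier:

    size_kb=2097152
    hugepagesize_kb=$(get_meminfo_sketch Hugepagesize)   # 2048 on this box
    echo $(( size_kb / hugepagesize_kb ))                # -> 1024 pages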
00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
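The repeated IFS=': ' / read -r var val _ / continue entries traced above are setup/common.sh's get_meminfo helper walking every /proc/meminfo key until it reaches the one requested (AnonHugePages in this pass). A minimal sketch of that scan pattern, assuming a plain "Key: value kB" meminfo layout; the function name get_meminfo_sketch and its standalone argument handling are illustrative, not the SPDK helper itself:

get_meminfo_sketch() {
    # $1 is the meminfo key to look up, e.g. AnonHugePages or HugePages_Surp.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Every key that does not match produces one "continue" entry in the trace.
        [[ $var == "$get" ]] || continue
        # read already split "Key: value kB", so val holds the bare number.
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

Run as "get_meminfo_sketch AnonHugePages" it would print 0 on this node, matching the echo 0 / return 0 / anon=0 entries that follow.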
00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4698508 kB' 'MemAvailable: 9487448 kB' 'Buffers: 36084 kB' 'Cached: 4880672 kB' 'SwapCached: 0 kB' 'Active: 1034524 kB' 'Inactive: 4013220 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 141596 kB' 'Active(file): 1033472 kB' 'Inactive(file): 3871624 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 92 kB' 'Writeback: 0 kB' 'AnonPages: 159944 kB' 'Mapped: 67224 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271268 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 66120 kB' 'KernelStack: 4240 kB' 'PageTables: 3112 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 490508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19484 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.565 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.566 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
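The opening of each lookup (the common.sh@22 through @29 entries, visible just before the HugePages_Surp scan above) also shows how the helper picks its data source: with node= left empty, /sys/devices/system/node/node/meminfo does not exist, so mem_f stays /proc/meminfo; when a node number is supplied, the per-node file would be read instead and its "Node <N> " line prefix stripped by the extglob substitution at common.sh@29. A small sketch of that source selection, with the wrapper function and its node argument purely illustrative:

shopt -s extglob    # needed for the +([0-9]) pattern below

pick_meminfo_source() {
    local node=$1 mem_f mem
    mem_f=/proc/meminfo
    # Prefer the per-NUMA-node file when a node number is given and it exists.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; drop that prefix so the
    # same "Key: value" scan works for both sources (this is the @29 expansion).
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
}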
00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.567 18:31:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4698284 kB' 'MemAvailable: 9487224 kB' 'Buffers: 36084 kB' 'Cached: 4880672 kB' 'SwapCached: 0 kB' 'Active: 1034516 kB' 'Inactive: 4012960 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 141336 kB' 'Active(file): 1033472 kB' 'Inactive(file): 3871624 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 96 kB' 'Writeback: 0 kB' 'AnonPages: 159948 kB' 'Mapped: 67196 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271268 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 66120 kB' 'KernelStack: 4240 kB' 'PageTables: 3088 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 490508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19500 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 
kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.567 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
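Around these scans, setup/hugepages.sh is collecting the hugepage counters one at a time: anon=0 came out of the AnonHugePages lookup, surp=0 out of the HugePages_Surp lookup above, and the HugePages_Rsvd scan still in progress here yields resv, after which the script echoes nr_hugepages/resv/surplus/anon and runs the consistency checks seen at hugepages.sh@107 and @109 further down. A condensed sketch of that bookkeeping, assuming the illustrative get_meminfo_sketch helper from the earlier sketch is in scope and using the 1024-page pool this log reports:

nr_hugepages=1024                            # pool size echoed later in this trace
anon=$(get_meminfo_sketch AnonHugePages)     # 0 in this run
surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in this run

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# The checks traced at hugepages.sh@107/@109 compare an expected total (shown
# already expanded to 1024 in the log) against the collected counters.
expected=1024
(( expected == nr_hugepages + surp + resv )) || exit 1
(( expected == nr_hugepages )) || exit 1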
00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.568 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.569 18:31:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:54.569 nr_hugepages=1024 00:04:54.569 resv_hugepages=0 00:04:54.569 surplus_hugepages=0 00:04:54.569 anon_hugepages=0 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4699068 kB' 'MemAvailable: 9488008 kB' 'Buffers: 36084 kB' 'Cached: 
4880672 kB' 'SwapCached: 0 kB' 'Active: 1034516 kB' 'Inactive: 4012756 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 141132 kB' 'Active(file): 1033472 kB' 'Inactive(file): 3871624 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 96 kB' 'Writeback: 0 kB' 'AnonPages: 159972 kB' 'Mapped: 67196 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271236 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 66088 kB' 'KernelStack: 4276 kB' 'PageTables: 3284 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 490508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19500 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.569 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _
00:04:54.569-00:04:54.830 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # get_meminfo HugePages_Total: scanned the remaining /proc/meminfo keys (Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped) with no match, continuing on each
00:04:54.830 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:54.830 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:54.830 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:54.830 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:54.830 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:54.830 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:54.830 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:54.830 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:54.830 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:54.830 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:54.830 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:54.830 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:54.830 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:54.830 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-24 -- # get=HugePages_Surp node=0 mem_f=/sys/devices/system/node/node0/meminfo
00:04:54.830 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28-29 -- # mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
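For readability: the loop traced above is get_meminfo walking a meminfo file key by key until it reaches the requested field. The following is a simplified sketch of that lookup reconstructed from the trace; it is not the verbatim setup/common.sh source (in particular, the real script slurps the file with mapfile and strips the 'Node <n> ' prefix from per-node files, which this sketch glosses over).

    # simplified reconstruction of the lookup traced above -- not the verbatim setup/common.sh source
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ mem_f=/proc/meminfo
        # per-node queries read that node's own meminfo when it exists
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            # every key that is not the requested one shows up as a "continue" line in the xtrace
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }
    # as seen in the trace: get_meminfo HugePages_Total   -> 1024
    #                       get_meminfo HugePages_Surp 0  -> 0 (node0)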
00:04:54.831 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4698568 kB' 'MemUsed: 7544408 kB' 'SwapCached: 0 kB' 'Active: 1034516 kB' 'Inactive: 4012568 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 140944 kB' 'Active(file): 1033472 kB' 'Inactive(file): 3871624 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 96 kB' 'Writeback: 0 kB' 'FilePages: 4916756 kB' 'Mapped: 67196 kB' 'AnonPages: 159752 kB' 'Shmem: 2596 kB' 'KernelStack: 4228 kB' 'PageTables: 3140 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 205148 kB' 'Slab: 271236 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 66088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:54.831-00:04:54.832 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # get_meminfo HugePages_Surp 0: scanned the node0 keys above (MemTotal through HugePages_Free) with no match, continuing on each
00:04:54.832 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:54.832 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:54.832 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:54.832 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:54.832 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:54.832 node0=1024 expecting 1024
00:04:54.832 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:54.832 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:54.832 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:54.832 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:54.832 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:54.832 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:54.832 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:54.832 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:54.832 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:55.090 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:55.090 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:55.090 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:55.090 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:55.090 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:55.090 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:55.090 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
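The 'node0=1024 expecting 1024' line above comes from the per-node bookkeeping pass: reserved and surplus pages reported by get_meminfo are folded into the per-node expectation, which is then compared against what the kernel actually allocated. A rough sketch of that pass, with variable names taken from the trace but the arithmetic condensed (illustrative only, not the hugepages.sh source; get_meminfo refers to the sketch earlier in this section):

    # rough sketch of the per-node check traced above -- names from the trace, logic condensed
    declare -a nodes_sys nodes_test
    nodes_sys[0]=1024                 # hugepages the kernel reports allocated on node0
    nodes_test[0]=1024                # hugepages this test expects on node0
    resv=0                            # reserved pages, per get_meminfo
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=$(get_meminfo HugePages_Surp "$node")    # 0 in this run
        (( nodes_test[node] += surp ))
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    # the check passes here because node0 reports 1024 and 1024 was expected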
00:04:55.090 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92-94 -- # local surp; local resv; local anon
00:04:55.090 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:55.090 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:55.090 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-28 -- # get=AnonHugePages node= mem_f=/proc/meminfo; mapfile -t mem
00:04:55.090 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4699352 kB' 'MemAvailable: 9488300 kB' 'Buffers: 36084 kB' 'Cached: 4880680 kB' 'SwapCached: 0 kB' 'Active: 1034536 kB' 'Inactive: 4013664 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 142040 kB' 'Active(file): 1033480 kB' 'Inactive(file): 3871624 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 112 kB' 'Writeback: 0 kB' 'AnonPages: 160912 kB' 'Mapped: 67200 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271080 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 65932 kB' 'KernelStack: 4392 kB' 'PageTables: 3336 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 490508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19516 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB'
00:04:55.090-00:04:55.354 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # get_meminfo AnonHugePages: scanned the system-wide keys above (MemTotal through HardwareCorrupted) with no match, continuing on each
00:04:55.354 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:55.354 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:55.354 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:55.354 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:55.354 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:55.354 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-28 -- # get=HugePages_Surp node= mem_f=/proc/meminfo; mapfile -t mem
00:04:55.354 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4699604 kB' 'MemAvailable: 9488552 kB' 'Buffers: 36084 kB' 'Cached: 4880680 kB' 'SwapCached: 0 kB' 'Active: 1034536 kB' 'Inactive: 4013272 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 141648 kB' 'Active(file): 1033480 kB' 'Inactive(file): 3871624 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 120 kB' 'Writeback: 0 kB' 'AnonPages: 160256 kB' 'Mapped: 67160 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271080 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 65932 kB' 'KernelStack: 4356 kB' 'PageTables: 3228 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 490508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB'
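The test at setup/hugepages.sh@96 above compares the transparent-hugepage 'enabled' setting ('always [madvise] never' in this run) against *[never]*: anonymous hugepage usage is only looked up when THP is not disabled outright. A minimal sketch of that gate, assuming the standard sysfs path and the get_meminfo sketch earlier in this section:

    # minimal sketch of the THP gate traced at setup/hugepages.sh@96 (standard sysfs path assumed)
    anon=0
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" in this run
    if [[ $thp != *"[never]"* ]]; then
        # THP is not fully disabled, so anonymous hugepages are worth counting (0 kB here)
        anon=$(get_meminfo AnonHugePages)
    fi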
00:04:55.354-00:04:55.355 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # get_meminfo HugePages_Surp: scanned the system-wide keys above (MemTotal through FilePmdMapped) with no match, continuing on each
00:04:55.355 18:31:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.355 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.355 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.355 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.355 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.355 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.355 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.355 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.355 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.355 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.355 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.355 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.355 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.355 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.355 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:55.355 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4699864 kB' 'MemAvailable: 9488812 kB' 'Buffers: 36084 kB' 'Cached: 4880680 kB' 'SwapCached: 0 kB' 'Active: 1034528 kB' 'Inactive: 4013048 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141424 kB' 'Active(file): 1033480 kB' 'Inactive(file): 3871624 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 120 kB' 'Writeback: 0 kB' 'AnonPages: 160252 kB' 'Mapped: 67196 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271128 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 65980 kB' 'KernelStack: 4268 kB' 'PageTables: 3336 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 490508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19516 kB' 
'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.356 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.357 18:31:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:55.357 nr_hugepages=1024 00:04:55.357 resv_hugepages=0 00:04:55.357 surplus_hugepages=0 00:04:55.357 anon_hugepages=0 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 
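The trace up to this point is setup/common.sh's meminfo lookup: it reads /proc/meminfo into an array, then walks the keys one at a time (each "continue" above is a non-matching key) until it reaches the requested field, echoes its value and returns. Here it has just resolved HugePages_Surp=0 and HugePages_Rsvd=0 for the no_shrink_alloc check, and the script echoes the pool summary (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0). A minimal sketch of that lookup, reconstructed from the visible trace only; the real setup/common.sh handles more cases, and any name or wiring beyond what the log shows is an assumption:

shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ mem
    local mem_f=/proc/meminfo

    # per-node lookups read that node's own meminfo file when it exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # per-node files prefix every line with "Node <n> "; strip that prefix
    mem=("${mem[@]#Node +([0-9]) }")

    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every other meminfo key
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# e.g. get_meminfo HugePages_Total    -> 1024
#      get_meminfo HugePages_Surp 0   -> 0 (node 0)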
00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4700104 kB' 'MemAvailable: 9489052 kB' 'Buffers: 36084 kB' 'Cached: 4880680 kB' 'SwapCached: 0 kB' 'Active: 1034528 kB' 'Inactive: 4012844 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141220 kB' 'Active(file): 1033480 kB' 'Inactive(file): 3871624 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 120 kB' 'Writeback: 0 kB' 'AnonPages: 160088 kB' 'Mapped: 67196 kB' 'Shmem: 2596 kB' 'KReclaimable: 205148 kB' 'Slab: 271136 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 65988 kB' 'KernelStack: 4256 kB' 'PageTables: 3156 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 490508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 147308 kB' 'DirectMap2M: 4046848 kB' 'DirectMap1G: 10485760 kB' 00:04:55.357 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.358 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 18:31:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.359 18:31:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4700356 kB' 'MemUsed: 7542620 kB' 'SwapCached: 0 kB' 'Active: 1034528 kB' 'Inactive: 4012756 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 141132 kB' 'Active(file): 1033480 kB' 'Inactive(file): 3871624 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 120 kB' 'Writeback: 0 kB' 'FilePages: 4916764 kB' 'Mapped: 67196 kB' 'AnonPages: 159992 kB' 'Shmem: 2596 kB' 'KernelStack: 4256 kB' 'PageTables: 3156 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 205148 kB' 'Slab: 271136 kB' 'SReclaimable: 205148 kB' 'SUnreclaim: 65988 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.359 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
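With the global numbers in hand (HugePages_Total=1024, surplus and reserved both 0), hugepages.sh verifies that the pool adds up, enumerates the NUMA nodes under /sys/devices/system/node/ (a single node here, so no_nodes=1 and the whole 1024-page pool is attributed to node 0), and then re-reads node0's meminfo, whose lines carry a "Node 0 " prefix, to fetch that node's HugePages_Surp. A condensed sketch of that accounting, reusing the get_meminfo sketch above; the variable names come from the trace, but the surrounding glue is assumed and is not the verbatim hugepages.sh:

nr_hugepages=1024
surp=$(get_meminfo HugePages_Surp)      # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run

# the pool is consistent only if total == requested + surplus + reserved
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1

# per-node pass: each node's own meminfo is read separately for its surplus pages
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    get_meminfo HugePages_Surp "$node"
done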
00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.360 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.361 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.361 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.361 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.361 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.361 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:55.361 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.361 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.361 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.361 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.361 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:55.361 node0=1024 expecting 1024 00:04:55.361 18:31:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:55.361 00:04:55.361 real 0m2.180s 00:04:55.361 user 0m0.594s 00:04:55.361 sys 0m1.606s 00:04:55.361 18:31:55 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.361 18:31:55 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:55.361 ************************************ 00:04:55.361 END TEST no_shrink_alloc 00:04:55.361 ************************************ 00:04:55.361 18:31:55 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:55.361 18:31:55 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:55.361 18:31:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
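The wall of xtrace above is setup/common.sh stepping through every field of /sys/devices/system/node/node0/meminfo with mapfile and an IFS=': ' read loop until it reaches HugePages_Surp. The same pattern, pulled out into a standalone sketch (get_node_meminfo is a hypothetical helper name, not the function the test actually calls):

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the "Node N " prefix strip below

    # Return one field from the per-node meminfo file, falling back to
    # /proc/meminfo when the node file does not exist -- the same scan the
    # trace shows field by field.
    get_node_meminfo() {
        local field=$1 node=${2:-0}
        local mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # drop the leading "Node 0 " on per-node files

        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$field" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done
        return 1
    }

    get_node_meminfo HugePages_Free 0    # on the node in this log this would print 1024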
00:04:55.361 18:31:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:55.361 18:31:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:55.361 18:31:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:55.361 18:31:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:55.619 18:31:55 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:55.620 18:31:55 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:55.620 ************************************ 00:04:55.620 END TEST hugepages 00:04:55.620 ************************************ 00:04:55.620 00:04:55.620 real 0m8.928s 00:04:55.620 user 0m2.536s 00:04:55.620 sys 0m6.637s 00:04:55.620 18:31:55 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.620 18:31:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:55.620 18:31:55 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:55.620 18:31:55 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.620 18:31:55 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.620 18:31:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:55.620 ************************************ 00:04:55.620 START TEST driver 00:04:55.620 ************************************ 00:04:55.620 18:31:55 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:55.620 * Looking for test storage... 00:04:55.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:55.620 18:31:56 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:55.620 18:31:56 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:55.620 18:31:56 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:56.187 18:31:56 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:56.187 18:31:56 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.187 18:31:56 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.187 18:31:56 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:56.187 ************************************ 00:04:56.187 START TEST guess_driver 00:04:56.187 ************************************ 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # 
iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ N == Y ]] 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio.ko 00:04:56.187 insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:56.187 Looking for driver=uio_pci_generic 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.187 18:31:56 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:56.755 18:31:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:56.755 18:31:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:56.755 18:31:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.014 18:31:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:57.014 18:31:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:57.014 18:31:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.955 18:31:59 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:58.955 18:31:59 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:58.955 18:31:59 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:58.955 18:31:59 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:59.214 00:04:59.214 real 0m3.030s 00:04:59.214 user 0m0.529s 00:04:59.214 sys 0m2.505s 00:04:59.214 18:31:59 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.214 ************************************ 00:04:59.214 END TEST guess_driver 00:04:59.214 ************************************ 00:04:59.214 18:31:59 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:59.474 ************************************ 00:04:59.474 
END TEST driver 00:04:59.474 ************************************ 00:04:59.474 00:04:59.474 real 0m3.815s 00:04:59.474 user 0m0.856s 00:04:59.474 sys 0m3.000s 00:04:59.474 18:31:59 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.474 18:31:59 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:59.474 18:31:59 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:59.474 18:31:59 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.474 18:31:59 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.474 18:31:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:59.474 ************************************ 00:04:59.474 START TEST devices 00:04:59.474 ************************************ 00:04:59.474 18:31:59 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:59.474 * Looking for test storage... 00:04:59.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:59.474 18:31:59 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:59.474 18:31:59 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:59.474 18:31:59 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:59.474 18:31:59 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:00.042 18:32:00 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:00.042 18:32:00 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:00.042 18:32:00 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:00.042 18:32:00 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:00.042 18:32:00 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:00.042 18:32:00 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:00.042 18:32:00 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:00.042 18:32:00 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:00.042 18:32:00 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:00.042 18:32:00 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:00.042 18:32:00 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:00.042 18:32:00 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:00.042 18:32:00 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:00.042 18:32:00 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:00.042 18:32:00 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:00.042 18:32:00 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:00.042 18:32:00 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:00.042 18:32:00 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:00.042 18:32:00 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:00.042 18:32:00 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:00.042 18:32:00 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:00.042 18:32:00 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 
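Before the devices test starts above, guess_driver reduces to one decision: prefer vfio when the kernel exposes IOMMU groups (or the unsafe no-IOMMU knob reads Y), otherwise fall back to uio_pci_generic if modprobe can resolve it to .ko files. A condensed sketch of that logic, assuming nullglob as in the test environment; it is not the real setup/driver.sh:

    #!/usr/bin/env bash
    shopt -s nullglob    # so an empty iommu_groups glob really counts as zero entries

    pick_driver() {
        local unsafe_vfio=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        local -a iommu_groups=(/sys/kernel/iommu_groups/*)

        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            echo vfio-pci
            return 0
        fi

        # uio_pci_generic is only usable if modprobe resolves it to real kernel modules
        if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
            return 0
        fi

        echo 'No valid driver found' >&2
        return 1
    }

    pick_driver    # on the VM in this log: no IOMMU groups, so it prints uio_pci_generic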
00:05:00.301 No valid GPT data, bailing 00:05:00.301 18:32:00 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:00.301 18:32:00 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:00.301 18:32:00 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:00.301 18:32:00 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:00.301 18:32:00 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:00.301 18:32:00 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:00.301 18:32:00 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:00.301 18:32:00 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:00.301 18:32:00 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:00.301 18:32:00 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:00.301 18:32:00 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:00.301 18:32:00 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:00.301 18:32:00 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:00.301 18:32:00 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.301 18:32:00 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.301 18:32:00 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:00.301 ************************************ 00:05:00.301 START TEST nvme_mount 00:05:00.301 ************************************ 00:05:00.301 18:32:00 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:05:00.301 18:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:00.301 18:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:00.301 18:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.301 18:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:00.301 18:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:00.301 18:32:00 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:00.301 18:32:00 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:00.301 18:32:00 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:00.301 18:32:00 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:00.301 18:32:00 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:00.301 18:32:00 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:00.301 18:32:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:00.301 18:32:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:00.301 18:32:00 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:00.301 18:32:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:00.301 18:32:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:00.301 18:32:00 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:00.301 18:32:00 setup.sh.devices.nvme_mount -- 
setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:00.302 18:32:00 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:01.237 Creating new GPT entries in memory. 00:05:01.237 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:01.237 other utilities. 00:05:01.237 18:32:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:01.237 18:32:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:01.237 18:32:01 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:01.237 18:32:01 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:01.237 18:32:01 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:02.174 Creating new GPT entries in memory. 00:05:02.174 The operation has completed successfully. 00:05:02.174 18:32:02 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:02.174 18:32:02 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:02.174 18:32:02 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 104586 00:05:02.174 18:32:02 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:02.174 18:32:02 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:02.174 18:32:02 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:02.174 18:32:02 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:02.174 18:32:02 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:02.433 18:32:02 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:02.433 18:32:02 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:10.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:02.433 18:32:02 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:02.433 18:32:02 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:02.433 18:32:02 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:02.433 18:32:02 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:02.433 18:32:02 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:02.433 18:32:02 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:02.433 18:32:02 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:02.433 18:32:02 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:02.433 18:32:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.433 18:32:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:02.433 18:32:02 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:02.433 18:32:02 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.433 18:32:02 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:02.693 18:32:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:02.693 18:32:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:02.693 18:32:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:02.693 18:32:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.693 18:32:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:02.693 18:32:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.693 18:32:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:02.693 18:32:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.597 18:32:04 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:04.597 18:32:04 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:04.597 18:32:04 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:04.597 18:32:04 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:04.597 18:32:04 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:04.597 18:32:04 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:04.597 18:32:04 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:04.597 18:32:04 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:04.597 18:32:04 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.597 18:32:04 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:04.597 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:04.597 18:32:04 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:04.597 18:32:04 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:04.597 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:04.597 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:04.597 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:04.597 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:04.597 18:32:05 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:04.597 18:32:05 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:04.597 18:32:05 setup.sh.devices.nvme_mount 
-- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:04.597 18:32:05 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:04.597 18:32:05 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:04.597 18:32:05 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:04.597 18:32:05 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:10.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:04.597 18:32:05 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:04.597 18:32:05 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:04.597 18:32:05 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:04.597 18:32:05 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:04.597 18:32:05 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:04.597 18:32:05 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:04.597 18:32:05 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:04.597 18:32:05 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:04.597 18:32:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:04.597 18:32:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:04.597 18:32:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.597 18:32:05 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.597 18:32:05 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:04.856 18:32:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:04.856 18:32:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:04.856 18:32:05 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:04.856 18:32:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.856 18:32:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:04.856 18:32:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.116 18:32:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:05.116 18:32:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.495 18:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:06.495 18:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:06.495 18:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:06.495 18:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@73 
-- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:06.495 18:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:06.495 18:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:06.495 18:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:10.0 data@nvme0n1 '' '' 00:05:06.495 18:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:06.495 18:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:06.495 18:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:06.495 18:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:06.495 18:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:06.495 18:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:06.495 18:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:06.495 18:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.495 18:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:06.495 18:32:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:06.495 18:32:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.495 18:32:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:06.755 18:32:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:06.755 18:32:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:06.755 18:32:07 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:06.755 18:32:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.755 18:32:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:06.755 18:32:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.755 18:32:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:06.755 18:32:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.661 18:32:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:08.661 18:32:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:08.661 18:32:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:08.661 18:32:08 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:08.661 18:32:08 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.661 18:32:08 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.661 18:32:08 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:08.661 18:32:08 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:08.661 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:08.661 
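The nvme_mount run above follows one sequence: wipe the disk, create a single GPT partition, put ext4 on it, mount it, drop a marker file, then unmount and wipe on teardown. A bare-bones, destructive sketch of that flow; the mount point here is illustrative, and partprobe stands in for the sync_dev_uevents.sh helper the test really uses:

    #!/usr/bin/env bash
    # Do not point this at a disk you care about.
    set -euo pipefail

    disk=/dev/nvme0n1          # the test disk selected earlier in the log
    part=${disk}p1
    mnt=/tmp/nvme_mount        # the test mounts under .../test/setup/nvme_mount

    sgdisk "$disk" --zap-all               # drop any old GPT/MBR structures
    sgdisk "$disk" --new=1:2048:264191     # same first-partition bounds as the trace
    partprobe "$disk"                      # the test waits on udev events instead

    mkfs.ext4 -qF "$part"
    mkdir -p "$mnt"
    mount "$part" "$mnt"
    touch "$mnt/test_nvme"                 # marker file the test later verifies

    # teardown, mirroring cleanup_nvme
    rm "$mnt/test_nvme"
    umount "$mnt"
    wipefs --all "$part"
    wipefs --all "$disk"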
00:05:08.661 real 0m8.335s 00:05:08.661 user 0m0.761s 00:05:08.661 sys 0m5.626s 00:05:08.661 18:32:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.661 18:32:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:08.661 ************************************ 00:05:08.661 END TEST nvme_mount 00:05:08.661 ************************************ 00:05:08.661 18:32:09 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:08.661 18:32:09 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.661 18:32:09 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.661 18:32:09 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:08.661 ************************************ 00:05:08.661 START TEST dm_mount 00:05:08.661 ************************************ 00:05:08.661 18:32:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:08.661 18:32:09 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:08.661 18:32:09 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:08.661 18:32:09 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:08.661 18:32:09 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:08.661 18:32:09 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:08.661 18:32:09 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:08.661 18:32:09 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:08.661 18:32:09 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:08.661 18:32:09 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:08.661 18:32:09 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:08.661 18:32:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:08.661 18:32:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:08.661 18:32:09 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:08.661 18:32:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:08.661 18:32:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:08.661 18:32:09 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:08.661 18:32:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:08.661 18:32:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:08.661 18:32:09 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:08.661 18:32:09 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:08.661 18:32:09 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:09.598 Creating new GPT entries in memory. 00:05:09.598 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:09.598 other utilities. 00:05:09.599 18:32:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:09.599 18:32:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:09.599 18:32:10 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:09.599 18:32:10 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:09.599 18:32:10 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:10.976 Creating new GPT entries in memory. 00:05:10.976 The operation has completed successfully. 00:05:10.976 18:32:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:10.976 18:32:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:10.976 18:32:11 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:10.976 18:32:11 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:10.976 18:32:11 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:11.914 The operation has completed successfully. 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 105103 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:11.914 
18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:10.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.914 18:32:12 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:12.174 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:12.174 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:12.174 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:12.174 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.174 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:12.174 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.433 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:12.433 18:32:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:10.0 
holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:14.405 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.665 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:14.665 18:32:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.572 18:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:16.572 18:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:16.572 18:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:16.572 18:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:16.572 18:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:16.572 18:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:16.572 18:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:16.572 18:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:16.572 18:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:16.572 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:16.572 18:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:16.572 18:32:16 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:16.572 00:05:16.572 real 0m7.762s 
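dm_mount repeats the same idea one layer up: two GPT partitions are combined under a device-mapper node, and the mapper device is then formatted, mounted and torn down like a plain disk. The table handed to dmsetup is not visible in this excerpt, so the sketch below assumes a linear concatenation of the two partitions:

    #!/usr/bin/env bash
    set -euo pipefail

    disk=/dev/nvme0n1
    p1=${disk}p1 p2=${disk}p2
    name=nvme_dm_test

    sgdisk "$disk" --zap-all
    sgdisk "$disk" --new=1:2048:264191
    sgdisk "$disk" --new=2:264192:526335
    partprobe "$disk"

    s1=$(blockdev --getsz "$p1")    # partition sizes in 512-byte sectors
    s2=$(blockdev --getsz "$p2")

    # Assumed table: the two partitions stitched together as one linear device.
    dmsetup create "$name" <<EOF
    0 $s1 linear $p1 0
    $s1 $s2 linear $p2 0
    EOF

    dm=$(basename "$(readlink -f "/dev/mapper/$name")")     # e.g. dm-0
    [[ -e /sys/class/block/${p1##*/}/holders/$dm ]]          # partitions now list dm-0 as holder

    mkfs.ext4 -qF "/dev/mapper/$name"

    # teardown, mirroring cleanup_dm
    dmsetup remove --force "$name"
    wipefs --all "$p1" "$p2"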
00:05:16.572 user 0m0.478s 00:05:16.572 sys 0m4.126s 00:05:16.572 18:32:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.572 18:32:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:16.572 ************************************ 00:05:16.572 END TEST dm_mount 00:05:16.572 ************************************ 00:05:16.572 18:32:16 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:16.572 18:32:16 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:16.572 18:32:16 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.572 18:32:16 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:16.572 18:32:16 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:16.572 18:32:16 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:16.572 18:32:16 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:16.572 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:16.572 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:16.572 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:16.572 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:16.572 18:32:16 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:16.572 18:32:16 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:16.572 18:32:16 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:16.572 18:32:16 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:16.572 18:32:16 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:16.572 18:32:16 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:16.572 18:32:16 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:16.572 00:05:16.572 real 0m17.098s 00:05:16.572 user 0m1.692s 00:05:16.572 sys 0m10.296s 00:05:16.572 18:32:16 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.572 ************************************ 00:05:16.572 END TEST devices 00:05:16.572 ************************************ 00:05:16.572 18:32:16 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:16.572 ************************************ 00:05:16.572 END TEST setup.sh 00:05:16.572 ************************************ 00:05:16.572 00:05:16.572 real 0m36.850s 00:05:16.572 user 0m6.947s 00:05:16.572 sys 0m25.283s 00:05:16.572 18:32:17 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.572 18:32:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:16.572 18:32:17 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:17.141 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:17.141 Hugepages 00:05:17.141 node hugesize free / total 00:05:17.141 node0 1048576kB 0 / 0 00:05:17.141 node0 2048kB 2048 / 2048 00:05:17.141 00:05:17.141 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:17.141 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:17.400 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:17.400 18:32:17 -- spdk/autotest.sh@130 -- # uname -s 00:05:17.400 18:32:17 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 
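The setup.sh status block above ('node0 2048kB 2048 / 2048', plus the PCI device table) is built from plain sysfs counters; the hugepage half can be reproduced with nothing but reads from /sys:

    #!/usr/bin/env bash
    # Print "node size free / total" per NUMA node and hugepage size, the
    # numbers behind the "node0 2048kB 2048 / 2048" line in setup.sh status.
    shopt -s nullglob
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            size=${hp##*hugepages-}              # e.g. 2048kB
            total=$(< "$hp/nr_hugepages")
            free=$(< "$hp/free_hugepages")
            printf '%s %s %s / %s\n' "${node##*/}" "$size" "$free" "$total"
        done
    done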
00:05:17.400 18:32:17 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:17.400 18:32:17 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:17.968 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:17.968 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:19.873 18:32:20 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:20.809 18:32:21 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:20.809 18:32:21 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:20.809 18:32:21 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:20.809 18:32:21 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:20.809 18:32:21 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:20.809 18:32:21 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:20.809 18:32:21 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:20.809 18:32:21 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:20.809 18:32:21 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:20.809 18:32:21 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:20.809 18:32:21 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:05:20.809 18:32:21 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:21.377 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:21.377 Waiting for block devices as requested 00:05:21.377 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:21.377 18:32:21 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:21.377 18:32:21 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:21.636 18:32:21 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:05:21.636 18:32:21 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:21.636 18:32:21 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:05:21.636 18:32:21 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]] 00:05:21.637 18:32:21 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:05:21.637 18:32:21 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:21.637 18:32:21 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:21.637 18:32:21 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:21.637 18:32:21 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:21.637 18:32:21 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:21.637 18:32:21 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:21.637 18:32:21 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:21.637 18:32:21 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:21.637 18:32:21 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:21.637 18:32:21 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:21.637 18:32:21 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:21.637 18:32:21 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:21.637 18:32:21 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:21.637 18:32:21 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:21.637 18:32:21 -- 
common/autotest_common.sh@1557 -- # continue 00:05:21.637 18:32:21 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:21.637 18:32:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:21.637 18:32:21 -- common/autotest_common.sh@10 -- # set +x 00:05:21.637 18:32:22 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:21.637 18:32:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:21.637 18:32:22 -- common/autotest_common.sh@10 -- # set +x 00:05:21.637 18:32:22 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:22.205 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:22.205 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.111 18:32:24 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:24.111 18:32:24 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:24.111 18:32:24 -- common/autotest_common.sh@10 -- # set +x 00:05:24.111 18:32:24 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:24.111 18:32:24 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:24.111 18:32:24 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:24.111 18:32:24 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:24.111 18:32:24 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:24.111 18:32:24 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:24.111 18:32:24 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:24.111 18:32:24 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:24.111 18:32:24 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:24.111 18:32:24 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:24.111 18:32:24 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:24.111 18:32:24 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:24.111 18:32:24 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:05:24.111 18:32:24 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:24.111 18:32:24 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:24.111 18:32:24 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:24.111 18:32:24 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:24.111 18:32:24 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:24.111 18:32:24 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:24.111 18:32:24 -- common/autotest_common.sh@1593 -- # return 0 00:05:24.111 18:32:24 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:05:24.111 18:32:24 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:24.111 18:32:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.111 18:32:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.111 18:32:24 -- common/autotest_common.sh@10 -- # set +x 00:05:24.111 ************************************ 00:05:24.111 START TEST unittest 00:05:24.111 ************************************ 00:05:24.112 18:32:24 unittest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:24.112 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:24.112 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:05:24.112 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:05:24.112 +++ dirname 
/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:24.112 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 00:05:24.112 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:24.112 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:05:24.112 ++ rpc_py=rpc_cmd 00:05:24.112 ++ set -e 00:05:24.112 ++ shopt -s nullglob 00:05:24.112 ++ shopt -s extglob 00:05:24.112 ++ shopt -s inherit_errexit 00:05:24.112 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:05:24.112 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:24.112 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:24.112 +++ CONFIG_WPDK_DIR= 00:05:24.112 +++ CONFIG_ASAN=y 00:05:24.112 +++ CONFIG_VBDEV_COMPRESS=n 00:05:24.112 +++ CONFIG_HAVE_EXECINFO_H=y 00:05:24.112 +++ CONFIG_USDT=n 00:05:24.112 +++ CONFIG_CUSTOMOCF=n 00:05:24.112 +++ CONFIG_PREFIX=/usr/local 00:05:24.112 +++ CONFIG_RBD=n 00:05:24.112 +++ CONFIG_LIBDIR= 00:05:24.112 +++ CONFIG_IDXD=y 00:05:24.112 +++ CONFIG_NVME_CUSE=y 00:05:24.112 +++ CONFIG_SMA=n 00:05:24.112 +++ CONFIG_VTUNE=n 00:05:24.112 +++ CONFIG_TSAN=n 00:05:24.112 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:24.112 +++ CONFIG_VFIO_USER_DIR= 00:05:24.112 +++ CONFIG_PGO_CAPTURE=n 00:05:24.112 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:24.112 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:24.112 +++ CONFIG_LTO=n 00:05:24.112 +++ CONFIG_ISCSI_INITIATOR=y 00:05:24.112 +++ CONFIG_CET=n 00:05:24.112 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:24.112 +++ CONFIG_OCF_PATH= 00:05:24.112 +++ CONFIG_RDMA_SET_TOS=y 00:05:24.112 +++ CONFIG_HAVE_ARC4RANDOM=n 00:05:24.112 +++ CONFIG_HAVE_LIBARCHIVE=n 00:05:24.112 +++ CONFIG_UBLK=n 00:05:24.112 +++ CONFIG_ISAL_CRYPTO=y 00:05:24.112 +++ CONFIG_OPENSSL_PATH= 00:05:24.112 +++ CONFIG_OCF=n 00:05:24.112 +++ CONFIG_FUSE=n 00:05:24.112 +++ CONFIG_VTUNE_DIR= 00:05:24.112 +++ CONFIG_FUZZER_LIB= 00:05:24.112 +++ CONFIG_FUZZER=n 00:05:24.112 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:24.112 +++ CONFIG_CRYPTO=n 00:05:24.112 +++ CONFIG_PGO_USE=n 00:05:24.112 +++ CONFIG_VHOST=y 00:05:24.112 +++ CONFIG_DAOS=n 00:05:24.112 +++ CONFIG_DPDK_INC_DIR= 00:05:24.112 +++ CONFIG_DAOS_DIR= 00:05:24.112 +++ CONFIG_UNIT_TESTS=y 00:05:24.112 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:24.112 +++ CONFIG_VIRTIO=y 00:05:24.112 +++ CONFIG_DPDK_UADK=n 00:05:24.112 +++ CONFIG_COVERAGE=y 00:05:24.112 +++ CONFIG_RDMA=y 00:05:24.112 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:24.112 +++ CONFIG_URING_PATH= 00:05:24.112 +++ CONFIG_XNVME=n 00:05:24.112 +++ CONFIG_VFIO_USER=n 00:05:24.112 +++ CONFIG_ARCH=native 00:05:24.112 +++ CONFIG_HAVE_EVP_MAC=y 00:05:24.112 +++ CONFIG_URING_ZNS=n 00:05:24.112 +++ CONFIG_WERROR=y 00:05:24.112 +++ CONFIG_HAVE_LIBBSD=n 00:05:24.112 +++ CONFIG_UBSAN=y 00:05:24.112 +++ CONFIG_IPSEC_MB_DIR= 00:05:24.112 +++ CONFIG_GOLANG=n 00:05:24.112 +++ CONFIG_ISAL=y 00:05:24.112 +++ CONFIG_IDXD_KERNEL=n 00:05:24.112 +++ CONFIG_DPDK_LIB_DIR= 00:05:24.112 +++ CONFIG_RDMA_PROV=verbs 00:05:24.112 +++ CONFIG_APPS=y 00:05:24.112 +++ CONFIG_SHARED=n 00:05:24.112 +++ CONFIG_HAVE_KEYUTILS=y 00:05:24.112 +++ CONFIG_FC_PATH= 00:05:24.112 +++ CONFIG_DPDK_PKG_CONFIG=n 00:05:24.112 +++ CONFIG_FC=n 00:05:24.112 +++ CONFIG_AVAHI=n 00:05:24.112 +++ CONFIG_FIO_PLUGIN=y 00:05:24.112 +++ CONFIG_RAID5F=y 00:05:24.112 +++ CONFIG_EXAMPLES=y 00:05:24.112 +++ CONFIG_TESTS=y 00:05:24.112 +++ CONFIG_CRYPTO_MLX5=n 00:05:24.112 +++ CONFIG_MAX_LCORES=128 00:05:24.112 +++ CONFIG_IPSEC_MB=n 00:05:24.112 +++ CONFIG_PGO_DIR= 
00:05:24.112 +++ CONFIG_DEBUG=y 00:05:24.112 +++ CONFIG_DPDK_COMPRESSDEV=n 00:05:24.112 +++ CONFIG_CROSS_PREFIX= 00:05:24.112 +++ CONFIG_URING=n 00:05:24.112 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:24.112 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:24.112 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:05:24.112 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:05:24.112 +++ _root=/home/vagrant/spdk_repo/spdk 00:05:24.112 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:05:24.112 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:05:24.112 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:05:24.112 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:24.112 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:24.112 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:24.112 +++ VHOST_APP=("$_app_dir/vhost") 00:05:24.112 +++ DD_APP=("$_app_dir/spdk_dd") 00:05:24.112 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:05:24.112 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:05:24.112 +++ [[ #ifndef SPDK_CONFIG_H 00:05:24.112 #define SPDK_CONFIG_H 00:05:24.112 #define SPDK_CONFIG_APPS 1 00:05:24.112 #define SPDK_CONFIG_ARCH native 00:05:24.112 #define SPDK_CONFIG_ASAN 1 00:05:24.112 #undef SPDK_CONFIG_AVAHI 00:05:24.112 #undef SPDK_CONFIG_CET 00:05:24.112 #define SPDK_CONFIG_COVERAGE 1 00:05:24.112 #define SPDK_CONFIG_CROSS_PREFIX 00:05:24.112 #undef SPDK_CONFIG_CRYPTO 00:05:24.112 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:24.112 #undef SPDK_CONFIG_CUSTOMOCF 00:05:24.112 #undef SPDK_CONFIG_DAOS 00:05:24.112 #define SPDK_CONFIG_DAOS_DIR 00:05:24.112 #define SPDK_CONFIG_DEBUG 1 00:05:24.112 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:24.112 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:24.112 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:24.112 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:24.112 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:24.112 #undef SPDK_CONFIG_DPDK_UADK 00:05:24.112 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:24.112 #define SPDK_CONFIG_EXAMPLES 1 00:05:24.112 #undef SPDK_CONFIG_FC 00:05:24.112 #define SPDK_CONFIG_FC_PATH 00:05:24.112 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:24.112 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:24.112 #undef SPDK_CONFIG_FUSE 00:05:24.112 #undef SPDK_CONFIG_FUZZER 00:05:24.112 #define SPDK_CONFIG_FUZZER_LIB 00:05:24.112 #undef SPDK_CONFIG_GOLANG 00:05:24.112 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:05:24.112 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:24.112 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:24.112 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:05:24.112 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:24.112 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:24.112 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:24.112 #define SPDK_CONFIG_IDXD 1 00:05:24.112 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:24.112 #undef SPDK_CONFIG_IPSEC_MB 00:05:24.112 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:24.112 #define SPDK_CONFIG_ISAL 1 00:05:24.112 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:24.112 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:24.112 #define SPDK_CONFIG_LIBDIR 00:05:24.112 #undef SPDK_CONFIG_LTO 00:05:24.112 #define SPDK_CONFIG_MAX_LCORES 128 00:05:24.112 #define SPDK_CONFIG_NVME_CUSE 1 00:05:24.112 #undef SPDK_CONFIG_OCF 00:05:24.112 #define SPDK_CONFIG_OCF_PATH 00:05:24.112 #define SPDK_CONFIG_OPENSSL_PATH 00:05:24.112 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:24.112 #define SPDK_CONFIG_PGO_DIR 00:05:24.112 #undef 
SPDK_CONFIG_PGO_USE 00:05:24.112 #define SPDK_CONFIG_PREFIX /usr/local 00:05:24.112 #define SPDK_CONFIG_RAID5F 1 00:05:24.112 #undef SPDK_CONFIG_RBD 00:05:24.112 #define SPDK_CONFIG_RDMA 1 00:05:24.112 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:24.112 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:24.112 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:24.112 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:24.112 #undef SPDK_CONFIG_SHARED 00:05:24.112 #undef SPDK_CONFIG_SMA 00:05:24.112 #define SPDK_CONFIG_TESTS 1 00:05:24.112 #undef SPDK_CONFIG_TSAN 00:05:24.112 #undef SPDK_CONFIG_UBLK 00:05:24.112 #define SPDK_CONFIG_UBSAN 1 00:05:24.112 #define SPDK_CONFIG_UNIT_TESTS 1 00:05:24.112 #undef SPDK_CONFIG_URING 00:05:24.112 #define SPDK_CONFIG_URING_PATH 00:05:24.112 #undef SPDK_CONFIG_URING_ZNS 00:05:24.112 #undef SPDK_CONFIG_USDT 00:05:24.112 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:24.112 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:24.112 #undef SPDK_CONFIG_VFIO_USER 00:05:24.112 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:24.112 #define SPDK_CONFIG_VHOST 1 00:05:24.112 #define SPDK_CONFIG_VIRTIO 1 00:05:24.112 #undef SPDK_CONFIG_VTUNE 00:05:24.112 #define SPDK_CONFIG_VTUNE_DIR 00:05:24.112 #define SPDK_CONFIG_WERROR 1 00:05:24.113 #define SPDK_CONFIG_WPDK_DIR 00:05:24.113 #undef SPDK_CONFIG_XNVME 00:05:24.113 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:24.113 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:24.113 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:24.113 +++ [[ -e /bin/wpdk_common.sh ]] 00:05:24.113 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.113 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.113 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:24.113 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:24.113 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:24.113 ++++ export PATH 00:05:24.113 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:24.113 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:24.113 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:24.113 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:24.113 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:24.113 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:05:24.113 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:05:24.113 +++ TEST_TAG=N/A 00:05:24.113 +++ 
TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:05:24.113 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:05:24.113 ++++ uname -s 00:05:24.113 +++ PM_OS=Linux 00:05:24.113 +++ MONITOR_RESOURCES_SUDO=() 00:05:24.113 +++ declare -A MONITOR_RESOURCES_SUDO 00:05:24.113 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:24.113 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:24.113 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:24.113 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:24.113 +++ SUDO[0]= 00:05:24.113 +++ SUDO[1]='sudo -E' 00:05:24.113 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:24.113 +++ [[ Linux == FreeBSD ]] 00:05:24.113 +++ [[ Linux == Linux ]] 00:05:24.113 +++ [[ QEMU != QEMU ]] 00:05:24.113 +++ [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:05:24.113 ++ : 1 00:05:24.113 ++ export RUN_NIGHTLY 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_RUN_VALGRIND 00:05:24.113 ++ : 1 00:05:24.113 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:05:24.113 ++ : 1 00:05:24.113 ++ export SPDK_TEST_UNITTEST 00:05:24.113 ++ : 00:05:24.113 ++ export SPDK_TEST_AUTOBUILD 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_RELEASE_BUILD 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_ISAL 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_ISCSI 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_ISCSI_INITIATOR 00:05:24.113 ++ : 1 00:05:24.113 ++ export SPDK_TEST_NVME 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_NVME_PMR 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_NVME_BP 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_NVME_CLI 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_NVME_CUSE 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_NVME_FDP 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_NVMF 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_VFIOUSER 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_VFIOUSER_QEMU 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_FUZZER 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_FUZZER_SHORT 00:05:24.113 ++ : rdma 00:05:24.113 ++ export SPDK_TEST_NVMF_TRANSPORT 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_RBD 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_VHOST 00:05:24.113 ++ : 1 00:05:24.113 ++ export SPDK_TEST_BLOCKDEV 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_IOAT 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_BLOBFS 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_VHOST_INIT 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_LVOL 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_VBDEV_COMPRESS 00:05:24.113 ++ : 1 00:05:24.113 ++ export SPDK_RUN_ASAN 00:05:24.113 ++ : 1 00:05:24.113 ++ export SPDK_RUN_UBSAN 00:05:24.113 ++ : 00:05:24.113 ++ export SPDK_RUN_EXTERNAL_DPDK 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_RUN_NON_ROOT 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_CRYPTO 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_FTL 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_OCF 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_VMD 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_OPAL 00:05:24.113 ++ : 00:05:24.113 ++ export SPDK_TEST_NATIVE_DPDK 00:05:24.113 ++ : true 00:05:24.113 ++ export SPDK_AUTOTEST_X 00:05:24.113 ++ : 1 00:05:24.113 ++ export SPDK_TEST_RAID5 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_URING 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_USDT 00:05:24.113 
++ : 0 00:05:24.113 ++ export SPDK_TEST_USE_IGB_UIO 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_SCHEDULER 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_SCANBUILD 00:05:24.113 ++ : 00:05:24.113 ++ export SPDK_TEST_NVMF_NICS 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_SMA 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_DAOS 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_XNVME 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_ACCEL 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_ACCEL_DSA 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_ACCEL_IAA 00:05:24.113 ++ : 00:05:24.113 ++ export SPDK_TEST_FUZZER_TARGET 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_TEST_NVMF_MDNS 00:05:24.113 ++ : 0 00:05:24.113 ++ export SPDK_JSONRPC_GO_CLIENT 00:05:24.113 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:24.113 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:24.113 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:24.113 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:24.113 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:24.113 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:24.113 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:24.113 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:24.113 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:24.113 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:05:24.113 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:24.113 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:24.113 ++ export PYTHONDONTWRITEBYTECODE=1 00:05:24.113 ++ PYTHONDONTWRITEBYTECODE=1 00:05:24.113 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:24.113 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:24.113 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:24.113 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:24.113 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:05:24.113 ++ rm -rf /var/tmp/asan_suppression_file 00:05:24.113 ++ cat 00:05:24.113 ++ echo leak:libfuse3.so 00:05:24.113 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:24.113 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:24.113 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:24.113 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:24.113 ++ '[' -z /var/spdk/dependencies ']' 00:05:24.113 ++ export DEPENDENCY_DIR 00:05:24.113 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:24.113 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 
00:05:24.113 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:24.113 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:24.113 ++ export QEMU_BIN= 00:05:24.113 ++ QEMU_BIN= 00:05:24.113 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:24.113 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:24.114 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:24.114 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:24.114 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:24.114 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:24.114 ++ '[' 0 -eq 0 ']' 00:05:24.114 ++ export valgrind= 00:05:24.114 ++ valgrind= 00:05:24.114 +++ uname -s 00:05:24.114 ++ '[' Linux = Linux ']' 00:05:24.114 ++ HUGEMEM=4096 00:05:24.114 ++ export CLEAR_HUGE=yes 00:05:24.114 ++ CLEAR_HUGE=yes 00:05:24.114 ++ [[ 0 -eq 1 ]] 00:05:24.114 ++ [[ 0 -eq 1 ]] 00:05:24.114 ++ MAKE=make 00:05:24.114 +++ nproc 00:05:24.114 ++ MAKEFLAGS=-j10 00:05:24.114 ++ export HUGEMEM=4096 00:05:24.114 ++ HUGEMEM=4096 00:05:24.114 ++ NO_HUGE=() 00:05:24.114 ++ TEST_MODE= 00:05:24.114 ++ [[ -z '' ]] 00:05:24.114 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:24.114 ++ exec 00:05:24.114 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:24.114 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:05:24.114 ++ set_test_storage 2147483648 00:05:24.114 ++ [[ -v testdir ]] 00:05:24.114 ++ local requested_size=2147483648 00:05:24.114 ++ local mount target_dir 00:05:24.114 ++ local -A mounts fss sizes avails uses 00:05:24.114 ++ local source fs size avail mount use 00:05:24.114 ++ local storage_fallback storage_candidates 00:05:24.114 +++ mktemp -udt spdk.XXXXXX 00:05:24.114 ++ storage_fallback=/tmp/spdk.2vRZpv 00:05:24.114 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:24.114 ++ [[ -n '' ]] 00:05:24.114 ++ [[ -n '' ]] 00:05:24.114 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.2vRZpv/tests/unit /tmp/spdk.2vRZpv 00:05:24.114 ++ requested_size=2214592512 00:05:24.114 ++ read -r source fs size use avail _ mount 00:05:24.114 +++ df -T 00:05:24.114 +++ grep -v Filesystem 00:05:24.114 ++ mounts["$mount"]=tmpfs 00:05:24.114 ++ fss["$mount"]=tmpfs 00:05:24.114 ++ avails["$mount"]=1252601856 00:05:24.114 ++ sizes["$mount"]=1253683200 00:05:24.114 ++ uses["$mount"]=1081344 00:05:24.114 ++ read -r source fs size use avail _ mount 00:05:24.114 ++ mounts["$mount"]=/dev/vda1 00:05:24.114 ++ fss["$mount"]=ext4 00:05:24.114 ++ avails["$mount"]=10110906368 00:05:24.114 ++ sizes["$mount"]=20616794112 00:05:24.114 ++ uses["$mount"]=10489110528 00:05:24.114 ++ read -r source fs size use avail _ mount 00:05:24.114 ++ mounts["$mount"]=tmpfs 00:05:24.114 ++ fss["$mount"]=tmpfs 00:05:24.114 ++ avails["$mount"]=6268403712 00:05:24.114 ++ sizes["$mount"]=6268403712 00:05:24.114 ++ uses["$mount"]=0 00:05:24.114 ++ read -r source fs size use avail _ mount 00:05:24.114 ++ mounts["$mount"]=tmpfs 00:05:24.114 ++ fss["$mount"]=tmpfs 00:05:24.114 ++ avails["$mount"]=5242880 00:05:24.114 ++ sizes["$mount"]=5242880 00:05:24.114 ++ uses["$mount"]=0 00:05:24.114 ++ read -r source fs size use avail _ mount 00:05:24.114 ++ mounts["$mount"]=/dev/vda15 00:05:24.114 ++ fss["$mount"]=vfat 00:05:24.114 ++ avails["$mount"]=103061504 
00:05:24.114 ++ sizes["$mount"]=109395968 00:05:24.114 ++ uses["$mount"]=6334464 00:05:24.114 ++ read -r source fs size use avail _ mount 00:05:24.114 ++ mounts["$mount"]=tmpfs 00:05:24.114 ++ fss["$mount"]=tmpfs 00:05:24.114 ++ avails["$mount"]=1253675008 00:05:24.114 ++ sizes["$mount"]=1253679104 00:05:24.114 ++ uses["$mount"]=4096 00:05:24.114 ++ read -r source fs size use avail _ mount 00:05:24.114 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt/output 00:05:24.114 ++ fss["$mount"]=fuse.sshfs 00:05:24.114 ++ avails["$mount"]=96499015680 00:05:24.114 ++ sizes["$mount"]=105088212992 00:05:24.114 ++ uses["$mount"]=3203764224 00:05:24.114 ++ read -r source fs size use avail _ mount 00:05:24.114 ++ printf '* Looking for test storage...\n' 00:05:24.114 * Looking for test storage... 00:05:24.114 ++ local target_space new_size 00:05:24.114 ++ for target_dir in "${storage_candidates[@]}" 00:05:24.114 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:05:24.114 +++ awk '$1 !~ /Filesystem/{print $6}' 00:05:24.373 ++ mount=/ 00:05:24.374 ++ target_space=10110906368 00:05:24.374 ++ (( target_space == 0 || target_space < requested_size )) 00:05:24.374 ++ (( target_space >= requested_size )) 00:05:24.374 ++ [[ ext4 == tmpfs ]] 00:05:24.374 ++ [[ ext4 == ramfs ]] 00:05:24.374 ++ [[ / == / ]] 00:05:24.374 ++ new_size=12703703040 00:05:24.374 ++ (( new_size * 100 / sizes[/] > 95 )) 00:05:24.374 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:24.374 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:24.374 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:05:24.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:05:24.374 ++ return 0 00:05:24.374 ++ set -o errtrace 00:05:24.374 ++ shopt -s extdebug 00:05:24.374 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:05:24.374 ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:24.374 18:32:24 unittest -- common/autotest_common.sh@1687 -- # true 00:05:24.374 18:32:24 unittest -- common/autotest_common.sh@1689 -- # xtrace_fd 00:05:24.374 18:32:24 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:05:24.374 18:32:24 unittest -- common/autotest_common.sh@29 -- # exec 00:05:24.374 18:32:24 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:24.374 18:32:24 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:05:24.374 18:32:24 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:24.374 18:32:24 unittest -- common/autotest_common.sh@18 -- # set -x 00:05:24.374 18:32:24 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:05:24.374 18:32:24 unittest -- unit/unittest.sh@153 -- # '[' 0 -eq 1 ']' 00:05:24.374 18:32:24 unittest -- unit/unittest.sh@160 -- # '[' -z x ']' 00:05:24.374 18:32:24 unittest -- unit/unittest.sh@167 -- # '[' 0 -eq 1 ']' 00:05:24.374 18:32:24 unittest -- unit/unittest.sh@180 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:05:24.374 18:32:24 unittest -- unit/unittest.sh@180 -- # CC_TYPE=CC_TYPE=gcc 00:05:24.374 18:32:24 unittest -- unit/unittest.sh@181 -- # hash lcov 00:05:24.374 18:32:24 unittest -- unit/unittest.sh@181 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:24.374 18:32:24 unittest -- unit/unittest.sh@181 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:24.374 18:32:24 unittest -- unit/unittest.sh@182 -- # cov_avail=yes 00:05:24.374 18:32:24 unittest -- unit/unittest.sh@186 -- # '[' yes = yes ']' 00:05:24.374 18:32:24 unittest -- unit/unittest.sh@188 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:05:24.374 18:32:24 unittest -- unit/unittest.sh@191 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:24.374 18:32:24 unittest -- unit/unittest.sh@193 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:24.374 18:32:24 unittest -- unit/unittest.sh@201 -- # export 'LCOV_OPTS= 00:05:24.374 --rc lcov_branch_coverage=1 00:05:24.374 --rc lcov_function_coverage=1 00:05:24.374 --rc genhtml_branch_coverage=1 00:05:24.374 --rc genhtml_function_coverage=1 00:05:24.374 --rc genhtml_legend=1 00:05:24.374 --rc geninfo_all_blocks=1 00:05:24.374 ' 00:05:24.374 18:32:24 unittest -- unit/unittest.sh@201 -- # LCOV_OPTS=' 00:05:24.374 --rc lcov_branch_coverage=1 00:05:24.374 --rc lcov_function_coverage=1 00:05:24.374 --rc genhtml_branch_coverage=1 00:05:24.374 --rc genhtml_function_coverage=1 00:05:24.374 --rc genhtml_legend=1 00:05:24.374 --rc geninfo_all_blocks=1 00:05:24.374 ' 00:05:24.374 18:32:24 unittest -- unit/unittest.sh@202 -- # export 'LCOV=lcov 00:05:24.374 --rc lcov_branch_coverage=1 00:05:24.374 --rc lcov_function_coverage=1 00:05:24.374 --rc genhtml_branch_coverage=1 00:05:24.374 --rc genhtml_function_coverage=1 00:05:24.374 --rc genhtml_legend=1 00:05:24.374 --rc geninfo_all_blocks=1 00:05:24.374 --no-external' 00:05:24.374 18:32:24 unittest -- unit/unittest.sh@202 -- # LCOV='lcov 00:05:24.374 --rc lcov_branch_coverage=1 00:05:24.374 --rc lcov_function_coverage=1 00:05:24.374 --rc genhtml_branch_coverage=1 00:05:24.374 --rc genhtml_function_coverage=1 00:05:24.374 --rc genhtml_legend=1 00:05:24.374 --rc geninfo_all_blocks=1 00:05:24.374 --no-external' 00:05:24.374 18:32:24 unittest -- unit/unittest.sh@204 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:05:29.699 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:29.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:06:08.479 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:06:08.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:06:08.479 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:06:08.480 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:06:08.480 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:06:08.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:06:08.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:06:09.418 18:33:09 unittest -- unit/unittest.sh@208 -- # uname -m 00:06:09.418 18:33:09 unittest -- unit/unittest.sh@208 -- # '[' x86_64 = aarch64 ']' 00:06:09.418 18:33:09 unittest -- unit/unittest.sh@212 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:09.418 18:33:09 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.418 18:33:09 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.418 18:33:09 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:09.418 ************************************ 00:06:09.418 START TEST unittest_pci_event 00:06:09.418 ************************************ 00:06:09.418 18:33:09 unittest.unittest_pci_event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:09.418 00:06:09.418 00:06:09.418 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.418 http://cunit.sourceforge.net/ 00:06:09.418 00:06:09.418 00:06:09.418 Suite: pci_event 00:06:09.418 Test: test_pci_parse_event ...[2024-07-25 18:33:09.731759] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:06:09.418 [2024-07-25 18:33:09.733586] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:06:09.418 passed 00:06:09.418 00:06:09.418 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.418 suites 1 1 n/a 0 0 00:06:09.418 tests 1 1 1 0 0 00:06:09.418 asserts 15 15 15 0 n/a 00:06:09.418 00:06:09.418 Elapsed time = 0.001 seconds 00:06:09.418 00:06:09.418 real 0m0.050s 00:06:09.418 user 0m0.026s 00:06:09.418 sys 0m0.017s 00:06:09.418 18:33:09 unittest.unittest_pci_event -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:06:09.418 ************************************ 00:06:09.418 18:33:09 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:06:09.418 END TEST unittest_pci_event 00:06:09.418 ************************************ 00:06:09.418 18:33:09 unittest -- unit/unittest.sh@213 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:09.418 18:33:09 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.418 18:33:09 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.418 18:33:09 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:09.418 ************************************ 00:06:09.418 START TEST unittest_include 00:06:09.418 ************************************ 00:06:09.418 18:33:09 unittest.unittest_include -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:09.418 00:06:09.418 00:06:09.418 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.418 http://cunit.sourceforge.net/ 00:06:09.418 00:06:09.418 00:06:09.418 Suite: histogram 00:06:09.418 Test: histogram_test ...passed 00:06:09.418 Test: histogram_merge ...passed 00:06:09.418 00:06:09.418 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.418 suites 1 1 n/a 0 0 00:06:09.418 tests 2 2 2 0 0 00:06:09.418 asserts 50 50 50 0 n/a 00:06:09.418 00:06:09.418 Elapsed time = 0.006 seconds 00:06:09.418 00:06:09.418 real 0m0.042s 00:06:09.418 user 0m0.026s 00:06:09.418 sys 0m0.016s 00:06:09.418 18:33:09 unittest.unittest_include -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.418 18:33:09 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:06:09.418 ************************************ 00:06:09.418 END TEST unittest_include 00:06:09.418 ************************************ 00:06:09.418 18:33:09 unittest -- unit/unittest.sh@214 -- # run_test unittest_bdev unittest_bdev 00:06:09.418 18:33:09 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.418 18:33:09 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.419 18:33:09 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:09.419 ************************************ 00:06:09.419 START TEST unittest_bdev 00:06:09.419 ************************************ 00:06:09.419 18:33:09 unittest.unittest_bdev -- common/autotest_common.sh@1125 -- # unittest_bdev 00:06:09.419 18:33:09 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:06:09.419 00:06:09.419 00:06:09.419 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.419 http://cunit.sourceforge.net/ 00:06:09.419 00:06:09.419 00:06:09.419 Suite: bdev 00:06:09.419 Test: bytes_to_blocks_test ...passed 00:06:09.678 Test: num_blocks_test ...passed 00:06:09.678 Test: io_valid_test ...passed 00:06:09.678 Test: open_write_test ...[2024-07-25 18:33:10.057506] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:06:09.678 [2024-07-25 18:33:10.057903] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:06:09.678 [2024-07-25 18:33:10.058048] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:06:09.678 passed 00:06:09.678 Test: 
claim_test ...passed 00:06:09.678 Test: alias_add_del_test ...[2024-07-25 18:33:10.178574] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:06:09.678 [2024-07-25 18:33:10.178712] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4663:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:06:09.678 [2024-07-25 18:33:10.178782] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:06:09.678 passed 00:06:09.678 Test: get_device_stat_test ...passed 00:06:09.678 Test: bdev_io_types_test ...passed 00:06:09.937 Test: bdev_io_wait_test ...passed 00:06:09.937 Test: bdev_io_spans_split_test ...passed 00:06:09.937 Test: bdev_io_boundary_split_test ...passed 00:06:09.937 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-25 18:33:10.355235] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3214:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:06:09.937 passed 00:06:09.937 Test: bdev_io_mix_split_test ...passed 00:06:09.937 Test: bdev_io_split_with_io_wait ...passed 00:06:10.196 Test: bdev_io_write_unit_split_test ...[2024-07-25 18:33:10.527602] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2765:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:10.196 [2024-07-25 18:33:10.527737] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2765:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:10.196 [2024-07-25 18:33:10.527800] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2765:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:06:10.196 [2024-07-25 18:33:10.527871] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2765:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:06:10.196 passed 00:06:10.196 Test: bdev_io_alignment_with_boundary ...passed 00:06:10.196 Test: bdev_io_alignment ...passed 00:06:10.196 Test: bdev_histograms ...passed 00:06:10.196 Test: bdev_write_zeroes ...passed 00:06:10.455 Test: bdev_compare_and_write ...passed 00:06:10.455 Test: bdev_compare ...passed 00:06:10.455 Test: bdev_compare_emulated ...passed 00:06:10.455 Test: bdev_zcopy_write ...passed 00:06:10.714 Test: bdev_zcopy_read ...passed 00:06:10.715 Test: bdev_open_while_hotremove ...passed 00:06:10.715 Test: bdev_close_while_hotremove ...passed 00:06:10.715 Test: bdev_open_ext_test ...[2024-07-25 18:33:11.049190] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8217:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:10.715 passed 00:06:10.715 Test: bdev_open_ext_unregister ...[2024-07-25 18:33:11.049469] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8217:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:10.715 passed 00:06:10.715 Test: bdev_set_io_timeout ...passed 00:06:10.715 Test: bdev_set_qd_sampling ...passed 00:06:10.715 Test: lba_range_overlap ...passed 00:06:10.715 Test: lock_lba_range_check_ranges ...passed 00:06:10.715 Test: lock_lba_range_with_io_outstanding ...passed 00:06:10.715 Test: lock_lba_range_overlapped ...passed 00:06:10.715 Test: bdev_quiesce ...[2024-07-25 18:33:11.283867] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10186:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
00:06:10.974 passed 00:06:10.974 Test: bdev_io_abort ...passed 00:06:10.974 Test: bdev_unmap ...passed 00:06:10.974 Test: bdev_write_zeroes_split_test ...passed 00:06:10.974 Test: bdev_set_options_test ...passed 00:06:10.974 Test: bdev_get_memory_domains ...passed 00:06:10.974 Test: bdev_io_ext ...[2024-07-25 18:33:11.449640] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:06:10.974 passed 00:06:10.974 Test: bdev_io_ext_no_opts ...passed 00:06:11.233 Test: bdev_io_ext_invalid_opts ...passed 00:06:11.233 Test: bdev_io_ext_split ...passed 00:06:11.233 Test: bdev_io_ext_bounce_buffer ...passed 00:06:11.233 Test: bdev_register_uuid_alias ...[2024-07-25 18:33:11.685023] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 306cd5d5-143e-444f-afab-cc43aaf26941 already exists 00:06:11.233 [2024-07-25 18:33:11.685128] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:306cd5d5-143e-444f-afab-cc43aaf26941 alias for bdev bdev0 00:06:11.233 passed 00:06:11.233 Test: bdev_unregister_by_name ...[2024-07-25 18:33:11.709483] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8007:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:06:11.233 [2024-07-25 18:33:11.709554] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8015:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:06:11.233 passed 00:06:11.233 Test: for_each_bdev_test ...passed 00:06:11.233 Test: bdev_seek_test ...passed 00:06:11.233 Test: bdev_copy ...passed 00:06:11.493 Test: bdev_copy_split_test ...passed 00:06:11.493 Test: examine_locks ...passed 00:06:11.493 Test: claim_v2_rwo ...[2024-07-25 18:33:11.840476] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:11.493 [2024-07-25 18:33:11.840585] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8741:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:11.493 [2024-07-25 18:33:11.840606] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:11.493 [2024-07-25 18:33:11.840669] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:11.493 [2024-07-25 18:33:11.840700] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:11.493 passed 00:06:11.493 Test: claim_v2_rom ...[2024-07-25 18:33:11.840752] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8736:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:06:11.493 [2024-07-25 18:33:11.840897] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:11.493 [2024-07-25 18:33:11.840961] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:11.493 [2024-07-25 18:33:11.840988] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module 
bdev_ut 00:06:11.493 [2024-07-25 18:33:11.841014] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:11.493 [2024-07-25 18:33:11.841052] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8779:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:06:11.493 [2024-07-25 18:33:11.841103] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8774:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:11.493 passed 00:06:11.493 Test: claim_v2_rwm ...[2024-07-25 18:33:11.841193] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8809:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:11.493 [2024-07-25 18:33:11.841242] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:11.493 [2024-07-25 18:33:11.841267] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:11.493 [2024-07-25 18:33:11.841292] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:11.493 [2024-07-25 18:33:11.841324] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:11.493 [2024-07-25 18:33:11.841358] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8829:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:06:11.493 passed 00:06:11.493 Test: claim_v2_existing_writer ...[2024-07-25 18:33:11.841412] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8809:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:11.493 passed 00:06:11.493 Test: claim_v2_existing_v1 ...passed 00:06:11.493 Test: claim_v1_existing_v2 ...[2024-07-25 18:33:11.841534] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8774:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:11.493 [2024-07-25 18:33:11.841562] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8774:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:11.493 [2024-07-25 18:33:11.841659] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:11.493 [2024-07-25 18:33:11.841687] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:11.493 [2024-07-25 18:33:11.841705] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:11.493 [2024-07-25 18:33:11.841813] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:11.493 passed 00:06:11.493 Test: examine_claimed ...[2024-07-25 18:33:11.841854] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type 
read_many_write_many by module bdev_ut 00:06:11.493 [2024-07-25 18:33:11.841886] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:11.493 passed 00:06:11.493 00:06:11.493 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.493 suites 1 1 n/a 0 0 00:06:11.493 tests 59 59 59 0 0 00:06:11.493 asserts 4599 4599 4599 0 n/a 00:06:11.493 00:06:11.493 Elapsed time = 1.881 seconds 00:06:11.493 [2024-07-25 18:33:11.842144] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:06:11.493 18:33:11 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:06:11.493 00:06:11.493 00:06:11.493 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.493 http://cunit.sourceforge.net/ 00:06:11.493 00:06:11.493 00:06:11.493 Suite: nvme 00:06:11.493 Test: test_create_ctrlr ...passed 00:06:11.493 Test: test_reset_ctrlr ...[2024-07-25 18:33:11.913013] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:11.493 passed 00:06:11.493 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:06:11.493 Test: test_failover_ctrlr ...passed 00:06:11.493 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-25 18:33:11.915957] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:11.493 [2024-07-25 18:33:11.916210] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:11.493 [2024-07-25 18:33:11.916447] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:11.493 passed 00:06:11.493 Test: test_pending_reset ...[2024-07-25 18:33:11.918296] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:11.493 [2024-07-25 18:33:11.918574] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:11.493 passed 00:06:11.493 Test: test_attach_ctrlr ...[2024-07-25 18:33:11.919977] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:06:11.493 passed 00:06:11.493 Test: test_aer_cb ...passed 00:06:11.493 Test: test_submit_nvme_cmd ...passed 00:06:11.493 Test: test_add_remove_trid ...passed 00:06:11.493 Test: test_abort ...[2024-07-25 18:33:11.924240] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7480:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:06:11.493 passed 00:06:11.493 Test: test_get_io_qpair ...passed 00:06:11.493 Test: test_bdev_unregister ...passed 00:06:11.493 Test: test_compare_ns ...passed 00:06:11.493 Test: test_init_ana_log_page ...passed 00:06:11.493 Test: test_get_memory_domains ...passed 00:06:11.493 Test: test_reconnect_qpair ...[2024-07-25 18:33:11.927498] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:11.493 passed 00:06:11.493 Test: test_create_bdev_ctrlr ...[2024-07-25 18:33:11.928131] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5407:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:06:11.493 passed 00:06:11.493 Test: test_add_multi_ns_to_bdev ...[2024-07-25 18:33:11.929678] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4574:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:06:11.493 passed 00:06:11.493 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:06:11.493 Test: test_admin_path ...passed 00:06:11.493 Test: test_reset_bdev_ctrlr ...passed 00:06:11.493 Test: test_find_io_path ...passed 00:06:11.493 Test: test_retry_io_if_ana_state_is_updating ...passed 00:06:11.493 Test: test_retry_io_for_io_path_error ...passed 00:06:11.493 Test: test_retry_io_count ...passed 00:06:11.493 Test: test_concurrent_read_ana_log_page ...passed 00:06:11.493 Test: test_retry_io_for_ana_error ...passed 00:06:11.493 Test: test_check_io_error_resiliency_params ...[2024-07-25 18:33:11.937705] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:06:11.493 [2024-07-25 18:33:11.937813] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6108:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:11.493 [2024-07-25 18:33:11.937858] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6117:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:11.493 [2024-07-25 18:33:11.937894] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6120:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:06:11.493 [2024-07-25 18:33:11.937929] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6132:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:11.493 [2024-07-25 18:33:11.937973] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6132:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:11.494 [2024-07-25 18:33:11.938002] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6112:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:06:11.494 passed 00:06:11.494 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-07-25 18:33:11.938071] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6127:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:06:11.494 [2024-07-25 18:33:11.938110] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6124:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:06:11.494 passed 00:06:11.494 Test: test_reconnect_ctrlr ...[2024-07-25 18:33:11.939016] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:11.494 [2024-07-25 18:33:11.939176] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:11.494 [2024-07-25 18:33:11.939420] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:11.494 [2024-07-25 18:33:11.939605] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:11.494 [2024-07-25 18:33:11.939794] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:11.494 passed 00:06:11.494 Test: test_retry_failover_ctrlr ...[2024-07-25 18:33:11.940181] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:11.494 passed 00:06:11.494 Test: test_fail_path ...[2024-07-25 18:33:11.940764] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:11.494 [2024-07-25 18:33:11.940918] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:11.494 [2024-07-25 18:33:11.941106] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:11.494 [2024-07-25 18:33:11.941206] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:11.494 [2024-07-25 18:33:11.941368] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:11.494 passed 00:06:11.494 Test: test_nvme_ns_cmp ...passed 00:06:11.494 Test: test_ana_transition ...passed 00:06:11.494 Test: test_set_preferred_path ...passed 00:06:11.494 Test: test_find_next_io_path ...passed 00:06:11.494 Test: test_find_io_path_min_qd ...passed 00:06:11.494 Test: test_disable_auto_failback ...[2024-07-25 18:33:11.943295] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:11.494 passed 00:06:11.494 Test: test_set_multipath_policy ...passed 00:06:11.494 Test: test_uuid_generation ...passed 00:06:11.494 Test: test_retry_io_to_same_path ...passed 00:06:11.494 Test: test_race_between_reset_and_disconnected ...passed 00:06:11.494 Test: test_ctrlr_op_rpc ...passed 00:06:11.494 Test: test_bdev_ctrlr_op_rpc ...passed 00:06:11.494 Test: test_disable_enable_ctrlr ...[2024-07-25 18:33:11.947357] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:11.494 [2024-07-25 18:33:11.947555] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:11.494 passed 00:06:11.494 Test: test_delete_ctrlr_done ...passed 00:06:11.494 Test: test_ns_remove_during_reset ...passed 00:06:11.494 Test: test_io_path_is_current ...passed 00:06:11.494 00:06:11.494 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.494 suites 1 1 n/a 0 0 00:06:11.494 tests 49 49 49 0 0 00:06:11.494 asserts 3578 3578 3578 0 n/a 00:06:11.494 00:06:11.494 Elapsed time = 0.037 seconds 00:06:11.494 18:33:11 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:06:11.494 00:06:11.494 00:06:11.494 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.494 http://cunit.sourceforge.net/ 00:06:11.494 00:06:11.494 Test Options 00:06:11.494 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:06:11.494 00:06:11.494 Suite: raid 00:06:11.494 Test: test_create_raid ...passed 00:06:11.494 Test: test_create_raid_superblock ...passed 00:06:11.494 Test: test_delete_raid ...passed 00:06:11.494 Test: test_create_raid_invalid_args ...[2024-07-25 18:33:12.006375] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1508:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:06:11.494 [2024-07-25 18:33:12.006902] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1502:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:06:11.494 [2024-07-25 18:33:12.007645] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1492:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:06:11.494 [2024-07-25 18:33:12.007937] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3307:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:11.494 [2024-07-25 18:33:12.008050] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3487:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:06:11.494 [2024-07-25 18:33:12.009179] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3307:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:11.494 [2024-07-25 18:33:12.009234] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3487:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:06:11.494 passed 00:06:11.494 Test: test_delete_raid_invalid_args ...passed 00:06:11.494 Test: test_io_channel ...passed 00:06:11.494 Test: test_reset_io ...passed 00:06:11.494 Test: test_multi_raid ...passed 00:06:11.494 Test: test_io_type_supported ...passed 00:06:11.494 Test: test_raid_json_dump_info ...passed 00:06:11.494 Test: test_context_size ...passed 00:06:11.494 Test: test_raid_level_conversions ...passed 00:06:11.494 Test: test_raid_io_split ...passed 00:06:11.494 Test: test_raid_process ...passed 00:06:11.494 Test: test_raid_process_with_qos ...passed 00:06:11.494 00:06:11.494 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.494 suites 1 1 n/a 0 0 00:06:11.494 tests 15 15 15 0 0 00:06:11.494 asserts 6602 6602 6602 0 n/a 00:06:11.494 00:06:11.494 Elapsed time = 0.034 seconds 00:06:11.494 18:33:12 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:06:11.754 00:06:11.754 00:06:11.754 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.754 http://cunit.sourceforge.net/ 00:06:11.754 00:06:11.754 00:06:11.754 Suite: raid_sb 00:06:11.754 Test: test_raid_bdev_write_superblock ...passed 
00:06:11.754 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:11.754 Test: test_raid_bdev_parse_superblock ...[2024-07-25 18:33:12.080259] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:11.754 passed 00:06:11.754 Suite: raid_sb_md 00:06:11.754 Test: test_raid_bdev_write_superblock ...passed 00:06:11.754 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:11.754 Test: test_raid_bdev_parse_superblock ...[2024-07-25 18:33:12.080976] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:11.754 passed 00:06:11.754 Suite: raid_sb_md_interleaved 00:06:11.754 Test: test_raid_bdev_write_superblock ...passed 00:06:11.754 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:11.754 Test: test_raid_bdev_parse_superblock ...[2024-07-25 18:33:12.081430] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:11.754 passed 00:06:11.754 00:06:11.754 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.754 suites 3 3 n/a 0 0 00:06:11.754 tests 9 9 9 0 0 00:06:11.754 asserts 139 139 139 0 n/a 00:06:11.754 00:06:11.754 Elapsed time = 0.002 seconds 00:06:11.754 18:33:12 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:06:11.754 00:06:11.754 00:06:11.754 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.754 http://cunit.sourceforge.net/ 00:06:11.754 00:06:11.754 00:06:11.754 Suite: concat 00:06:11.754 Test: test_concat_start ...passed 00:06:11.754 Test: test_concat_rw ...passed 00:06:11.754 Test: test_concat_null_payload ...passed 00:06:11.754 00:06:11.754 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.754 suites 1 1 n/a 0 0 00:06:11.754 tests 3 3 3 0 0 00:06:11.754 asserts 8460 8460 8460 0 n/a 00:06:11.754 00:06:11.754 Elapsed time = 0.008 seconds 00:06:11.754 18:33:12 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:06:11.754 00:06:11.754 00:06:11.754 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.754 http://cunit.sourceforge.net/ 00:06:11.754 00:06:11.754 00:06:11.754 Suite: raid0 00:06:11.754 Test: test_write_io ...passed 00:06:11.754 Test: test_read_io ...passed 00:06:11.754 Test: test_unmap_io ...passed 00:06:11.754 Test: test_io_failure ...passed 00:06:11.754 Suite: raid0_dif 00:06:11.754 Test: test_write_io ...passed 00:06:11.754 Test: test_read_io ...passed 00:06:12.014 Test: test_unmap_io ...passed 00:06:12.014 Test: test_io_failure ...passed 00:06:12.014 00:06:12.014 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.014 suites 2 2 n/a 0 0 00:06:12.014 tests 8 8 8 0 0 00:06:12.014 asserts 368291 368291 368291 0 n/a 00:06:12.014 00:06:12.014 Elapsed time = 0.163 seconds 00:06:12.014 18:33:12 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:06:12.014 00:06:12.014 00:06:12.014 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.014 http://cunit.sourceforge.net/ 00:06:12.014 00:06:12.014 00:06:12.014 Suite: raid1 00:06:12.014 Test: test_raid1_start ...passed 00:06:12.014 Test: test_raid1_read_balancing ...passed 00:06:12.014 
Test: test_raid1_write_error ...passed 00:06:12.014 Test: test_raid1_read_error ...passed 00:06:12.014 00:06:12.014 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.014 suites 1 1 n/a 0 0 00:06:12.014 tests 4 4 4 0 0 00:06:12.014 asserts 4374 4374 4374 0 n/a 00:06:12.014 00:06:12.014 Elapsed time = 0.006 seconds 00:06:12.014 18:33:12 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:06:12.014 00:06:12.014 00:06:12.014 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.014 http://cunit.sourceforge.net/ 00:06:12.014 00:06:12.014 00:06:12.014 Suite: zone 00:06:12.014 Test: test_zone_get_operation ...passed 00:06:12.014 Test: test_bdev_zone_get_info ...passed 00:06:12.014 Test: test_bdev_zone_management ...passed 00:06:12.014 Test: test_bdev_zone_append ...passed 00:06:12.014 Test: test_bdev_zone_append_with_md ...passed 00:06:12.014 Test: test_bdev_zone_appendv ...passed 00:06:12.014 Test: test_bdev_zone_appendv_with_md ...passed 00:06:12.014 Test: test_bdev_io_get_append_location ...passed 00:06:12.014 00:06:12.014 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.014 suites 1 1 n/a 0 0 00:06:12.014 tests 8 8 8 0 0 00:06:12.014 asserts 94 94 94 0 n/a 00:06:12.014 00:06:12.014 Elapsed time = 0.001 seconds 00:06:12.014 18:33:12 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:06:12.014 00:06:12.014 00:06:12.014 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.014 http://cunit.sourceforge.net/ 00:06:12.014 00:06:12.014 00:06:12.014 Suite: gpt_parse 00:06:12.014 Test: test_parse_mbr_and_primary ...[2024-07-25 18:33:12.502242] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:12.014 [2024-07-25 18:33:12.502595] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:12.014 [2024-07-25 18:33:12.502657] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:12.014 [2024-07-25 18:33:12.502754] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:12.014 [2024-07-25 18:33:12.502808] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:12.014 [2024-07-25 18:33:12.502932] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:12.014 passed 00:06:12.014 Test: test_parse_secondary ...[2024-07-25 18:33:12.503707] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:12.014 [2024-07-25 18:33:12.503774] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:12.014 [2024-07-25 18:33:12.503830] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:12.014 [2024-07-25 18:33:12.503879] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:12.014 passed 00:06:12.014 Test: test_check_mbr ...[2024-07-25 18:33:12.504628] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: 
Gpt and the related buffer should not be NULL 00:06:12.014 passed 00:06:12.014 Test: test_read_header ...[2024-07-25 18:33:12.504695] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:12.014 [2024-07-25 18:33:12.504767] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:06:12.014 [2024-07-25 18:33:12.504889] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:06:12.014 [2024-07-25 18:33:12.504989] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:06:12.014 passed 00:06:12.014 Test: test_read_partitions ...[2024-07-25 18:33:12.505055] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:06:12.014 [2024-07-25 18:33:12.505111] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:06:12.014 [2024-07-25 18:33:12.505163] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:06:12.014 [2024-07-25 18:33:12.505239] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:06:12.014 [2024-07-25 18:33:12.505305] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:06:12.014 [2024-07-25 18:33:12.505358] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:06:12.014 [2024-07-25 18:33:12.505401] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:06:12.014 [2024-07-25 18:33:12.505803] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:06:12.014 passed 00:06:12.014 00:06:12.014 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.014 suites 1 1 n/a 0 0 00:06:12.014 tests 5 5 5 0 0 00:06:12.014 asserts 33 33 33 0 n/a 00:06:12.014 00:06:12.014 Elapsed time = 0.004 seconds 00:06:12.014 18:33:12 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:06:12.014 00:06:12.014 00:06:12.014 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.014 http://cunit.sourceforge.net/ 00:06:12.014 00:06:12.014 00:06:12.014 Suite: bdev_part 00:06:12.015 Test: part_test ...[2024-07-25 18:33:12.555510] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 54128c59-e3cd-56a8-9df4-5c3c4915da81 already exists 00:06:12.015 [2024-07-25 18:33:12.555895] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:54128c59-e3cd-56a8-9df4-5c3c4915da81 alias for bdev test1 00:06:12.015 passed 00:06:12.015 Test: part_free_test ...passed 00:06:12.275 Test: part_get_io_channel_test ...passed 00:06:12.275 Test: part_construct_ext ...passed 00:06:12.275 00:06:12.275 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.275 suites 1 1 n/a 0 0 00:06:12.275 tests 4 4 4 0 0 00:06:12.275 asserts 48 48 48 0 n/a 00:06:12.275 00:06:12.275 Elapsed time = 0.074 seconds 00:06:12.275 18:33:12 unittest.unittest_bdev -- unit/unittest.sh@30 
-- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:06:12.275 00:06:12.275 00:06:12.275 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.275 http://cunit.sourceforge.net/ 00:06:12.275 00:06:12.275 00:06:12.275 Suite: scsi_nvme_suite 00:06:12.275 Test: scsi_nvme_translate_test ...passed 00:06:12.275 00:06:12.275 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.275 suites 1 1 n/a 0 0 00:06:12.275 tests 1 1 1 0 0 00:06:12.275 asserts 104 104 104 0 n/a 00:06:12.275 00:06:12.275 Elapsed time = 0.000 seconds 00:06:12.275 18:33:12 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:06:12.275 00:06:12.275 00:06:12.275 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.275 http://cunit.sourceforge.net/ 00:06:12.275 00:06:12.275 00:06:12.275 Suite: lvol 00:06:12.275 Test: ut_lvs_init ...[2024-07-25 18:33:12.732398] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:06:12.275 [2024-07-25 18:33:12.732898] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:06:12.275 passed 00:06:12.275 Test: ut_lvol_init ...passed 00:06:12.275 Test: ut_lvol_snapshot ...passed 00:06:12.275 Test: ut_lvol_clone ...passed 00:06:12.275 Test: ut_lvs_destroy ...passed 00:06:12.275 Test: ut_lvs_unload ...passed 00:06:12.275 Test: ut_lvol_resize ...[2024-07-25 18:33:12.734857] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:06:12.275 passed 00:06:12.275 Test: ut_lvol_set_read_only ...passed 00:06:12.275 Test: ut_lvol_hotremove ...passed 00:06:12.275 Test: ut_vbdev_lvol_get_io_channel ...passed 00:06:12.275 Test: ut_vbdev_lvol_io_type_supported ...passed 00:06:12.275 Test: ut_lvol_read_write ...passed 00:06:12.276 Test: ut_vbdev_lvol_submit_request ...passed 00:06:12.276 Test: ut_lvol_examine_config ...passed 00:06:12.276 Test: ut_lvol_examine_disk ...[2024-07-25 18:33:12.735609] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:06:12.276 passed 00:06:12.276 Test: ut_lvol_rename ...[2024-07-25 18:33:12.736857] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:06:12.276 [2024-07-25 18:33:12.736998] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:06:12.276 passed 00:06:12.276 Test: ut_bdev_finish ...passed 00:06:12.276 Test: ut_lvs_rename ...passed 00:06:12.276 Test: ut_lvol_seek ...passed 00:06:12.276 Test: ut_esnap_dev_create ...[2024-07-25 18:33:12.737903] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:06:12.276 [2024-07-25 18:33:12.737996] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:06:12.276 [2024-07-25 18:33:12.738042] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:06:12.276 passed 00:06:12.276 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-25 18:33:12.738195] 
/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:06:12.276 [2024-07-25 18:33:12.738239] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:06:12.276 passed 00:06:12.276 Test: ut_lvol_shallow_copy ...[2024-07-25 18:33:12.738698] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:06:12.276 [2024-07-25 18:33:12.738753] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:06:12.276 passed 00:06:12.276 Test: ut_lvol_set_external_parent ...passed 00:06:12.276 00:06:12.276 [2024-07-25 18:33:12.738929] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:06:12.276 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.276 suites 1 1 n/a 0 0 00:06:12.276 tests 23 23 23 0 0 00:06:12.276 asserts 770 770 770 0 n/a 00:06:12.276 00:06:12.276 Elapsed time = 0.007 seconds 00:06:12.276 18:33:12 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:06:12.276 00:06:12.276 00:06:12.276 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.276 http://cunit.sourceforge.net/ 00:06:12.276 00:06:12.276 00:06:12.276 Suite: zone_block 00:06:12.276 Test: test_zone_block_create ...passed 00:06:12.276 Test: test_zone_block_create_invalid ...[2024-07-25 18:33:12.810041] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:06:12.276 [2024-07-25 18:33:12.810466] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-25 18:33:12.810742] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:06:12.276 [2024-07-25 18:33:12.810838] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-25 18:33:12.811077] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 861:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:06:12.276 [2024-07-25 18:33:12.811141] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-25 18:33:12.811254] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 866:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:06:12.276 [2024-07-25 18:33:12.811324] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:06:12.276 Test: test_get_zone_info ...[2024-07-25 18:33:12.812014] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:12.276 [2024-07-25 18:33:12.812123] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:12.276 [2024-07-25 18:33:12.812197] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:12.276 passed 00:06:12.276 Test: test_supported_io_types ...passed 00:06:12.276 Test: test_reset_zone ...[2024-07-25 18:33:12.813326] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:12.276 [2024-07-25 18:33:12.813420] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:12.276 passed 00:06:12.276 Test: test_open_zone ...[2024-07-25 18:33:12.814047] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:12.276 [2024-07-25 18:33:12.814850] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:12.276 [2024-07-25 18:33:12.814960] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:12.276 passed 00:06:12.276 Test: test_zone_write ...[2024-07-25 18:33:12.815646] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:12.276 [2024-07-25 18:33:12.815727] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:12.276 [2024-07-25 18:33:12.815812] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:12.276 [2024-07-25 18:33:12.815873] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:12.276 [2024-07-25 18:33:12.823731] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:06:12.276 [2024-07-25 18:33:12.823804] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:12.276 [2024-07-25 18:33:12.823886] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:06:12.276 [2024-07-25 18:33:12.823930] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:12.276 [2024-07-25 18:33:12.831589] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:12.276 [2024-07-25 18:33:12.831672] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:12.276 passed 00:06:12.276 Test: test_zone_read ...[2024-07-25 18:33:12.832273] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:06:12.276 [2024-07-25 18:33:12.832336] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:12.276 [2024-07-25 18:33:12.832442] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:06:12.276 [2024-07-25 18:33:12.832491] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:12.276 [2024-07-25 18:33:12.833116] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:06:12.276 [2024-07-25 18:33:12.833171] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:12.276 passed 00:06:12.276 Test: test_close_zone ...[2024-07-25 18:33:12.833700] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:12.276 [2024-07-25 18:33:12.833829] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:12.276 [2024-07-25 18:33:12.834111] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:12.276 [2024-07-25 18:33:12.834202] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:12.276 passed 00:06:12.276 Test: test_finish_zone ...[2024-07-25 18:33:12.834958] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:12.276 [2024-07-25 18:33:12.835052] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:12.276 passed 00:06:12.276 Test: test_append_zone ...[2024-07-25 18:33:12.835531] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:12.276 [2024-07-25 18:33:12.835596] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:12.276 [2024-07-25 18:33:12.835690] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:12.276 [2024-07-25 18:33:12.835734] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:12.536 [2024-07-25 18:33:12.851273] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:12.536 [2024-07-25 18:33:12.851344] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:12.536 passed 00:06:12.536 00:06:12.536 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.536 suites 1 1 n/a 0 0 00:06:12.536 tests 11 11 11 0 0 00:06:12.536 asserts 3437 3437 3437 0 n/a 00:06:12.536 00:06:12.536 Elapsed time = 0.043 seconds 00:06:12.536 18:33:12 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:06:12.536 00:06:12.536 00:06:12.536 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.536 http://cunit.sourceforge.net/ 00:06:12.536 00:06:12.536 00:06:12.536 Suite: bdev 00:06:12.536 Test: basic ...[2024-07-25 18:33:13.000214] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55d5818f7b41): Operation not permitted (rc=-1) 00:06:12.536 [2024-07-25 18:33:13.000650] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x55d5818f7b00): Operation not permitted (rc=-1) 00:06:12.536 [2024-07-25 18:33:13.000714] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55d5818f7b41): Operation not permitted (rc=-1) 00:06:12.536 passed 00:06:12.795 Test: unregister_and_close ...passed 00:06:12.795 Test: unregister_and_close_different_threads ...passed 00:06:12.795 Test: basic_qos ...passed 00:06:12.795 Test: put_channel_during_reset ...passed 00:06:12.795 Test: aborted_reset ...passed 00:06:13.053 Test: aborted_reset_no_outstanding_io ...passed 00:06:13.053 Test: io_during_reset ...passed 00:06:13.053 Test: reset_completions ...passed 00:06:13.053 Test: io_during_qos_queue ...passed 00:06:13.053 Test: io_during_qos_reset ...passed 00:06:13.053 Test: enomem ...passed 00:06:13.311 Test: enomem_multi_bdev ...passed 00:06:13.311 Test: enomem_multi_bdev_unregister ...passed 00:06:13.311 Test: enomem_multi_io_target ...passed 00:06:13.311 Test: qos_dynamic_enable ...passed 00:06:13.311 Test: bdev_histograms_mt ...passed 00:06:13.570 Test: bdev_set_io_timeout_mt ...[2024-07-25 18:33:13.911563] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:06:13.570 passed 00:06:13.570 Test: lock_lba_range_then_submit_io ...[2024-07-25 18:33:13.932181] thread.c:2177:spdk_io_device_register: *ERROR*: io_device 0x55d5818f7ac0 already registered (old:0x6130000003c0 new:0x613000000c80) 00:06:13.570 passed 00:06:13.570 Test: unregister_during_reset ...passed 00:06:13.570 Test: event_notify_and_close ...passed 00:06:13.570 Test: unregister_and_qos_poller ...passed 00:06:13.570 Suite: bdev_wrong_thread 00:06:13.570 Test: spdk_bdev_register_wt ...[2024-07-25 18:33:14.101243] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8535:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x619000158b80 (0x619000158b80) 00:06:13.570 passed 00:06:13.570 Test: spdk_bdev_examine_wt ...[2024-07-25 18:33:14.101596] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 810:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x619000158b80 (0x619000158b80) 00:06:13.570 passed 00:06:13.570 00:06:13.570 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.570 suites 2 2 n/a 0 0 00:06:13.570 tests 24 24 24 0 0 00:06:13.571 asserts 621 621 621 0 n/a 00:06:13.571 00:06:13.571 Elapsed time = 1.141 seconds 00:06:13.571 00:06:13.571 real 0m4.205s 00:06:13.571 user 0m2.049s 00:06:13.571 sys 0m2.162s 00:06:13.571 18:33:14 unittest.unittest_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.571 18:33:14 
unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:13.571 ************************************ 00:06:13.571 END TEST unittest_bdev 00:06:13.571 ************************************ 00:06:13.836 18:33:14 unittest -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:13.836 18:33:14 unittest -- unit/unittest.sh@220 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:13.836 18:33:14 unittest -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:13.836 18:33:14 unittest -- unit/unittest.sh@229 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:13.836 18:33:14 unittest -- unit/unittest.sh@230 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:13.836 18:33:14 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.836 18:33:14 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.836 18:33:14 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:13.836 ************************************ 00:06:13.836 START TEST unittest_bdev_raid5f 00:06:13.836 ************************************ 00:06:13.836 18:33:14 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:13.836 00:06:13.836 00:06:13.836 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.836 http://cunit.sourceforge.net/ 00:06:13.836 00:06:13.836 00:06:13.836 Suite: raid5f 00:06:13.836 Test: test_raid5f_start ...passed 00:06:14.439 Test: test_raid5f_submit_read_request ...passed 00:06:14.698 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:06:18.887 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:06:40.820 Test: test_raid5f_chunk_write_error ...passed 00:06:53.026 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:06:54.934 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:07:33.699 Test: test_raid5f_submit_read_request_degraded ...passed 00:07:33.699 00:07:33.699 Run Summary: Type Total Ran Passed Failed Inactive 00:07:33.699 suites 1 1 n/a 0 0 00:07:33.699 tests 8 8 8 0 0 00:07:33.699 asserts 518158 518158 518158 0 n/a 00:07:33.699 00:07:33.699 Elapsed time = 74.324 seconds 00:07:33.699 00:07:33.699 real 1m14.441s 00:07:33.699 user 1m9.065s 00:07:33.699 sys 0m5.368s 00:07:33.699 18:34:28 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.699 18:34:28 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:07:33.699 ************************************ 00:07:33.699 END TEST unittest_bdev_raid5f 00:07:33.699 ************************************ 00:07:33.699 18:34:28 unittest -- unit/unittest.sh@233 -- # run_test unittest_blob_blobfs unittest_blob 00:07:33.699 18:34:28 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:33.699 18:34:28 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.699 18:34:28 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:33.699 ************************************ 00:07:33.699 START TEST unittest_blob_blobfs 00:07:33.699 ************************************ 00:07:33.699 18:34:28 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1125 -- # unittest_blob 00:07:33.699 
18:34:28 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:07:33.699 18:34:28 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:07:33.699 00:07:33.699 00:07:33.699 CUnit - A unit testing framework for C - Version 2.1-3 00:07:33.699 http://cunit.sourceforge.net/ 00:07:33.699 00:07:33.699 00:07:33.699 Suite: blob_nocopy_noextent 00:07:33.699 Test: blob_init ...[2024-07-25 18:34:28.772460] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:33.699 passed 00:07:33.699 Test: blob_thin_provision ...passed 00:07:33.699 Test: blob_read_only ...passed 00:07:33.699 Test: bs_load ...[2024-07-25 18:34:28.892725] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:33.699 passed 00:07:33.699 Test: bs_load_custom_cluster_size ...passed 00:07:33.699 Test: bs_load_after_failed_grow ...passed 00:07:33.699 Test: bs_cluster_sz ...[2024-07-25 18:34:28.936761] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:33.699 [2024-07-25 18:34:28.937225] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:07:33.699 [2024-07-25 18:34:28.937382] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:33.699 passed 00:07:33.699 Test: bs_resize_md ...passed 00:07:33.699 Test: bs_destroy ...passed 00:07:33.699 Test: bs_type ...passed 00:07:33.699 Test: bs_super_block ...passed 00:07:33.699 Test: bs_test_recover_cluster_count ...passed 00:07:33.699 Test: bs_grow_live ...passed 00:07:33.699 Test: bs_grow_live_no_space ...passed 00:07:33.699 Test: bs_test_grow ...passed 00:07:33.699 Test: blob_serialize_test ...passed 00:07:33.699 Test: super_block_crc ...passed 00:07:33.699 Test: blob_thin_prov_write_count_io ...passed 00:07:33.699 Test: blob_thin_prov_unmap_cluster ...passed 00:07:33.699 Test: bs_load_iter_test ...passed 00:07:33.699 Test: blob_relations ...[2024-07-25 18:34:29.230549] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:33.699 [2024-07-25 18:34:29.230679] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.699 [2024-07-25 18:34:29.231621] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:33.699 [2024-07-25 18:34:29.231689] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.699 passed 00:07:33.699 Test: blob_relations2 ...[2024-07-25 18:34:29.252889] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:33.699 [2024-07-25 18:34:29.252975] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.699 [2024-07-25 18:34:29.253027] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with 
more than one clone 00:07:33.699 [2024-07-25 18:34:29.253053] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.699 [2024-07-25 18:34:29.254422] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:33.699 [2024-07-25 18:34:29.254479] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.699 [2024-07-25 18:34:29.254854] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:33.699 [2024-07-25 18:34:29.254916] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.699 passed 00:07:33.699 Test: blob_relations3 ...passed 00:07:33.699 Test: blobstore_clean_power_failure ...passed 00:07:33.699 Test: blob_delete_snapshot_power_failure ...[2024-07-25 18:34:29.513945] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:33.699 [2024-07-25 18:34:29.533683] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:33.699 [2024-07-25 18:34:29.533790] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:33.699 [2024-07-25 18:34:29.533849] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.699 [2024-07-25 18:34:29.553434] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:33.699 [2024-07-25 18:34:29.553528] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:33.699 [2024-07-25 18:34:29.553576] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:33.699 [2024-07-25 18:34:29.553634] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.699 [2024-07-25 18:34:29.573190] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:33.699 [2024-07-25 18:34:29.573325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.699 [2024-07-25 18:34:29.592951] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:33.699 [2024-07-25 18:34:29.593084] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.699 [2024-07-25 18:34:29.613016] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:33.699 [2024-07-25 18:34:29.613107] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.699 passed 00:07:33.699 Test: blob_create_snapshot_power_failure ...[2024-07-25 18:34:29.672235] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:33.699 [2024-07-25 18:34:29.711169] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:33.699 [2024-07-25 18:34:29.731179] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:33.699 passed 00:07:33.699 Test: blob_io_unit ...passed 00:07:33.699 Test: blob_io_unit_compatibility ...passed 00:07:33.699 Test: blob_ext_md_pages ...passed 00:07:33.699 Test: blob_esnap_io_4096_4096 ...passed 00:07:33.699 Test: blob_esnap_io_512_512 ...passed 00:07:33.699 Test: blob_esnap_io_4096_512 ...passed 00:07:33.699 Test: blob_esnap_io_512_4096 ...passed 00:07:33.699 Test: blob_esnap_clone_resize ...passed 00:07:33.699 Suite: blob_bs_nocopy_noextent 00:07:33.699 Test: blob_open ...passed 00:07:33.700 Test: blob_create ...[2024-07-25 18:34:30.152599] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:33.700 passed 00:07:33.700 Test: blob_create_loop ...passed 00:07:33.700 Test: blob_create_fail ...[2024-07-25 18:34:30.287060] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:33.700 passed 00:07:33.700 Test: blob_create_internal ...passed 00:07:33.700 Test: blob_create_zero_extent ...passed 00:07:33.700 Test: blob_snapshot ...passed 00:07:33.700 Test: blob_clone ...passed 00:07:33.700 Test: blob_inflate ...[2024-07-25 18:34:30.583304] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:33.700 passed 00:07:33.700 Test: blob_delete ...passed 00:07:33.700 Test: blob_resize_test ...[2024-07-25 18:34:30.694293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:33.700 passed 00:07:33.700 Test: blob_resize_thin_test ...passed 00:07:33.700 Test: channel_ops ...passed 00:07:33.700 Test: blob_super ...passed 00:07:33.700 Test: blob_rw_verify_iov ...passed 00:07:33.700 Test: blob_unmap ...passed 00:07:33.700 Test: blob_iter ...passed 00:07:33.700 Test: blob_parse_md ...passed 00:07:33.700 Test: bs_load_pending_removal ...passed 00:07:33.700 Test: bs_unload ...[2024-07-25 18:34:31.195437] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:33.700 passed 00:07:33.700 Test: bs_usable_clusters ...passed 00:07:33.700 Test: blob_crc ...[2024-07-25 18:34:31.305766] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:33.700 [2024-07-25 18:34:31.305930] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:33.700 passed 00:07:33.700 Test: blob_flags ...passed 00:07:33.700 Test: bs_version ...passed 00:07:33.700 Test: blob_set_xattrs_test ...[2024-07-25 18:34:31.472304] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:33.700 [2024-07-25 18:34:31.472422] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:33.700 passed 00:07:33.700 Test: blob_thin_prov_alloc ...passed 
00:07:33.700 Test: blob_insert_cluster_msg_test ...passed 00:07:33.700 Test: blob_thin_prov_rw ...passed 00:07:33.700 Test: blob_thin_prov_rle ...passed 00:07:33.700 Test: blob_thin_prov_rw_iov ...passed 00:07:33.700 Test: blob_snapshot_rw ...passed 00:07:33.700 Test: blob_snapshot_rw_iov ...passed 00:07:33.700 Test: blob_inflate_rw ...passed 00:07:33.700 Test: blob_snapshot_freeze_io ...passed 00:07:33.700 Test: blob_operation_split_rw ...passed 00:07:33.700 Test: blob_operation_split_rw_iov ...passed 00:07:33.700 Test: blob_simultaneous_operations ...[2024-07-25 18:34:32.613812] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:33.700 [2024-07-25 18:34:32.613924] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.700 [2024-07-25 18:34:32.615297] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:33.700 [2024-07-25 18:34:32.615375] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.700 [2024-07-25 18:34:32.629315] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:33.700 [2024-07-25 18:34:32.629376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.700 [2024-07-25 18:34:32.629504] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:33.700 [2024-07-25 18:34:32.629537] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.700 passed 00:07:33.700 Test: blob_persist_test ...passed 00:07:33.700 Test: blob_decouple_snapshot ...passed 00:07:33.700 Test: blob_seek_io_unit ...passed 00:07:33.700 Test: blob_nested_freezes ...passed 00:07:33.700 Test: blob_clone_resize ...passed 00:07:33.700 Test: blob_shallow_copy ...[2024-07-25 18:34:33.052595] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:07:33.700 [2024-07-25 18:34:33.052986] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:07:33.700 [2024-07-25 18:34:33.053279] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:07:33.700 passed 00:07:33.700 Suite: blob_blob_nocopy_noextent 00:07:33.700 Test: blob_write ...passed 00:07:33.700 Test: blob_read ...passed 00:07:33.700 Test: blob_rw_verify ...passed 00:07:33.700 Test: blob_rw_verify_iov_nomem ...passed 00:07:33.700 Test: blob_rw_iov_read_only ...passed 00:07:33.700 Test: blob_xattr ...passed 00:07:33.700 Test: blob_dirty_shutdown ...passed 00:07:33.700 Test: blob_is_degraded ...passed 00:07:33.700 Suite: blob_esnap_bs_nocopy_noextent 00:07:33.700 Test: blob_esnap_create ...passed 00:07:33.700 Test: blob_esnap_thread_add_remove ...passed 00:07:33.700 Test: blob_esnap_clone_snapshot ...passed 00:07:33.700 Test: blob_esnap_clone_inflate ...passed 00:07:33.700 Test: blob_esnap_clone_decouple ...passed 00:07:33.700 Test: blob_esnap_clone_reload 
...passed 00:07:33.700 Test: blob_esnap_hotplug ...passed 00:07:33.700 Test: blob_set_parent ...[2024-07-25 18:34:33.960499] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:07:33.700 [2024-07-25 18:34:33.960596] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:07:33.700 [2024-07-25 18:34:33.960749] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:07:33.700 [2024-07-25 18:34:33.960797] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:07:33.700 [2024-07-25 18:34:33.961276] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:33.700 passed 00:07:33.700 Test: blob_set_external_parent ...[2024-07-25 18:34:34.017325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:07:33.700 [2024-07-25 18:34:34.017433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:07:33.700 [2024-07-25 18:34:34.017479] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:07:33.700 [2024-07-25 18:34:34.017924] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:33.700 passed 00:07:33.700 Suite: blob_nocopy_extent 00:07:33.700 Test: blob_init ...[2024-07-25 18:34:34.036905] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:33.700 passed 00:07:33.700 Test: blob_thin_provision ...passed 00:07:33.700 Test: blob_read_only ...passed 00:07:33.700 Test: bs_load ...[2024-07-25 18:34:34.112365] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:33.700 passed 00:07:33.700 Test: bs_load_custom_cluster_size ...passed 00:07:33.700 Test: bs_load_after_failed_grow ...passed 00:07:33.700 Test: bs_cluster_sz ...[2024-07-25 18:34:34.152616] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:33.700 [2024-07-25 18:34:34.152934] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:33.700 [2024-07-25 18:34:34.152984] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:33.700 passed 00:07:33.700 Test: bs_resize_md ...passed 00:07:33.700 Test: bs_destroy ...passed 00:07:33.700 Test: bs_type ...passed 00:07:33.700 Test: bs_super_block ...passed 00:07:33.700 Test: bs_test_recover_cluster_count ...passed 00:07:33.960 Test: bs_grow_live ...passed 00:07:33.960 Test: bs_grow_live_no_space ...passed 00:07:33.960 Test: bs_test_grow ...passed 00:07:33.960 Test: blob_serialize_test ...passed 00:07:33.960 Test: super_block_crc ...passed 00:07:33.960 Test: blob_thin_prov_write_count_io ...passed 00:07:33.960 Test: blob_thin_prov_unmap_cluster ...passed 00:07:33.960 Test: bs_load_iter_test ...passed 00:07:33.960 Test: blob_relations ...[2024-07-25 18:34:34.429673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:33.960 [2024-07-25 18:34:34.429825] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.960 [2024-07-25 18:34:34.430727] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:33.960 [2024-07-25 18:34:34.430780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.960 passed 00:07:33.960 Test: blob_relations2 ...[2024-07-25 18:34:34.451833] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:33.960 [2024-07-25 18:34:34.451927] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.960 [2024-07-25 18:34:34.451970] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:33.960 [2024-07-25 18:34:34.452000] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.960 [2024-07-25 18:34:34.453307] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:33.960 [2024-07-25 18:34:34.453379] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.960 [2024-07-25 18:34:34.453752] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:33.960 [2024-07-25 18:34:34.453809] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.960 passed 00:07:33.960 Test: blob_relations3 ...passed 00:07:34.220 Test: blobstore_clean_power_failure ...passed 00:07:34.220 Test: blob_delete_snapshot_power_failure ...[2024-07-25 18:34:34.713306] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:34.220 [2024-07-25 18:34:34.733298] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:34.220 [2024-07-25 18:34:34.753349] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:34.220 [2024-07-25 18:34:34.753454] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:34.220 [2024-07-25 18:34:34.753494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:34.220 [2024-07-25 18:34:34.773428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:34.220 [2024-07-25 18:34:34.773531] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:34.220 [2024-07-25 18:34:34.773577] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:34.220 [2024-07-25 18:34:34.773616] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:34.479 [2024-07-25 18:34:34.793432] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:34.479 [2024-07-25 18:34:34.793539] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:34.479 [2024-07-25 18:34:34.793567] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:34.479 [2024-07-25 18:34:34.793608] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:34.479 [2024-07-25 18:34:34.813411] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:34.479 [2024-07-25 18:34:34.813537] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:34.479 [2024-07-25 18:34:34.833271] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:34.479 [2024-07-25 18:34:34.833408] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:34.479 [2024-07-25 18:34:34.853469] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:34.479 [2024-07-25 18:34:34.853590] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:34.479 passed 00:07:34.479 Test: blob_create_snapshot_power_failure ...[2024-07-25 18:34:34.913774] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:34.479 [2024-07-25 18:34:34.933380] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:34.479 [2024-07-25 18:34:34.972088] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:34.479 [2024-07-25 18:34:34.991834] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:34.738 passed 00:07:34.738 Test: blob_io_unit ...passed 00:07:34.738 Test: blob_io_unit_compatibility ...passed 00:07:34.738 Test: blob_ext_md_pages ...passed 00:07:34.738 Test: blob_esnap_io_4096_4096 ...passed 00:07:34.738 Test: blob_esnap_io_512_512 ...passed 00:07:34.738 Test: blob_esnap_io_4096_512 ...passed 00:07:34.738 Test: 
blob_esnap_io_512_4096 ...passed 00:07:34.997 Test: blob_esnap_clone_resize ...passed 00:07:34.997 Suite: blob_bs_nocopy_extent 00:07:34.997 Test: blob_open ...passed 00:07:34.997 Test: blob_create ...[2024-07-25 18:34:35.412762] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:34.997 passed 00:07:34.997 Test: blob_create_loop ...passed 00:07:34.997 Test: blob_create_fail ...[2024-07-25 18:34:35.553470] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:35.256 passed 00:07:35.256 Test: blob_create_internal ...passed 00:07:35.256 Test: blob_create_zero_extent ...passed 00:07:35.256 Test: blob_snapshot ...passed 00:07:35.256 Test: blob_clone ...passed 00:07:35.514 Test: blob_inflate ...[2024-07-25 18:34:35.853909] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:35.514 passed 00:07:35.514 Test: blob_delete ...passed 00:07:35.514 Test: blob_resize_test ...[2024-07-25 18:34:35.963383] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:35.514 passed 00:07:35.514 Test: blob_resize_thin_test ...passed 00:07:35.773 Test: channel_ops ...passed 00:07:35.773 Test: blob_super ...passed 00:07:35.773 Test: blob_rw_verify_iov ...passed 00:07:35.773 Test: blob_unmap ...passed 00:07:35.773 Test: blob_iter ...passed 00:07:36.031 Test: blob_parse_md ...passed 00:07:36.031 Test: bs_load_pending_removal ...passed 00:07:36.031 Test: bs_unload ...[2024-07-25 18:34:36.464510] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:36.031 passed 00:07:36.031 Test: bs_usable_clusters ...passed 00:07:36.031 Test: blob_crc ...[2024-07-25 18:34:36.575149] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:36.031 [2024-07-25 18:34:36.575278] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:36.031 passed 00:07:36.290 Test: blob_flags ...passed 00:07:36.290 Test: bs_version ...passed 00:07:36.290 Test: blob_set_xattrs_test ...[2024-07-25 18:34:36.741820] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:36.290 [2024-07-25 18:34:36.741936] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:36.290 passed 00:07:36.548 Test: blob_thin_prov_alloc ...passed 00:07:36.548 Test: blob_insert_cluster_msg_test ...passed 00:07:36.548 Test: blob_thin_prov_rw ...passed 00:07:36.548 Test: blob_thin_prov_rle ...passed 00:07:36.807 Test: blob_thin_prov_rw_iov ...passed 00:07:36.807 Test: blob_snapshot_rw ...passed 00:07:36.807 Test: blob_snapshot_rw_iov ...passed 00:07:37.064 Test: blob_inflate_rw ...passed 00:07:37.064 Test: blob_snapshot_freeze_io ...passed 00:07:37.322 Test: blob_operation_split_rw ...passed 00:07:37.322 Test: blob_operation_split_rw_iov ...passed 00:07:37.322 Test: blob_simultaneous_operations ...[2024-07-25 18:34:37.869338] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:37.322 [2024-07-25 18:34:37.869424] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.322 [2024-07-25 18:34:37.870805] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:37.322 [2024-07-25 18:34:37.870863] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.322 [2024-07-25 18:34:37.884826] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:37.322 [2024-07-25 18:34:37.884879] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.322 [2024-07-25 18:34:37.885005] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:37.322 [2024-07-25 18:34:37.885022] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:37.581 passed 00:07:37.581 Test: blob_persist_test ...passed 00:07:37.581 Test: blob_decouple_snapshot ...passed 00:07:37.581 Test: blob_seek_io_unit ...passed 00:07:37.840 Test: blob_nested_freezes ...passed 00:07:37.840 Test: blob_clone_resize ...passed 00:07:37.840 Test: blob_shallow_copy ...[2024-07-25 18:34:38.306139] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:07:37.840 [2024-07-25 18:34:38.306511] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:07:37.840 [2024-07-25 18:34:38.306771] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:07:37.840 passed 00:07:37.840 Suite: blob_blob_nocopy_extent 00:07:37.840 Test: blob_write ...passed 00:07:38.099 Test: blob_read ...passed 00:07:38.099 Test: blob_rw_verify ...passed 00:07:38.099 Test: blob_rw_verify_iov_nomem ...passed 00:07:38.099 Test: blob_rw_iov_read_only ...passed 00:07:38.099 Test: blob_xattr ...passed 00:07:38.359 Test: blob_dirty_shutdown ...passed 00:07:38.359 Test: blob_is_degraded ...passed 00:07:38.359 Suite: blob_esnap_bs_nocopy_extent 00:07:38.359 Test: blob_esnap_create ...passed 00:07:38.359 Test: blob_esnap_thread_add_remove ...passed 00:07:38.617 Test: blob_esnap_clone_snapshot ...passed 00:07:38.617 Test: blob_esnap_clone_inflate ...passed 00:07:38.617 Test: blob_esnap_clone_decouple ...passed 00:07:38.617 Test: blob_esnap_clone_reload ...passed 00:07:38.617 Test: blob_esnap_hotplug ...passed 00:07:38.876 Test: blob_set_parent ...[2024-07-25 18:34:39.207305] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:07:38.876 [2024-07-25 18:34:39.207436] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:07:38.876 [2024-07-25 18:34:39.207543] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:07:38.876 
[2024-07-25 18:34:39.207577] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:07:38.876 [2024-07-25 18:34:39.207982] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:38.876 passed 00:07:38.876 Test: blob_set_external_parent ...[2024-07-25 18:34:39.263270] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:07:38.876 [2024-07-25 18:34:39.263358] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:07:38.876 [2024-07-25 18:34:39.263400] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:07:38.876 [2024-07-25 18:34:39.263742] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:38.876 passed 00:07:38.876 Suite: blob_copy_noextent 00:07:38.876 Test: blob_init ...[2024-07-25 18:34:39.282645] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:38.876 passed 00:07:38.876 Test: blob_thin_provision ...passed 00:07:38.876 Test: blob_read_only ...passed 00:07:38.876 Test: bs_load ...[2024-07-25 18:34:39.357720] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:38.876 passed 00:07:38.876 Test: bs_load_custom_cluster_size ...passed 00:07:38.876 Test: bs_load_after_failed_grow ...passed 00:07:38.876 Test: bs_cluster_sz ...[2024-07-25 18:34:39.396848] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:38.876 [2024-07-25 18:34:39.397072] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:38.876 [2024-07-25 18:34:39.397110] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:38.876 passed 00:07:38.876 Test: bs_resize_md ...passed 00:07:39.136 Test: bs_destroy ...passed 00:07:39.136 Test: bs_type ...passed 00:07:39.136 Test: bs_super_block ...passed 00:07:39.136 Test: bs_test_recover_cluster_count ...passed 00:07:39.136 Test: bs_grow_live ...passed 00:07:39.136 Test: bs_grow_live_no_space ...passed 00:07:39.136 Test: bs_test_grow ...passed 00:07:39.136 Test: blob_serialize_test ...passed 00:07:39.136 Test: super_block_crc ...passed 00:07:39.136 Test: blob_thin_prov_write_count_io ...passed 00:07:39.136 Test: blob_thin_prov_unmap_cluster ...passed 00:07:39.136 Test: bs_load_iter_test ...passed 00:07:39.136 Test: blob_relations ...[2024-07-25 18:34:39.676268] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:39.136 [2024-07-25 18:34:39.676368] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:39.136 [2024-07-25 18:34:39.676913] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:39.136 [2024-07-25 18:34:39.676946] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:39.136 passed 00:07:39.136 Test: blob_relations2 ...[2024-07-25 18:34:39.697110] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:39.136 [2024-07-25 18:34:39.697186] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:39.136 [2024-07-25 18:34:39.697230] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:39.136 [2024-07-25 18:34:39.697245] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:39.136 [2024-07-25 18:34:39.698126] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:39.136 [2024-07-25 18:34:39.698175] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:39.136 [2024-07-25 18:34:39.698437] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:39.136 [2024-07-25 18:34:39.698471] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:39.136 passed 00:07:39.395 Test: blob_relations3 ...passed 00:07:39.395 Test: blobstore_clean_power_failure ...passed 00:07:39.395 Test: blob_delete_snapshot_power_failure ...[2024-07-25 18:34:39.959395] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:39.654 [2024-07-25 18:34:39.978705] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:39.654 [2024-07-25 18:34:39.978787] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:39.654 [2024-07-25 18:34:39.978830] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:39.654 [2024-07-25 18:34:39.998124] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:39.654 [2024-07-25 18:34:39.998208] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:39.654 [2024-07-25 18:34:39.998232] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:39.654 [2024-07-25 18:34:39.998267] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:39.654 [2024-07-25 18:34:40.017858] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:39.654 [2024-07-25 18:34:40.017975] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:39.654 [2024-07-25 18:34:40.037389] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:39.654 [2024-07-25 18:34:40.037542] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:39.654 [2024-07-25 18:34:40.056900] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:39.654 [2024-07-25 18:34:40.056997] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:39.654 passed 00:07:39.654 Test: blob_create_snapshot_power_failure ...[2024-07-25 18:34:40.114375] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:39.654 [2024-07-25 18:34:40.152001] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:39.654 [2024-07-25 18:34:40.171106] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:39.913 passed 00:07:39.913 Test: blob_io_unit ...passed 00:07:39.913 Test: blob_io_unit_compatibility ...passed 00:07:39.913 Test: blob_ext_md_pages ...passed 00:07:39.913 Test: blob_esnap_io_4096_4096 ...passed 00:07:39.914 Test: blob_esnap_io_512_512 ...passed 00:07:39.914 Test: blob_esnap_io_4096_512 ...passed 00:07:39.914 Test: blob_esnap_io_512_4096 ...passed 00:07:40.187 Test: blob_esnap_clone_resize ...passed 00:07:40.187 Suite: blob_bs_copy_noextent 00:07:40.187 Test: blob_open ...passed 00:07:40.187 Test: blob_create ...[2024-07-25 18:34:40.587879] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:40.187 passed 00:07:40.187 Test: blob_create_loop ...passed 00:07:40.187 Test: blob_create_fail ...[2024-07-25 18:34:40.720261] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:40.187 passed 00:07:40.474 Test: blob_create_internal ...passed 00:07:40.474 Test: blob_create_zero_extent ...passed 00:07:40.474 Test: blob_snapshot ...passed 00:07:40.474 Test: blob_clone ...passed 00:07:40.474 Test: blob_inflate 
...[2024-07-25 18:34:41.003625] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:40.474 passed 00:07:40.733 Test: blob_delete ...passed 00:07:40.733 Test: blob_resize_test ...[2024-07-25 18:34:41.110832] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:40.733 passed 00:07:40.733 Test: blob_resize_thin_test ...passed 00:07:40.733 Test: channel_ops ...passed 00:07:40.733 Test: blob_super ...passed 00:07:40.992 Test: blob_rw_verify_iov ...passed 00:07:40.992 Test: blob_unmap ...passed 00:07:40.992 Test: blob_iter ...passed 00:07:40.992 Test: blob_parse_md ...passed 00:07:41.251 Test: bs_load_pending_removal ...passed 00:07:41.251 Test: bs_unload ...[2024-07-25 18:34:41.615965] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:41.251 passed 00:07:41.251 Test: bs_usable_clusters ...passed 00:07:41.251 Test: blob_crc ...[2024-07-25 18:34:41.726434] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:41.251 [2024-07-25 18:34:41.726591] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:41.251 passed 00:07:41.251 Test: blob_flags ...passed 00:07:41.510 Test: bs_version ...passed 00:07:41.510 Test: blob_set_xattrs_test ...[2024-07-25 18:34:41.892780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:41.510 [2024-07-25 18:34:41.892920] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:41.510 passed 00:07:41.510 Test: blob_thin_prov_alloc ...passed 00:07:41.770 Test: blob_insert_cluster_msg_test ...passed 00:07:41.770 Test: blob_thin_prov_rw ...passed 00:07:41.770 Test: blob_thin_prov_rle ...passed 00:07:41.770 Test: blob_thin_prov_rw_iov ...passed 00:07:42.029 Test: blob_snapshot_rw ...passed 00:07:42.029 Test: blob_snapshot_rw_iov ...passed 00:07:42.288 Test: blob_inflate_rw ...passed 00:07:42.288 Test: blob_snapshot_freeze_io ...passed 00:07:42.288 Test: blob_operation_split_rw ...passed 00:07:42.547 Test: blob_operation_split_rw_iov ...passed 00:07:42.547 Test: blob_simultaneous_operations ...[2024-07-25 18:34:43.028221] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:42.547 [2024-07-25 18:34:43.028294] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:42.547 [2024-07-25 18:34:43.028828] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:42.547 [2024-07-25 18:34:43.028877] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:42.547 [2024-07-25 18:34:43.032074] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:42.547 [2024-07-25 18:34:43.032122] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:42.547 [2024-07-25 18:34:43.032229] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:42.547 [2024-07-25 18:34:43.032245] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:42.547 passed 00:07:42.806 Test: blob_persist_test ...passed 00:07:42.806 Test: blob_decouple_snapshot ...passed 00:07:42.806 Test: blob_seek_io_unit ...passed 00:07:42.806 Test: blob_nested_freezes ...passed 00:07:42.806 Test: blob_clone_resize ...passed 00:07:43.064 Test: blob_shallow_copy ...[2024-07-25 18:34:43.407011] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:07:43.064 [2024-07-25 18:34:43.407400] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:07:43.064 [2024-07-25 18:34:43.407662] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:07:43.064 passed 00:07:43.064 Suite: blob_blob_copy_noextent 00:07:43.064 Test: blob_write ...passed 00:07:43.064 Test: blob_read ...passed 00:07:43.064 Test: blob_rw_verify ...passed 00:07:43.323 Test: blob_rw_verify_iov_nomem ...passed 00:07:43.323 Test: blob_rw_iov_read_only ...passed 00:07:43.323 Test: blob_xattr ...passed 00:07:43.323 Test: blob_dirty_shutdown ...passed 00:07:43.323 Test: blob_is_degraded ...passed 00:07:43.323 Suite: blob_esnap_bs_copy_noextent 00:07:43.582 Test: blob_esnap_create ...passed 00:07:43.582 Test: blob_esnap_thread_add_remove ...passed 00:07:43.582 Test: blob_esnap_clone_snapshot ...passed 00:07:43.582 Test: blob_esnap_clone_inflate ...passed 00:07:43.842 Test: blob_esnap_clone_decouple ...passed 00:07:43.842 Test: blob_esnap_clone_reload ...passed 00:07:43.842 Test: blob_esnap_hotplug ...passed 00:07:43.842 Test: blob_set_parent ...[2024-07-25 18:34:44.311459] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:07:43.842 [2024-07-25 18:34:44.311573] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:07:43.842 [2024-07-25 18:34:44.311684] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:07:43.842 [2024-07-25 18:34:44.311727] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:07:43.842 [2024-07-25 18:34:44.312098] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:43.842 passed 00:07:43.842 Test: blob_set_external_parent ...[2024-07-25 18:34:44.367223] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:07:43.842 [2024-07-25 18:34:44.367340] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:07:43.842 [2024-07-25 18:34:44.367382] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: 
external snapshot is already the parent of blob 00:07:43.842 [2024-07-25 18:34:44.367692] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:43.842 passed 00:07:43.842 Suite: blob_copy_extent 00:07:43.842 Test: blob_init ...[2024-07-25 18:34:44.386245] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:43.842 passed 00:07:44.101 Test: blob_thin_provision ...passed 00:07:44.101 Test: blob_read_only ...passed 00:07:44.101 Test: bs_load ...[2024-07-25 18:34:44.460541] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:44.101 passed 00:07:44.101 Test: bs_load_custom_cluster_size ...passed 00:07:44.101 Test: bs_load_after_failed_grow ...passed 00:07:44.101 Test: bs_cluster_sz ...[2024-07-25 18:34:44.499141] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:44.101 [2024-07-25 18:34:44.499373] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:07:44.101 [2024-07-25 18:34:44.499412] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:44.101 passed 00:07:44.101 Test: bs_resize_md ...passed 00:07:44.101 Test: bs_destroy ...passed 00:07:44.101 Test: bs_type ...passed 00:07:44.101 Test: bs_super_block ...passed 00:07:44.101 Test: bs_test_recover_cluster_count ...passed 00:07:44.101 Test: bs_grow_live ...passed 00:07:44.101 Test: bs_grow_live_no_space ...passed 00:07:44.101 Test: bs_test_grow ...passed 00:07:44.101 Test: blob_serialize_test ...passed 00:07:44.101 Test: super_block_crc ...passed 00:07:44.360 Test: blob_thin_prov_write_count_io ...passed 00:07:44.360 Test: blob_thin_prov_unmap_cluster ...passed 00:07:44.360 Test: bs_load_iter_test ...passed 00:07:44.360 Test: blob_relations ...[2024-07-25 18:34:44.771254] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:44.360 [2024-07-25 18:34:44.771394] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.360 [2024-07-25 18:34:44.772003] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:44.360 [2024-07-25 18:34:44.772051] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.360 passed 00:07:44.360 Test: blob_relations2 ...[2024-07-25 18:34:44.792513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:44.360 [2024-07-25 18:34:44.792595] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.360 [2024-07-25 18:34:44.792644] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:44.361 [2024-07-25 18:34:44.792662] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.361 [2024-07-25 
18:34:44.793567] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:44.361 [2024-07-25 18:34:44.793612] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.361 [2024-07-25 18:34:44.793912] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:44.361 [2024-07-25 18:34:44.793953] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.361 passed 00:07:44.361 Test: blob_relations3 ...passed 00:07:44.620 Test: blobstore_clean_power_failure ...passed 00:07:44.620 Test: blob_delete_snapshot_power_failure ...[2024-07-25 18:34:45.050247] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:44.620 [2024-07-25 18:34:45.069743] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:44.620 [2024-07-25 18:34:45.089065] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:44.620 [2024-07-25 18:34:45.089149] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:44.620 [2024-07-25 18:34:45.089191] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.620 [2024-07-25 18:34:45.108576] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:44.620 [2024-07-25 18:34:45.108673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:44.620 [2024-07-25 18:34:45.108697] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:44.620 [2024-07-25 18:34:45.108723] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.620 [2024-07-25 18:34:45.128105] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:44.620 [2024-07-25 18:34:45.130991] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:44.620 [2024-07-25 18:34:45.131050] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:44.620 [2024-07-25 18:34:45.131092] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.620 [2024-07-25 18:34:45.150390] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:44.620 [2024-07-25 18:34:45.150492] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.620 [2024-07-25 18:34:45.169783] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:44.620 [2024-07-25 18:34:45.169898] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.620 [2024-07-25 18:34:45.189188] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:44.620 [2024-07-25 18:34:45.189295] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.880 passed 00:07:44.880 Test: blob_create_snapshot_power_failure ...[2024-07-25 18:34:45.246655] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:44.880 [2024-07-25 18:34:45.265664] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:44.880 [2024-07-25 18:34:45.303331] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:44.880 [2024-07-25 18:34:45.322631] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:44.880 passed 00:07:44.880 Test: blob_io_unit ...passed 00:07:44.880 Test: blob_io_unit_compatibility ...passed 00:07:44.880 Test: blob_ext_md_pages ...passed 00:07:45.139 Test: blob_esnap_io_4096_4096 ...passed 00:07:45.139 Test: blob_esnap_io_512_512 ...passed 00:07:45.139 Test: blob_esnap_io_4096_512 ...passed 00:07:45.139 Test: blob_esnap_io_512_4096 ...passed 00:07:45.139 Test: blob_esnap_clone_resize ...passed 00:07:45.139 Suite: blob_bs_copy_extent 00:07:45.139 Test: blob_open ...passed 00:07:45.398 Test: blob_create ...[2024-07-25 18:34:45.740116] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:45.398 passed 00:07:45.398 Test: blob_create_loop ...passed 00:07:45.398 Test: blob_create_fail ...[2024-07-25 18:34:45.878094] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:45.398 passed 00:07:45.398 Test: blob_create_internal ...passed 00:07:45.657 Test: blob_create_zero_extent ...passed 00:07:45.657 Test: blob_snapshot ...passed 00:07:45.657 Test: blob_clone ...passed 00:07:45.657 Test: blob_inflate ...[2024-07-25 18:34:46.160476] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
00:07:45.657 passed 00:07:45.916 Test: blob_delete ...passed 00:07:45.916 Test: blob_resize_test ...[2024-07-25 18:34:46.266691] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:45.916 passed 00:07:45.916 Test: blob_resize_thin_test ...passed 00:07:45.916 Test: channel_ops ...passed 00:07:45.916 Test: blob_super ...passed 00:07:46.175 Test: blob_rw_verify_iov ...passed 00:07:46.175 Test: blob_unmap ...passed 00:07:46.175 Test: blob_iter ...passed 00:07:46.175 Test: blob_parse_md ...passed 00:07:46.175 Test: bs_load_pending_removal ...passed 00:07:46.434 Test: bs_unload ...[2024-07-25 18:34:46.763931] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:46.434 passed 00:07:46.434 Test: bs_usable_clusters ...passed 00:07:46.434 Test: blob_crc ...[2024-07-25 18:34:46.873991] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:46.434 [2024-07-25 18:34:46.874126] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:46.434 passed 00:07:46.434 Test: blob_flags ...passed 00:07:46.434 Test: bs_version ...passed 00:07:46.693 Test: blob_set_xattrs_test ...[2024-07-25 18:34:47.039558] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:46.693 [2024-07-25 18:34:47.039703] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:46.693 passed 00:07:46.693 Test: blob_thin_prov_alloc ...passed 00:07:46.693 Test: blob_insert_cluster_msg_test ...passed 00:07:46.952 Test: blob_thin_prov_rw ...passed 00:07:46.952 Test: blob_thin_prov_rle ...passed 00:07:46.952 Test: blob_thin_prov_rw_iov ...passed 00:07:46.952 Test: blob_snapshot_rw ...passed 00:07:47.211 Test: blob_snapshot_rw_iov ...passed 00:07:47.211 Test: blob_inflate_rw ...passed 00:07:47.469 Test: blob_snapshot_freeze_io ...passed 00:07:47.470 Test: blob_operation_split_rw ...passed 00:07:47.728 Test: blob_operation_split_rw_iov ...passed 00:07:47.728 Test: blob_simultaneous_operations ...[2024-07-25 18:34:48.152609] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:47.728 [2024-07-25 18:34:48.152695] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.728 [2024-07-25 18:34:48.153246] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:47.728 [2024-07-25 18:34:48.153299] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.728 [2024-07-25 18:34:48.156455] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:47.728 [2024-07-25 18:34:48.156508] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.728 [2024-07-25 18:34:48.156606] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:47.728 [2024-07-25 18:34:48.156623] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.728 passed 00:07:47.728 Test: blob_persist_test ...passed 00:07:47.987 Test: blob_decouple_snapshot ...passed 00:07:47.987 Test: blob_seek_io_unit ...passed 00:07:47.987 Test: blob_nested_freezes ...passed 00:07:47.987 Test: blob_clone_resize ...passed 00:07:47.987 Test: blob_shallow_copy ...[2024-07-25 18:34:48.534461] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:07:47.987 [2024-07-25 18:34:48.534837] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:07:47.987 [2024-07-25 18:34:48.535119] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:07:47.987 passed 00:07:47.987 Suite: blob_blob_copy_extent 00:07:48.246 Test: blob_write ...passed 00:07:48.246 Test: blob_read ...passed 00:07:48.246 Test: blob_rw_verify ...passed 00:07:48.246 Test: blob_rw_verify_iov_nomem ...passed 00:07:48.504 Test: blob_rw_iov_read_only ...passed 00:07:48.504 Test: blob_xattr ...passed 00:07:48.504 Test: blob_dirty_shutdown ...passed 00:07:48.504 Test: blob_is_degraded ...passed 00:07:48.504 Suite: blob_esnap_bs_copy_extent 00:07:48.504 Test: blob_esnap_create ...passed 00:07:48.763 Test: blob_esnap_thread_add_remove ...passed 00:07:48.763 Test: blob_esnap_clone_snapshot ...passed 00:07:48.763 Test: blob_esnap_clone_inflate ...passed 00:07:48.763 Test: blob_esnap_clone_decouple ...passed 00:07:49.022 Test: blob_esnap_clone_reload ...passed 00:07:49.022 Test: blob_esnap_hotplug ...passed 00:07:49.022 Test: blob_set_parent ...[2024-07-25 18:34:49.441616] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:07:49.022 [2024-07-25 18:34:49.441739] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:07:49.022 [2024-07-25 18:34:49.441885] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:07:49.022 [2024-07-25 18:34:49.441927] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:07:49.022 [2024-07-25 18:34:49.442399] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:49.022 passed 00:07:49.022 Test: blob_set_external_parent ...[2024-07-25 18:34:49.498165] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:07:49.022 [2024-07-25 18:34:49.498304] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:07:49.022 [2024-07-25 18:34:49.498351] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:07:49.022 [2024-07-25 18:34:49.498777] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:49.022 passed 00:07:49.022 00:07:49.022 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.022 suites 16 16 n/a 0 0 00:07:49.022 tests 376 376 376 0 0 00:07:49.022 asserts 143973 143973 143973 0 n/a 00:07:49.022 00:07:49.023 Elapsed time = 20.743 seconds 00:07:49.282 18:34:49 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:07:49.282 00:07:49.282 00:07:49.282 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.282 http://cunit.sourceforge.net/ 00:07:49.282 00:07:49.282 00:07:49.282 Suite: blob_bdev 00:07:49.282 Test: create_bs_dev ...passed 00:07:49.282 Test: create_bs_dev_ro ...[2024-07-25 18:34:49.646997] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:07:49.282 passed 00:07:49.282 Test: create_bs_dev_rw ...passed 00:07:49.282 Test: claim_bs_dev ...[2024-07-25 18:34:49.647617] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:07:49.282 passed 00:07:49.282 Test: claim_bs_dev_ro ...passed 00:07:49.282 Test: deferred_destroy_refs ...passed 00:07:49.282 Test: deferred_destroy_channels ...passed 00:07:49.282 Test: deferred_destroy_threads ...passed 00:07:49.282 00:07:49.282 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.282 suites 1 1 n/a 0 0 00:07:49.282 tests 8 8 8 0 0 00:07:49.282 asserts 119 119 119 0 n/a 00:07:49.282 00:07:49.282 Elapsed time = 0.001 seconds 00:07:49.282 18:34:49 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:07:49.282 00:07:49.282 00:07:49.282 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.282 http://cunit.sourceforge.net/ 00:07:49.282 00:07:49.282 00:07:49.282 Suite: tree 00:07:49.282 Test: blobfs_tree_op_test ...passed 00:07:49.282 00:07:49.282 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.282 suites 1 1 n/a 0 0 00:07:49.282 tests 1 1 1 0 0 00:07:49.282 asserts 27 27 27 0 n/a 00:07:49.282 00:07:49.282 Elapsed time = 0.000 seconds 00:07:49.282 18:34:49 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:07:49.282 00:07:49.282 00:07:49.282 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.282 http://cunit.sourceforge.net/ 00:07:49.282 00:07:49.282 00:07:49.282 Suite: blobfs_async_ut 00:07:49.282 Test: fs_init ...passed 00:07:49.541 Test: fs_open ...passed 00:07:49.541 Test: fs_create ...passed 00:07:49.541 Test: fs_truncate ...passed 00:07:49.541 Test: fs_rename ...[2024-07-25 18:34:49.940104] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:07:49.541 passed 00:07:49.541 Test: fs_rw_async ...passed 00:07:49.541 Test: fs_writev_readv_async ...passed 00:07:49.541 Test: tree_find_buffer_ut ...passed 00:07:49.541 Test: channel_ops ...passed 00:07:49.541 Test: channel_ops_sync ...passed 00:07:49.541 00:07:49.541 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.541 suites 1 1 n/a 0 0 00:07:49.541 tests 10 10 10 0 0 00:07:49.541 asserts 292 292 292 0 n/a 00:07:49.541 00:07:49.541 Elapsed time = 0.275 seconds 00:07:49.541 18:34:50 unittest.unittest_blob_blobfs -- 
unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:07:49.800 00:07:49.800 00:07:49.800 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.800 http://cunit.sourceforge.net/ 00:07:49.800 00:07:49.800 00:07:49.800 Suite: blobfs_sync_ut 00:07:49.800 Test: cache_read_after_write ...[2024-07-25 18:34:50.210038] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:07:49.800 passed 00:07:49.800 Test: file_length ...passed 00:07:49.800 Test: append_write_to_extend_blob ...passed 00:07:49.800 Test: partial_buffer ...passed 00:07:49.800 Test: cache_write_null_buffer ...passed 00:07:49.800 Test: fs_create_sync ...passed 00:07:49.800 Test: fs_rename_sync ...passed 00:07:50.059 Test: cache_append_no_cache ...passed 00:07:50.059 Test: fs_delete_file_without_close ...passed 00:07:50.059 00:07:50.059 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.059 suites 1 1 n/a 0 0 00:07:50.059 tests 9 9 9 0 0 00:07:50.059 asserts 345 345 345 0 n/a 00:07:50.059 00:07:50.059 Elapsed time = 0.541 seconds 00:07:50.059 18:34:50 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:07:50.059 00:07:50.059 00:07:50.059 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.059 http://cunit.sourceforge.net/ 00:07:50.059 00:07:50.059 00:07:50.059 Suite: blobfs_bdev_ut 00:07:50.059 Test: spdk_blobfs_bdev_detect_test ...[2024-07-25 18:34:50.468098] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:07:50.059 passed 00:07:50.059 Test: spdk_blobfs_bdev_create_test ...passed 00:07:50.059 Test: spdk_blobfs_bdev_mount_test ...passed 00:07:50.059 00:07:50.059 [2024-07-25 18:34:50.468484] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:07:50.059 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.059 suites 1 1 n/a 0 0 00:07:50.059 tests 3 3 3 0 0 00:07:50.059 asserts 9 9 9 0 n/a 00:07:50.059 00:07:50.059 Elapsed time = 0.001 seconds 00:07:50.059 00:07:50.059 real 0m21.744s 00:07:50.059 user 0m21.007s 00:07:50.059 sys 0m1.029s 00:07:50.059 18:34:50 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.059 18:34:50 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:07:50.059 ************************************ 00:07:50.059 END TEST unittest_blob_blobfs 00:07:50.059 ************************************ 00:07:50.059 18:34:50 unittest -- unit/unittest.sh@234 -- # run_test unittest_event unittest_event 00:07:50.059 18:34:50 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.059 18:34:50 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.059 18:34:50 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:50.059 ************************************ 00:07:50.059 START TEST unittest_event 00:07:50.059 ************************************ 00:07:50.059 18:34:50 unittest.unittest_event -- common/autotest_common.sh@1125 -- # unittest_event 00:07:50.059 18:34:50 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:07:50.059 00:07:50.059 00:07:50.059 CUnit - A unit testing framework for C - Version 2.1-3 
00:07:50.059 http://cunit.sourceforge.net/ 00:07:50.059 00:07:50.059 00:07:50.059 Suite: app_suite 00:07:50.059 Test: test_spdk_app_parse_args ...app_ut [options] 00:07:50.059 00:07:50.059 CPU options: 00:07:50.059 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:50.059 (like [0,1,10]) 00:07:50.059 --lcores lcore to CPU mapping list. The list is in the format:app_ut: invalid option -- 'z' 00:07:50.059 00:07:50.059 [<,lcores[@CPUs]>...] 00:07:50.059 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:50.059 Within the group, '-' is used for range separator, 00:07:50.059 ',' is used for single number separator. 00:07:50.059 '( )' can be omitted for single element group, 00:07:50.059 '@' can be omitted if cpus and lcores have the same value 00:07:50.059 --disable-cpumask-locks Disable CPU core lock files. 00:07:50.059 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:50.059 pollers in the app support interrupt mode) 00:07:50.059 -p, --main-core main (primary) core for DPDK 00:07:50.059 00:07:50.059 Configuration options: 00:07:50.059 -c, --config, --json JSON config file 00:07:50.059 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:50.059 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:50.059 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:50.059 --rpcs-allowed comma-separated list of permitted RPCS 00:07:50.059 --json-ignore-init-errors don't exit on invalid config entry 00:07:50.059 00:07:50.059 Memory options: 00:07:50.059 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:50.059 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:50.059 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:50.059 -R, --huge-unlink unlink huge files after initialization 00:07:50.059 -n, --mem-channels number of memory channels used for DPDK 00:07:50.059 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:50.059 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:50.059 --no-huge run without using hugepages 00:07:50.059 -i, --shm-id shared memory ID (optional) 00:07:50.059 -g, --single-file-segments force creating just one hugetlbfs file 00:07:50.059 00:07:50.059 PCI options: 00:07:50.059 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:50.059 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:50.059 -u, --no-pci disable PCI access 00:07:50.059 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:50.059 00:07:50.059 Log options: 00:07:50.059 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:07:50.059 --silence-noticelog disable notice level logging to stderr 00:07:50.059 00:07:50.059 Trace options: 00:07:50.059 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:50.059 setting 0 to disable trace (default 32768) 00:07:50.059 Tracepoints vary in size and can use more than one trace entry. 00:07:50.059 -e, --tpoint-group [:] 00:07:50.059 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:07:50.059 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:50.059 a tracepoint group. First tpoint inside a group can be enabled by 00:07:50.060 setting tpoint_mask to 1 (e.g. bdev:0x1). 
Groups and masks can be 00:07:50.060 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:50.060 in /include/spdk_internal/trace_defs.h 00:07:50.060 00:07:50.060 Other options: 00:07:50.060 -h, --help show this usage 00:07:50.060 -v, --version print SPDK version 00:07:50.060 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:50.060 --env-context Opaque context for use of the env implementation 00:07:50.060 app_ut [options] 00:07:50.060 00:07:50.060 CPU options: 00:07:50.060 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:50.060 (like [0,1,10]) 00:07:50.060 --lcores lcore to CPU mapping list. The list is in the format: 00:07:50.060 [<,lcores[@CPUs]>...] 00:07:50.060 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:50.060 Within the group, '-' is used for range separator, 00:07:50.060 ',' is used for single number separator. 00:07:50.060 '( )' can be omitted for single element group, 00:07:50.060 '@' can be omitted if cpus and lcores have the same value 00:07:50.060 --disable-cpumask-locks Disable CPU core lock files. 00:07:50.060 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:50.060 pollers in the app support interrupt mode) 00:07:50.060 -p, --main-core main (primary) core for DPDK 00:07:50.060 00:07:50.060 Configuration options: 00:07:50.060 -c, --config, --json JSON config file 00:07:50.060 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:50.060 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:50.060 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:50.060 --rpcs-allowed comma-separated list of permitted RPCS 00:07:50.060 --json-ignore-init-errors don't exit on invalid config entry 00:07:50.060 00:07:50.060 Memory options: 00:07:50.060 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:50.060 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:50.060 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:50.060 -R, --huge-unlink unlink huge files after initialization 00:07:50.060 -n, --mem-channels number of memory channels used for DPDK 00:07:50.060 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:50.060 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:50.060 --no-huge run without using hugepages 00:07:50.060 -i, --shm-id shared memory ID (optional) 00:07:50.060 -g, --single-file-segments force creating just one hugetlbfs file 00:07:50.060 00:07:50.060 PCI options: 00:07:50.060 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:50.060 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:50.060 -u, --no-pci disable PCI access 00:07:50.060 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:50.060 00:07:50.060 Log options: 00:07:50.060 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:07:50.060 --silence-noticelog disable notice level logging to stderr 00:07:50.060 00:07:50.060 Trace options: 00:07:50.060 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:50.060 setting 0 to disable trace (default 32768) 00:07:50.060 Tracepoints vary in size and can use more than one trace entry. 
00:07:50.060 -e, --tpoint-group [:] 00:07:50.060 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:07:50.060 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:50.060 a tracepoint group. First tpoint inside a group can be enabled by 00:07:50.060 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:50.060 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:50.060 in /include/spdk_internal/trace_defs.h 00:07:50.060 00:07:50.060 Other options: 00:07:50.060 -h, --help show this usage 00:07:50.060 -v, --version print SPDK version 00:07:50.060 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:50.060 --env-context Opaque context for use of the env implementation 00:07:50.060 app_ut: unrecognized option '--test-long-opt' 00:07:50.060 [2024-07-25 18:34:50.575680] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1192:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:07:50.060 [2024-07-25 18:34:50.575956] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1373:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:07:50.060 app_ut [options] 00:07:50.060 00:07:50.060 CPU options: 00:07:50.060 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:50.060 (like [0,1,10]) 00:07:50.060 --lcores lcore to CPU mapping list. The list is in the format: 00:07:50.060 [<,lcores[@CPUs]>...] 00:07:50.060 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:50.060 Within the group, '-' is used for range separator, 00:07:50.060 ',' is used for single number separator. 00:07:50.060 '( )' can be omitted for single element group, 00:07:50.060 '@' can be omitted if cpus and lcores have the same value 00:07:50.060 --disable-cpumask-locks Disable CPU core lock files. 00:07:50.060 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:50.060 pollers in the app support interrupt mode) 00:07:50.060 -p, --main-core main (primary) core for DPDK 00:07:50.060 00:07:50.060 Configuration options: 00:07:50.060 -c, --config, --json JSON config file 00:07:50.060 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:50.060 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:50.060 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:50.060 --rpcs-allowed comma-separated list of permitted RPCS 00:07:50.060 --json-ignore-init-errors don't exit on invalid config entry 00:07:50.060 00:07:50.060 Memory options: 00:07:50.060 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:50.060 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:50.060 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:50.060 -R, --huge-unlink unlink huge files after initialization 00:07:50.060 -n, --mem-channels number of memory channels used for DPDK 00:07:50.060 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:50.060 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:50.060 --no-huge run without using hugepages 00:07:50.060 -i, --shm-id shared memory ID (optional) 00:07:50.060 -g, --single-file-segments force creating just one hugetlbfs file 00:07:50.060 00:07:50.060 PCI options: 00:07:50.060 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:50.060 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:50.060 -u, --no-pci disable PCI access 00:07:50.060 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:50.060 00:07:50.060 Log options: 00:07:50.060 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:07:50.060 --silence-noticelog disable notice level logging to stderr 00:07:50.060 00:07:50.060 Trace options: 00:07:50.060 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:50.060 setting 0 to disable trace (default 32768) 00:07:50.060 Tracepoints vary in size and can use more than one trace entry. 00:07:50.060 -e, --tpoint-group [:] 00:07:50.060 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:07:50.060 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:50.060 a tracepoint group. First tpoint inside a group can be enabled by 00:07:50.060 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:50.060 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:50.060 in /include/spdk_internal/trace_defs.h 00:07:50.060 00:07:50.060 Other options: 00:07:50.060 -h, --help show this usage 00:07:50.060 -v, --version print SPDK version 00:07:50.060 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:50.060 --env-context Opaque context for use of the env implementation 00:07:50.060 passed 00:07:50.060 00:07:50.060 [2024-07-25 18:34:50.576178] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1278:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:07:50.060 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.060 suites 1 1 n/a 0 0 00:07:50.060 tests 1 1 1 0 0 00:07:50.060 asserts 8 8 8 0 n/a 00:07:50.060 00:07:50.060 Elapsed time = 0.001 seconds 00:07:50.060 18:34:50 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:07:50.060 00:07:50.060 00:07:50.060 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.060 http://cunit.sourceforge.net/ 00:07:50.060 00:07:50.060 00:07:50.060 Suite: app_suite 00:07:50.060 Test: test_create_reactor ...passed 00:07:50.060 Test: test_init_reactors ...passed 00:07:50.060 Test: test_event_call ...passed 00:07:50.060 Test: test_schedule_thread ...passed 00:07:50.061 Test: test_reschedule_thread ...passed 00:07:50.061 Test: test_bind_thread ...passed 00:07:50.061 Test: test_for_each_reactor ...passed 00:07:50.320 Test: test_reactor_stats ...passed 00:07:50.320 Test: test_scheduler ...passed 00:07:50.320 Test: test_governor ...passed 00:07:50.320 00:07:50.320 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.320 suites 1 1 n/a 0 0 00:07:50.320 tests 10 10 10 0 0 00:07:50.320 asserts 344 344 344 0 n/a 00:07:50.320 00:07:50.320 Elapsed time = 0.021 seconds 00:07:50.320 00:07:50.320 real 0m0.112s 00:07:50.320 user 0m0.043s 00:07:50.320 sys 0m0.069s 00:07:50.320 18:34:50 unittest.unittest_event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.320 18:34:50 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:07:50.320 ************************************ 00:07:50.320 END TEST unittest_event 00:07:50.320 ************************************ 00:07:50.320 18:34:50 unittest -- unit/unittest.sh@235 -- # uname -s 00:07:50.320 18:34:50 unittest -- unit/unittest.sh@235 -- # '[' Linux = Linux ']' 00:07:50.320 18:34:50 unittest -- unit/unittest.sh@236 -- # run_test unittest_ftl unittest_ftl 00:07:50.320 18:34:50 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.320 18:34:50 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.320 18:34:50 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:50.320 ************************************ 00:07:50.320 START TEST unittest_ftl 00:07:50.320 ************************************ 00:07:50.320 18:34:50 unittest.unittest_ftl -- common/autotest_common.sh@1125 -- # unittest_ftl 00:07:50.320 18:34:50 unittest.unittest_ftl -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:07:50.320 00:07:50.320 00:07:50.320 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.320 http://cunit.sourceforge.net/ 00:07:50.320 00:07:50.320 00:07:50.320 Suite: ftl_band_suite 00:07:50.320 Test: test_band_block_offset_from_addr_base ...passed 00:07:50.320 Test: test_band_block_offset_from_addr_offset ...passed 00:07:50.320 Test: test_band_addr_from_block_offset ...passed 00:07:50.320 Test: test_band_set_addr 
...passed 00:07:50.579 Test: test_invalidate_addr ...passed 00:07:50.579 Test: test_next_xfer_addr ...passed 00:07:50.579 00:07:50.579 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.579 suites 1 1 n/a 0 0 00:07:50.579 tests 6 6 6 0 0 00:07:50.579 asserts 30356 30356 30356 0 n/a 00:07:50.579 00:07:50.579 Elapsed time = 0.172 seconds 00:07:50.579 18:34:51 unittest.unittest_ftl -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:07:50.579 00:07:50.579 00:07:50.579 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.579 http://cunit.sourceforge.net/ 00:07:50.579 00:07:50.579 00:07:50.579 Suite: ftl_bitmap 00:07:50.579 Test: test_ftl_bitmap_create ...[2024-07-25 18:34:51.034798] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:07:50.579 passed 00:07:50.579 Test: test_ftl_bitmap_get ...[2024-07-25 18:34:51.035051] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:07:50.579 passed 00:07:50.579 Test: test_ftl_bitmap_set ...passed 00:07:50.579 Test: test_ftl_bitmap_clear ...passed 00:07:50.579 Test: test_ftl_bitmap_find_first_set ...passed 00:07:50.579 Test: test_ftl_bitmap_find_first_clear ...passed 00:07:50.579 Test: test_ftl_bitmap_count_set ...passed 00:07:50.579 00:07:50.579 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.579 suites 1 1 n/a 0 0 00:07:50.579 tests 7 7 7 0 0 00:07:50.579 asserts 137 137 137 0 n/a 00:07:50.579 00:07:50.579 Elapsed time = 0.001 seconds 00:07:50.579 18:34:51 unittest.unittest_ftl -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:07:50.579 00:07:50.579 00:07:50.579 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.579 http://cunit.sourceforge.net/ 00:07:50.579 00:07:50.579 00:07:50.579 Suite: ftl_io_suite 00:07:50.579 Test: test_completion ...passed 00:07:50.579 Test: test_multiple_ios ...passed 00:07:50.579 00:07:50.579 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.579 suites 1 1 n/a 0 0 00:07:50.579 tests 2 2 2 0 0 00:07:50.579 asserts 47 47 47 0 n/a 00:07:50.579 00:07:50.579 Elapsed time = 0.004 seconds 00:07:50.579 18:34:51 unittest.unittest_ftl -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:07:50.579 00:07:50.579 00:07:50.579 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.579 http://cunit.sourceforge.net/ 00:07:50.579 00:07:50.579 00:07:50.579 Suite: ftl_mngt 00:07:50.579 Test: test_next_step ...passed 00:07:50.579 Test: test_continue_step ...passed 00:07:50.579 Test: test_get_func_and_step_cntx_alloc ...passed 00:07:50.579 Test: test_fail_step ...passed 00:07:50.579 Test: test_mngt_call_and_call_rollback ...passed 00:07:50.579 Test: test_nested_process_failure ...passed 00:07:50.579 Test: test_call_init_success ...passed 00:07:50.579 Test: test_call_init_failure ...passed 00:07:50.579 00:07:50.579 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.579 suites 1 1 n/a 0 0 00:07:50.579 tests 8 8 8 0 0 00:07:50.579 asserts 196 196 196 0 n/a 00:07:50.579 00:07:50.579 Elapsed time = 0.001 seconds 00:07:50.579 18:34:51 unittest.unittest_ftl -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:07:50.579 00:07:50.579 00:07:50.579 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.579 
http://cunit.sourceforge.net/ 00:07:50.579 00:07:50.579 00:07:50.579 Suite: ftl_mempool 00:07:50.579 Test: test_ftl_mempool_create ...passed 00:07:50.579 Test: test_ftl_mempool_get_put ...passed 00:07:50.579 00:07:50.579 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.579 suites 1 1 n/a 0 0 00:07:50.579 tests 2 2 2 0 0 00:07:50.579 asserts 36 36 36 0 n/a 00:07:50.579 00:07:50.579 Elapsed time = 0.000 seconds 00:07:50.838 18:34:51 unittest.unittest_ftl -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:07:50.838 00:07:50.838 00:07:50.838 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.838 http://cunit.sourceforge.net/ 00:07:50.838 00:07:50.838 00:07:50.838 Suite: ftl_addr64_suite 00:07:50.838 Test: test_addr_cached ...passed 00:07:50.838 00:07:50.838 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.838 suites 1 1 n/a 0 0 00:07:50.838 tests 1 1 1 0 0 00:07:50.838 asserts 1536 1536 1536 0 n/a 00:07:50.838 00:07:50.838 Elapsed time = 0.000 seconds 00:07:50.838 18:34:51 unittest.unittest_ftl -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:07:50.838 00:07:50.838 00:07:50.838 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.838 http://cunit.sourceforge.net/ 00:07:50.838 00:07:50.838 00:07:50.838 Suite: ftl_sb 00:07:50.838 Test: test_sb_crc_v2 ...passed 00:07:50.838 Test: test_sb_crc_v3 ...passed 00:07:50.838 Test: test_sb_v3_md_layout ...[2024-07-25 18:34:51.212422] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:07:50.838 [2024-07-25 18:34:51.212778] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:50.838 [2024-07-25 18:34:51.212846] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:50.838 [2024-07-25 18:34:51.212898] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:50.838 [2024-07-25 18:34:51.212941] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:50.838 [2024-07-25 18:34:51.213038] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:07:50.838 [2024-07-25 18:34:51.213082] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:50.839 [2024-07-25 18:34:51.213146] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:50.839 [2024-07-25 18:34:51.213232] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:50.839 [2024-07-25 18:34:51.213287] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:07:50.839 [2024-07-25 18:34:51.213340] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions 
found 00:07:50.839 passed 00:07:50.839 Test: test_sb_v5_md_layout ...passed 00:07:50.839 00:07:50.839 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.839 suites 1 1 n/a 0 0 00:07:50.839 tests 4 4 4 0 0 00:07:50.839 asserts 160 160 160 0 n/a 00:07:50.839 00:07:50.839 Elapsed time = 0.002 seconds 00:07:50.839 18:34:51 unittest.unittest_ftl -- unit/unittest.sh@63 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:07:50.839 00:07:50.839 00:07:50.839 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.839 http://cunit.sourceforge.net/ 00:07:50.839 00:07:50.839 00:07:50.839 Suite: ftl_layout_upgrade 00:07:50.839 Test: test_l2p_upgrade ...passed 00:07:50.839 00:07:50.839 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.839 suites 1 1 n/a 0 0 00:07:50.839 tests 1 1 1 0 0 00:07:50.839 asserts 152 152 152 0 n/a 00:07:50.839 00:07:50.839 Elapsed time = 0.000 seconds 00:07:50.839 18:34:51 unittest.unittest_ftl -- unit/unittest.sh@64 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut 00:07:50.839 00:07:50.839 00:07:50.839 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.839 http://cunit.sourceforge.net/ 00:07:50.839 00:07:50.839 00:07:50.839 Suite: ftl_p2l_suite 00:07:50.839 Test: test_p2l_num_pages ...passed 00:07:51.406 Test: test_ckpt_issue ...passed 00:07:51.974 Test: test_persist_band_p2l ...passed 00:07:52.583 Test: test_clean_restore_p2l ...passed 00:07:53.970 Test: test_dirty_restore_p2l ...passed 00:07:53.970 00:07:53.970 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.970 suites 1 1 n/a 0 0 00:07:53.970 tests 5 5 5 0 0 00:07:53.970 asserts 10020 10020 10020 0 n/a 00:07:53.970 00:07:53.970 Elapsed time = 3.142 seconds 00:07:53.970 00:07:53.970 real 0m3.708s 00:07:53.970 user 0m1.057s 00:07:53.970 sys 0m2.655s 00:07:53.970 18:34:54 unittest.unittest_ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.970 18:34:54 unittest.unittest_ftl -- common/autotest_common.sh@10 -- # set +x 00:07:53.970 ************************************ 00:07:53.970 END TEST unittest_ftl 00:07:53.970 ************************************ 00:07:53.970 18:34:54 unittest -- unit/unittest.sh@239 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:53.970 18:34:54 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:53.970 18:34:54 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.970 18:34:54 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:53.970 ************************************ 00:07:53.970 START TEST unittest_accel 00:07:53.970 ************************************ 00:07:53.970 18:34:54 unittest.unittest_accel -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:54.230 00:07:54.230 00:07:54.230 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.230 http://cunit.sourceforge.net/ 00:07:54.230 00:07:54.230 00:07:54.230 Suite: accel_sequence 00:07:54.230 Test: test_sequence_fill_copy ...passed 00:07:54.230 Test: test_sequence_abort ...passed 00:07:54.230 Test: test_sequence_append_error ...passed 00:07:54.230 Test: test_sequence_completion_error ...[2024-07-25 18:34:54.559670] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1959:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f4cddefe7c0 00:07:54.230 [2024-07-25 18:34:54.560091] 
/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1959:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f4cddefe7c0 00:07:54.230 [2024-07-25 18:34:54.560209] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1869:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f4cddefe7c0 00:07:54.230 [2024-07-25 18:34:54.560278] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1869:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f4cddefe7c0 00:07:54.230 passed 00:07:54.230 Test: test_sequence_decompress ...passed 00:07:54.230 Test: test_sequence_reverse ...passed 00:07:54.230 Test: test_sequence_copy_elision ...passed 00:07:54.230 Test: test_sequence_accel_buffers ...passed 00:07:54.230 Test: test_sequence_memory_domain ...[2024-07-25 18:34:54.574255] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1761:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:07:54.230 passed 00:07:54.230 Test: test_sequence_module_memory_domain ...[2024-07-25 18:34:54.574494] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1800:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:07:54.230 passed 00:07:54.230 Test: test_sequence_crypto ...passed 00:07:54.230 Test: test_sequence_driver ...[2024-07-25 18:34:54.582657] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1908:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f4cdd1957c0 using driver: ut 00:07:54.230 [2024-07-25 18:34:54.582809] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1972:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f4cdd1957c0 through driver: ut 00:07:54.230 passed 00:07:54.230 Test: test_sequence_same_iovs ...passed 00:07:54.230 Test: test_sequence_crc32 ...passed 00:07:54.230 Suite: accel 00:07:54.230 Test: test_spdk_accel_task_complete ...passed 00:07:54.230 Test: test_get_task ...passed 00:07:54.230 Test: test_spdk_accel_submit_copy ...passed 00:07:54.230 Test: test_spdk_accel_submit_dualcast ...[2024-07-25 18:34:54.588878] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 425:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:54.230 [2024-07-25 18:34:54.588960] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 425:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:54.230 passed 00:07:54.230 Test: test_spdk_accel_submit_compare ...passed 00:07:54.230 Test: test_spdk_accel_submit_fill ...passed 00:07:54.230 Test: test_spdk_accel_submit_crc32c ...passed 00:07:54.230 Test: test_spdk_accel_submit_crc32cv ...passed 00:07:54.230 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:07:54.230 Test: test_spdk_accel_submit_xor ...passed 00:07:54.230 Test: test_spdk_accel_module_find_by_name ...passed 00:07:54.230 Test: test_spdk_accel_module_register ...passed 00:07:54.230 00:07:54.230 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.230 suites 2 2 n/a 0 0 00:07:54.230 tests 26 26 26 0 0 00:07:54.230 asserts 830 830 830 0 n/a 00:07:54.230 00:07:54.231 Elapsed time = 0.042 seconds 00:07:54.231 00:07:54.231 real 0m0.105s 00:07:54.231 user 0m0.044s 00:07:54.231 sys 0m0.061s 00:07:54.231 18:34:54 unittest.unittest_accel -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.231 18:34:54 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:07:54.231 ************************************ 00:07:54.231 END TEST unittest_accel 00:07:54.231 
************************************ 00:07:54.231 18:34:54 unittest -- unit/unittest.sh@240 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:54.231 18:34:54 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.231 18:34:54 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.231 18:34:54 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:54.231 ************************************ 00:07:54.231 START TEST unittest_ioat 00:07:54.231 ************************************ 00:07:54.231 18:34:54 unittest.unittest_ioat -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:54.231 00:07:54.231 00:07:54.231 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.231 http://cunit.sourceforge.net/ 00:07:54.231 00:07:54.231 00:07:54.231 Suite: ioat 00:07:54.231 Test: ioat_state_check ...passed 00:07:54.231 00:07:54.231 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.231 suites 1 1 n/a 0 0 00:07:54.231 tests 1 1 1 0 0 00:07:54.231 asserts 32 32 32 0 n/a 00:07:54.231 00:07:54.231 Elapsed time = 0.000 seconds 00:07:54.231 00:07:54.231 real 0m0.041s 00:07:54.231 user 0m0.013s 00:07:54.231 sys 0m0.028s 00:07:54.231 18:34:54 unittest.unittest_ioat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.231 ************************************ 00:07:54.231 END TEST unittest_ioat 00:07:54.231 18:34:54 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:07:54.231 ************************************ 00:07:54.491 18:34:54 unittest -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:54.491 18:34:54 unittest -- unit/unittest.sh@242 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:54.491 18:34:54 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.491 18:34:54 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.491 18:34:54 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:54.491 ************************************ 00:07:54.491 START TEST unittest_idxd_user 00:07:54.491 ************************************ 00:07:54.491 18:34:54 unittest.unittest_idxd_user -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:54.491 00:07:54.491 00:07:54.491 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.491 http://cunit.sourceforge.net/ 00:07:54.491 00:07:54.491 00:07:54.491 Suite: idxd_user 00:07:54.491 Test: test_idxd_wait_cmd ...[2024-07-25 18:34:54.850031] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:54.491 passed 00:07:54.491 Test: test_idxd_reset_dev ...[2024-07-25 18:34:54.850325] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:07:54.491 [2024-07-25 18:34:54.850478] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:54.491 [2024-07-25 18:34:54.850533] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:07:54.491 passed 00:07:54.491 Test: test_idxd_group_config ...passed 00:07:54.491 Test: test_idxd_wq_config ...passed 00:07:54.491 00:07:54.491 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.491 
suites 1 1 n/a 0 0 00:07:54.491 tests 4 4 4 0 0 00:07:54.491 asserts 20 20 20 0 n/a 00:07:54.491 00:07:54.491 Elapsed time = 0.001 seconds 00:07:54.491 00:07:54.491 real 0m0.040s 00:07:54.491 user 0m0.028s 00:07:54.491 sys 0m0.013s 00:07:54.491 18:34:54 unittest.unittest_idxd_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.491 18:34:54 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:07:54.491 ************************************ 00:07:54.491 END TEST unittest_idxd_user 00:07:54.491 ************************************ 00:07:54.491 18:34:54 unittest -- unit/unittest.sh@244 -- # run_test unittest_iscsi unittest_iscsi 00:07:54.491 18:34:54 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.491 18:34:54 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.491 18:34:54 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:54.491 ************************************ 00:07:54.491 START TEST unittest_iscsi 00:07:54.491 ************************************ 00:07:54.491 18:34:54 unittest.unittest_iscsi -- common/autotest_common.sh@1125 -- # unittest_iscsi 00:07:54.491 18:34:54 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:07:54.491 00:07:54.491 00:07:54.491 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.491 http://cunit.sourceforge.net/ 00:07:54.491 00:07:54.491 00:07:54.491 Suite: conn_suite 00:07:54.491 Test: read_task_split_in_order_case ...passed 00:07:54.491 Test: read_task_split_reverse_order_case ...passed 00:07:54.491 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:07:54.491 Test: process_non_read_task_completion_test ...passed 00:07:54.491 Test: free_tasks_on_connection ...passed 00:07:54.491 Test: free_tasks_with_queued_datain ...passed 00:07:54.491 Test: abort_queued_datain_task_test ...passed 00:07:54.491 Test: abort_queued_datain_tasks_test ...passed 00:07:54.491 00:07:54.491 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.491 suites 1 1 n/a 0 0 00:07:54.492 tests 8 8 8 0 0 00:07:54.492 asserts 230 230 230 0 n/a 00:07:54.492 00:07:54.492 Elapsed time = 0.000 seconds 00:07:54.492 18:34:54 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:07:54.492 00:07:54.492 00:07:54.492 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.492 http://cunit.sourceforge.net/ 00:07:54.492 00:07:54.492 00:07:54.492 Suite: iscsi_suite 00:07:54.492 Test: param_negotiation_test ...passed 00:07:54.492 Test: list_negotiation_test ...passed 00:07:54.492 Test: parse_valid_test ...passed 00:07:54.492 Test: parse_invalid_test ...[2024-07-25 18:34:55.016633] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:07:54.492 [2024-07-25 18:34:55.016970] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:07:54.492 [2024-07-25 18:34:55.017039] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:07:54.492 [2024-07-25 18:34:55.017137] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:07:54.492 [2024-07-25 18:34:55.017338] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:07:54.492 [2024-07-25 18:34:55.017433] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is 
bigger than 63 00:07:54.492 [2024-07-25 18:34:55.017605] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:07:54.492 passed 00:07:54.492 00:07:54.492 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.492 suites 1 1 n/a 0 0 00:07:54.492 tests 4 4 4 0 0 00:07:54.492 asserts 161 161 161 0 n/a 00:07:54.492 00:07:54.492 Elapsed time = 0.006 seconds 00:07:54.492 18:34:55 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:07:54.492 00:07:54.492 00:07:54.492 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.492 http://cunit.sourceforge.net/ 00:07:54.492 00:07:54.492 00:07:54.492 Suite: iscsi_target_node_suite 00:07:54.492 Test: add_lun_test_cases ...[2024-07-25 18:34:55.059028] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1252:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:07:54.492 [2024-07-25 18:34:55.059386] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:07:54.492 passed 00:07:54.492 Test: allow_any_allowed ...[2024-07-25 18:34:55.059488] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:54.492 [2024-07-25 18:34:55.059540] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:54.492 [2024-07-25 18:34:55.059583] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:07:54.492 passed 00:07:54.492 Test: allow_ipv6_allowed ...passed 00:07:54.492 Test: allow_ipv6_denied ...passed 00:07:54.492 Test: allow_ipv6_invalid ...passed 00:07:54.492 Test: allow_ipv4_allowed ...passed 00:07:54.492 Test: allow_ipv4_denied ...passed 00:07:54.492 Test: allow_ipv4_invalid ...passed 00:07:54.492 Test: node_access_allowed ...passed 00:07:54.492 Test: node_access_denied_by_empty_netmask ...passed 00:07:54.492 Test: node_access_multi_initiator_groups_cases ...passed 00:07:54.492 Test: allow_iscsi_name_multi_maps_case ...passed 00:07:54.492 Test: chap_param_test_cases ...[2024-07-25 18:34:55.060087] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:07:54.492 [2024-07-25 18:34:55.060137] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:07:54.492 [2024-07-25 18:34:55.060205] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:07:54.492 [2024-07-25 18:34:55.060238] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:07:54.492 [2024-07-25 18:34:55.060288] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:07:54.492 passed 00:07:54.492 00:07:54.492 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.492 suites 1 1 n/a 0 0 00:07:54.492 tests 13 13 13 0 0 00:07:54.492 asserts 50 50 50 0 n/a 00:07:54.492 00:07:54.492 Elapsed time = 0.001 seconds 00:07:54.752 18:34:55 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:07:54.752 00:07:54.752 00:07:54.752 CUnit - A unit testing 
framework for C - Version 2.1-3 00:07:54.752 http://cunit.sourceforge.net/ 00:07:54.752 00:07:54.752 00:07:54.752 Suite: iscsi_suite 00:07:54.752 Test: op_login_check_target_test ...passed 00:07:54.752 Test: op_login_session_normal_test ...[2024-07-25 18:34:55.109508] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1439:iscsi_op_login_check_target: *ERROR*: access denied 00:07:54.752 [2024-07-25 18:34:55.109877] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:54.752 [2024-07-25 18:34:55.109926] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:54.752 [2024-07-25 18:34:55.109966] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:54.752 [2024-07-25 18:34:55.110026] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:07:54.752 [2024-07-25 18:34:55.110128] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:54.752 passed 00:07:54.752 Test: maxburstlength_test ...[2024-07-25 18:34:55.110236] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:07:54.752 [2024-07-25 18:34:55.110293] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:54.752 [2024-07-25 18:34:55.110579] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:54.752 [2024-07-25 18:34:55.110639] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:07:54.752 passed 00:07:54.752 Test: underflow_for_read_transfer_test ...passed 00:07:54.752 Test: underflow_for_zero_read_transfer_test ...passed 00:07:54.752 Test: underflow_for_request_sense_test ...passed 00:07:54.752 Test: underflow_for_check_condition_test ...passed 00:07:54.752 Test: add_transfer_task_test ...passed 00:07:54.752 Test: get_transfer_task_test ...passed 00:07:54.752 Test: del_transfer_task_test ...passed 00:07:54.752 Test: clear_all_transfer_tasks_test ...passed 00:07:54.752 Test: build_iovs_test ...passed 00:07:54.752 Test: build_iovs_with_md_test ...passed 00:07:54.752 Test: pdu_hdr_op_login_test ...[2024-07-25 18:34:55.111998] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1256:iscsi_op_login_rsp_init: *ERROR*: transit error 00:07:54.752 [2024-07-25 18:34:55.112129] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:07:54.752 [2024-07-25 18:34:55.112210] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1277:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:07:54.752 passed 00:07:54.752 Test: pdu_hdr_op_text_test ...passed 00:07:54.752 Test: pdu_hdr_op_logout_test ...[2024-07-25 18:34:55.112330] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2258:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:54.752 [2024-07-25 18:34:55.112417] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2290:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:07:54.752 [2024-07-25 18:34:55.112462] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2303:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:07:54.752 [2024-07-25 18:34:55.112535] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2533:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:07:54.752 passed 00:07:54.752 Test: pdu_hdr_op_scsi_test ...[2024-07-25 18:34:55.112720] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:54.752 [2024-07-25 18:34:55.112759] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:54.752 [2024-07-25 18:34:55.112804] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3382:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:07:54.752 [2024-07-25 18:34:55.112897] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3415:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:54.752 [2024-07-25 18:34:55.112983] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3422:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:07:54.752 [2024-07-25 18:34:55.113151] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:07:54.752 passed 00:07:54.753 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-25 18:34:55.113254] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3623:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:07:54.753 [2024-07-25 18:34:55.113349] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3712:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:07:54.753 passed 00:07:54.753 Test: pdu_hdr_op_nopout_test ...[2024-07-25 18:34:55.113572] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3731:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:07:54.753 [2024-07-25 18:34:55.113644] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:54.753 [2024-07-25 18:34:55.113685] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:54.753 [2024-07-25 18:34:55.113718] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3761:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:07:54.753 passed 00:07:54.753 Test: pdu_hdr_op_data_test ...[2024-07-25 18:34:55.113757] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4204:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:07:54.753 [2024-07-25 18:34:55.113839] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:07:54.753 [2024-07-25 18:34:55.113909] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:54.753 [2024-07-25 18:34:55.113959] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:07:54.753 [2024-07-25 18:34:55.114020] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4240:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:07:54.753 [2024-07-25 18:34:55.114107] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:07:54.753 [2024-07-25 18:34:55.114143] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4261:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:07:54.753 passed 00:07:54.753 Test: empty_text_with_cbit_test ...passed 00:07:54.753 Test: pdu_payload_read_test ...[2024-07-25 18:34:55.115682] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4649:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:07:54.753 passed 00:07:54.753 Test: data_out_pdu_sequence_test ...passed 00:07:54.753 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:07:54.753 00:07:54.753 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.753 suites 1 1 n/a 0 0 00:07:54.753 tests 24 24 24 0 0 00:07:54.753 asserts 150253 150253 150253 0 n/a 00:07:54.753 00:07:54.753 Elapsed time = 0.013 seconds 00:07:54.753 18:34:55 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:07:54.753 00:07:54.753 00:07:54.753 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.753 http://cunit.sourceforge.net/ 00:07:54.753 00:07:54.753 00:07:54.753 Suite: init_grp_suite 00:07:54.753 Test: create_initiator_group_success_case ...passed 00:07:54.753 Test: find_initiator_group_success_case ...passed 00:07:54.753 Test: register_initiator_group_twice_case ...passed 00:07:54.753 Test: add_initiator_name_success_case ...passed 00:07:54.753 Test: add_initiator_name_fail_case ...[2024-07-25 18:34:55.166606] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:07:54.753 passed 00:07:54.753 Test: delete_all_initiator_names_success_case ...passed 00:07:54.753 Test: add_netmask_success_case ...passed 00:07:54.753 Test: add_netmask_fail_case ...passed 00:07:54.753 Test: delete_all_netmasks_success_case ...passed 00:07:54.753 Test: initiator_name_overwrite_all_to_any_case ...[2024-07-25 18:34:55.167176] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:07:54.753 passed 00:07:54.753 Test: netmask_overwrite_all_to_any_case ...passed 00:07:54.753 Test: add_delete_initiator_names_case ...passed 00:07:54.753 Test: add_duplicated_initiator_names_case ...passed 00:07:54.753 Test: delete_nonexisting_initiator_names_case ...passed 00:07:54.753 Test: add_delete_netmasks_case ...passed 00:07:54.753 Test: add_duplicated_netmasks_case ...passed 00:07:54.753 Test: delete_nonexisting_netmasks_case ...passed 00:07:54.753 00:07:54.753 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.753 suites 1 1 n/a 0 0 00:07:54.753 tests 17 17 17 0 0 00:07:54.753 asserts 108 108 108 0 n/a 00:07:54.753 00:07:54.753 Elapsed time = 0.001 seconds 00:07:54.753 18:34:55 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:07:54.753 00:07:54.753 00:07:54.753 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.753 http://cunit.sourceforge.net/ 00:07:54.753 00:07:54.753 00:07:54.753 Suite: portal_grp_suite 00:07:54.753 Test: portal_create_ipv4_normal_case ...passed 00:07:54.753 Test: portal_create_ipv6_normal_case ...passed 00:07:54.753 Test: portal_create_ipv4_wildcard_case ...passed 00:07:54.753 Test: portal_create_ipv6_wildcard_case ...passed 00:07:54.753 Test: portal_create_twice_case ...passed 
00:07:54.753 Test: portal_grp_register_unregister_case ...passed 00:07:54.753 Test: portal_grp_register_twice_case ...passed 00:07:54.753 Test: portal_grp_add_delete_case ...[2024-07-25 18:34:55.214034] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:07:54.753 passed 00:07:54.753 Test: portal_grp_add_delete_twice_case ...passed 00:07:54.753 00:07:54.753 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.753 suites 1 1 n/a 0 0 00:07:54.753 tests 9 9 9 0 0 00:07:54.753 asserts 44 44 44 0 n/a 00:07:54.753 00:07:54.753 Elapsed time = 0.003 seconds 00:07:54.753 00:07:54.753 real 0m0.295s 00:07:54.753 user 0m0.143s 00:07:54.753 sys 0m0.157s 00:07:54.753 18:34:55 unittest.unittest_iscsi -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.753 18:34:55 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:07:54.753 ************************************ 00:07:54.753 END TEST unittest_iscsi 00:07:54.753 ************************************ 00:07:54.753 18:34:55 unittest -- unit/unittest.sh@245 -- # run_test unittest_json unittest_json 00:07:54.753 18:34:55 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.753 18:34:55 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.753 18:34:55 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:54.753 ************************************ 00:07:54.753 START TEST unittest_json 00:07:54.753 ************************************ 00:07:54.753 18:34:55 unittest.unittest_json -- common/autotest_common.sh@1125 -- # unittest_json 00:07:54.753 18:34:55 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:07:55.013 00:07:55.013 00:07:55.013 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.013 http://cunit.sourceforge.net/ 00:07:55.013 00:07:55.013 00:07:55.013 Suite: json 00:07:55.013 Test: test_parse_literal ...passed 00:07:55.013 Test: test_parse_string_simple ...passed 00:07:55.013 Test: test_parse_string_control_chars ...passed 00:07:55.013 Test: test_parse_string_utf8 ...passed 00:07:55.013 Test: test_parse_string_escapes_twochar ...passed 00:07:55.013 Test: test_parse_string_escapes_unicode ...passed 00:07:55.013 Test: test_parse_number ...passed 00:07:55.013 Test: test_parse_array ...passed 00:07:55.013 Test: test_parse_object ...passed 00:07:55.013 Test: test_parse_nesting ...passed 00:07:55.014 Test: test_parse_comment ...passed 00:07:55.014 00:07:55.014 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.014 suites 1 1 n/a 0 0 00:07:55.014 tests 11 11 11 0 0 00:07:55.014 asserts 1516 1516 1516 0 n/a 00:07:55.014 00:07:55.014 Elapsed time = 0.002 seconds 00:07:55.014 18:34:55 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:07:55.014 00:07:55.014 00:07:55.014 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.014 http://cunit.sourceforge.net/ 00:07:55.014 00:07:55.014 00:07:55.014 Suite: json 00:07:55.014 Test: test_strequal ...passed 00:07:55.014 Test: test_num_to_uint16 ...passed 00:07:55.014 Test: test_num_to_int32 ...passed 00:07:55.014 Test: test_num_to_uint64 ...passed 00:07:55.014 Test: test_decode_object ...passed 00:07:55.014 Test: test_decode_array ...passed 00:07:55.014 Test: test_decode_bool ...passed 00:07:55.014 Test: test_decode_uint16 ...passed 00:07:55.014 Test: test_decode_int32 ...passed 
00:07:55.014 Test: test_decode_uint32 ...passed 00:07:55.014 Test: test_decode_uint64 ...passed 00:07:55.014 Test: test_decode_string ...passed 00:07:55.014 Test: test_decode_uuid ...passed 00:07:55.014 Test: test_find ...passed 00:07:55.014 Test: test_find_array ...passed 00:07:55.014 Test: test_iterating ...passed 00:07:55.014 Test: test_free_object ...passed 00:07:55.014 00:07:55.014 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.014 suites 1 1 n/a 0 0 00:07:55.014 tests 17 17 17 0 0 00:07:55.014 asserts 236 236 236 0 n/a 00:07:55.014 00:07:55.014 Elapsed time = 0.001 seconds 00:07:55.014 18:34:55 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:07:55.014 00:07:55.014 00:07:55.014 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.014 http://cunit.sourceforge.net/ 00:07:55.014 00:07:55.014 00:07:55.014 Suite: json 00:07:55.014 Test: test_write_literal ...passed 00:07:55.014 Test: test_write_string_simple ...passed 00:07:55.014 Test: test_write_string_escapes ...passed 00:07:55.014 Test: test_write_string_utf16le ...passed 00:07:55.014 Test: test_write_number_int32 ...passed 00:07:55.014 Test: test_write_number_uint32 ...passed 00:07:55.014 Test: test_write_number_uint128 ...passed 00:07:55.014 Test: test_write_string_number_uint128 ...passed 00:07:55.014 Test: test_write_number_int64 ...passed 00:07:55.014 Test: test_write_number_uint64 ...passed 00:07:55.014 Test: test_write_number_double ...passed 00:07:55.014 Test: test_write_uuid ...passed 00:07:55.014 Test: test_write_array ...passed 00:07:55.014 Test: test_write_object ...passed 00:07:55.014 Test: test_write_nesting ...passed 00:07:55.014 Test: test_write_val ...passed 00:07:55.014 00:07:55.014 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.014 suites 1 1 n/a 0 0 00:07:55.014 tests 16 16 16 0 0 00:07:55.014 asserts 918 918 918 0 n/a 00:07:55.014 00:07:55.014 Elapsed time = 0.005 seconds 00:07:55.014 18:34:55 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:07:55.014 00:07:55.014 00:07:55.014 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.014 http://cunit.sourceforge.net/ 00:07:55.014 00:07:55.014 00:07:55.014 Suite: jsonrpc 00:07:55.014 Test: test_parse_request ...passed 00:07:55.014 Test: test_parse_request_streaming ...passed 00:07:55.014 00:07:55.014 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.014 suites 1 1 n/a 0 0 00:07:55.014 tests 2 2 2 0 0 00:07:55.014 asserts 289 289 289 0 n/a 00:07:55.014 00:07:55.014 Elapsed time = 0.005 seconds 00:07:55.014 00:07:55.014 real 0m0.166s 00:07:55.014 user 0m0.070s 00:07:55.014 sys 0m0.098s 00:07:55.014 18:34:55 unittest.unittest_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.014 18:34:55 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:07:55.014 ************************************ 00:07:55.014 END TEST unittest_json 00:07:55.014 ************************************ 00:07:55.014 18:34:55 unittest -- unit/unittest.sh@246 -- # run_test unittest_rpc unittest_rpc 00:07:55.014 18:34:55 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:55.014 18:34:55 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.014 18:34:55 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:55.014 ************************************ 00:07:55.014 START TEST unittest_rpc 00:07:55.014 
************************************ 00:07:55.014 18:34:55 unittest.unittest_rpc -- common/autotest_common.sh@1125 -- # unittest_rpc 00:07:55.014 18:34:55 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:07:55.014 00:07:55.014 00:07:55.014 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.014 http://cunit.sourceforge.net/ 00:07:55.014 00:07:55.014 00:07:55.014 Suite: rpc 00:07:55.014 Test: test_jsonrpc_handler ...passed 00:07:55.014 Test: test_spdk_rpc_is_method_allowed ...passed 00:07:55.014 Test: test_rpc_get_methods ...passed 00:07:55.014 Test: test_rpc_spdk_get_version ...passed 00:07:55.014 Test: test_spdk_rpc_listen_close ...passed 00:07:55.014 Test: test_rpc_run_multiple_servers ...[2024-07-25 18:34:55.569548] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:07:55.014 passed 00:07:55.014 00:07:55.014 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.014 suites 1 1 n/a 0 0 00:07:55.014 tests 6 6 6 0 0 00:07:55.014 asserts 23 23 23 0 n/a 00:07:55.014 00:07:55.014 Elapsed time = 0.001 seconds 00:07:55.274 00:07:55.274 real 0m0.046s 00:07:55.274 user 0m0.014s 00:07:55.274 sys 0m0.032s 00:07:55.274 18:34:55 unittest.unittest_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.274 18:34:55 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.274 ************************************ 00:07:55.274 END TEST unittest_rpc 00:07:55.274 ************************************ 00:07:55.274 18:34:55 unittest -- unit/unittest.sh@247 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:55.274 18:34:55 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:55.274 18:34:55 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.274 18:34:55 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:55.274 ************************************ 00:07:55.274 START TEST unittest_notify 00:07:55.274 ************************************ 00:07:55.274 18:34:55 unittest.unittest_notify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:55.274 00:07:55.274 00:07:55.274 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.274 http://cunit.sourceforge.net/ 00:07:55.274 00:07:55.274 00:07:55.274 Suite: app_suite 00:07:55.274 Test: notify ...passed 00:07:55.274 00:07:55.274 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.274 suites 1 1 n/a 0 0 00:07:55.274 tests 1 1 1 0 0 00:07:55.274 asserts 13 13 13 0 n/a 00:07:55.274 00:07:55.274 Elapsed time = 0.000 seconds 00:07:55.274 00:07:55.274 real 0m0.044s 00:07:55.274 user 0m0.031s 00:07:55.274 sys 0m0.013s 00:07:55.274 18:34:55 unittest.unittest_notify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.274 18:34:55 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:07:55.274 ************************************ 00:07:55.274 END TEST unittest_notify 00:07:55.274 ************************************ 00:07:55.274 18:34:55 unittest -- unit/unittest.sh@248 -- # run_test unittest_nvme unittest_nvme 00:07:55.274 18:34:55 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:55.274 18:34:55 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.274 18:34:55 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:55.274 ************************************ 00:07:55.274 
START TEST unittest_nvme 00:07:55.274 ************************************ 00:07:55.274 18:34:55 unittest.unittest_nvme -- common/autotest_common.sh@1125 -- # unittest_nvme 00:07:55.274 18:34:55 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:07:55.274 00:07:55.274 00:07:55.274 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.274 http://cunit.sourceforge.net/ 00:07:55.274 00:07:55.274 00:07:55.274 Suite: nvme 00:07:55.274 Test: test_opc_data_transfer ...passed 00:07:55.274 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:07:55.274 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:07:55.274 Test: test_trid_parse_and_compare ...[2024-07-25 18:34:55.807801] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:07:55.274 [2024-07-25 18:34:55.808123] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:55.274 [2024-07-25 18:34:55.808232] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1211:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:07:55.274 [2024-07-25 18:34:55.808281] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:55.274 [2024-07-25 18:34:55.808326] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1222:parse_next_key: *ERROR*: Key without value 00:07:55.274 [2024-07-25 18:34:55.808452] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:55.274 passed 00:07:55.275 Test: test_trid_trtype_str ...passed 00:07:55.275 Test: test_trid_adrfam_str ...passed 00:07:55.275 Test: test_nvme_ctrlr_probe ...[2024-07-25 18:34:55.808741] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:55.275 passed 00:07:55.275 Test: test_spdk_nvme_probe ...[2024-07-25 18:34:55.808860] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:55.275 [2024-07-25 18:34:55.808904] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:55.275 [2024-07-25 18:34:55.809016] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:07:55.275 [2024-07-25 18:34:55.809068] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:55.275 passed 00:07:55.275 Test: test_spdk_nvme_connect ...[2024-07-25 18:34:55.809181] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1010:spdk_nvme_connect: *ERROR*: No transport ID specified 00:07:55.275 [2024-07-25 18:34:55.809636] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:55.275 passed 00:07:55.275 Test: test_nvme_ctrlr_probe_internal ...[2024-07-25 18:34:55.809846] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:55.275 passed 00:07:55.275 Test: test_nvme_init_controllers ...passed[2024-07-25 18:34:55.809893] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:07:55.275 [2024-07-25 18:34:55.809996] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize 
SSD: 00:07:55.275 00:07:55.275 Test: test_nvme_driver_init ...[2024-07-25 18:34:55.810109] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:07:55.275 [2024-07-25 18:34:55.810160] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:55.535 [2024-07-25 18:34:55.926934] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:07:55.535 passed 00:07:55.535 Test: test_spdk_nvme_detach ...passed 00:07:55.535 Test: test_nvme_completion_poll_cb ...passed 00:07:55.535 Test: test_nvme_user_copy_cmd_complete ...[2024-07-25 18:34:55.927169] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:07:55.535 passed 00:07:55.535 Test: test_nvme_allocate_request_null ...passed 00:07:55.535 Test: test_nvme_allocate_request ...passed 00:07:55.535 Test: test_nvme_free_request ...passed 00:07:55.535 Test: test_nvme_allocate_request_user_copy ...passed 00:07:55.535 Test: test_nvme_robust_mutex_init_shared ...passed 00:07:55.535 Test: test_nvme_request_check_timeout ...passed 00:07:55.535 Test: test_nvme_wait_for_completion ...passed 00:07:55.535 Test: test_spdk_nvme_parse_func ...passed 00:07:55.535 Test: test_spdk_nvme_detach_async ...passed 00:07:55.535 Test: test_nvme_parse_addr ...[2024-07-25 18:34:55.928206] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1635:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:07:55.535 passed 00:07:55.535 00:07:55.535 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.535 suites 1 1 n/a 0 0 00:07:55.535 tests 25 25 25 0 0 00:07:55.535 asserts 326 326 326 0 n/a 00:07:55.535 00:07:55.535 Elapsed time = 0.008 seconds 00:07:55.535 18:34:55 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:07:55.535 00:07:55.535 00:07:55.535 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.535 http://cunit.sourceforge.net/ 00:07:55.535 00:07:55.535 00:07:55.535 Suite: nvme_ctrlr 00:07:55.535 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-25 18:34:55.971963] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:55.535 passed 00:07:55.535 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-25 18:34:55.973842] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:55.535 passed 00:07:55.535 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-25 18:34:55.975269] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:55.535 passed 00:07:55.535 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-25 18:34:55.976641] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:55.535 passed 00:07:55.535 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-25 18:34:55.977944] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:55.535 [2024-07-25 18:34:55.979266] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4070:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-25 18:34:55.980650] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4070:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-25 18:34:55.982165] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4070:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:55.535 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-25 18:34:55.984761] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:55.535 [2024-07-25 18:34:55.987340] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4070:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-25 18:34:55.988674] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4070:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:55.535 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-25 18:34:55.991437] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:55.535 [2024-07-25 18:34:55.992844] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4070:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-25 18:34:55.995449] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4070:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:55.535 Test: test_nvme_ctrlr_init_delay ...[2024-07-25 18:34:55.998236] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:55.535 passed 00:07:55.535 Test: test_alloc_io_qpair_rr_1 ...[2024-07-25 18:34:55.999676] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:55.535 [2024-07-25 18:34:55.999968] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:55.535 [2024-07-25 18:34:56.000271] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:55.535 [2024-07-25 18:34:56.000385] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:55.535 [2024-07-25 18:34:56.000463] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:55.535 passed 00:07:55.535 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:07:55.535 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:07:55.535 Test: test_alloc_io_qpair_wrr_1 ...[2024-07-25 18:34:56.000630] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:55.535 passed 00:07:55.535 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-25 18:34:56.000932] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use 
min value 00:07:55.535 [2024-07-25 18:34:56.001129] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:55.535 passed 00:07:55.535 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-25 18:34:56.001566] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4997:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:07:55.535 [2024-07-25 18:34:56.001846] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5034:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:55.535 [2024-07-25 18:34:56.002031] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5074:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:07:55.535 [2024-07-25 18:34:56.002172] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5034:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:55.535 passed 00:07:55.535 Test: test_nvme_ctrlr_fail ...[2024-07-25 18:34:56.002296] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:07:55.535 passed 00:07:55.535 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:07:55.535 Test: test_nvme_ctrlr_set_supported_features ...passed 00:07:55.535 Test: test_nvme_ctrlr_set_host_feature ...[2024-07-25 18:34:56.002539] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:55.535 passed 00:07:55.535 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:07:55.535 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-25 18:34:56.004007] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:55.795 passed 00:07:55.795 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:07:55.795 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:07:55.795 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:07:55.795 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-25 18:34:56.350917] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:55.795 passed 00:07:55.795 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-25 18:34:56.358757] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:55.795 passed 00:07:55.795 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-25 18:34:56.359986] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:55.795 [2024-07-25 18:34:56.360094] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3006:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:07:55.795 passed 00:07:55.795 Test: test_alloc_io_qpair_fail ...[2024-07-25 18:34:56.361324] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:55.795 passed 00:07:55.795 Test: test_nvme_ctrlr_add_remove_process ...passed 00:07:55.795 Test: 
test_nvme_ctrlr_set_arbitration_feature ...passed 00:07:55.795 Test: test_nvme_ctrlr_set_state ...passed 00:07:55.795 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-25 18:34:56.361442] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 506:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:07:55.795 [2024-07-25 18:34:56.361608] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1550:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 00:07:55.795 [2024-07-25 18:34:56.361670] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:56.055 passed 00:07:56.055 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-25 18:34:56.387768] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:56.055 passed 00:07:56.055 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-25 18:34:56.430866] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:56.055 passed 00:07:56.055 Test: test_nvme_ctrlr_reset ...[2024-07-25 18:34:56.432485] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:56.055 passed 00:07:56.055 Test: test_nvme_ctrlr_aer_callback ...[2024-07-25 18:34:56.433013] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:56.055 passed 00:07:56.055 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-25 18:34:56.434349] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:56.055 passed 00:07:56.055 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:07:56.055 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:07:56.055 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-25 18:34:56.436224] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:56.055 passed 00:07:56.055 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:07:56.055 Test: test_nvme_ctrlr_ana_resize ...[2024-07-25 18:34:56.437516] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:56.055 passed 00:07:56.055 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:07:56.055 Test: test_nvme_transport_ctrlr_ready ...passed 00:07:56.055 Test: test_nvme_ctrlr_disable ...[2024-07-25 18:34:56.439008] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4156:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:07:56.055 [2024-07-25 18:34:56.439058] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4208:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 53 (error) 00:07:56.055 [2024-07-25 18:34:56.439111] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4276:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is 
less than minimum defined by NVMe spec, use min value 00:07:56.055 passed 00:07:56.055 00:07:56.055 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.055 suites 1 1 n/a 0 0 00:07:56.055 tests 44 44 44 0 0 00:07:56.055 asserts 10434 10434 10434 0 n/a 00:07:56.055 00:07:56.055 Elapsed time = 0.425 seconds 00:07:56.055 18:34:56 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:07:56.055 00:07:56.055 00:07:56.055 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.055 http://cunit.sourceforge.net/ 00:07:56.055 00:07:56.055 00:07:56.055 Suite: nvme_ctrlr_cmd 00:07:56.055 Test: test_get_log_pages ...passed 00:07:56.055 Test: test_set_feature_cmd ...passed 00:07:56.055 Test: test_set_feature_ns_cmd ...passed 00:07:56.055 Test: test_get_feature_cmd ...passed 00:07:56.055 Test: test_get_feature_ns_cmd ...passed 00:07:56.055 Test: test_abort_cmd ...passed 00:07:56.055 Test: test_set_host_id_cmds ...[2024-07-25 18:34:56.501379] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:07:56.055 passed 00:07:56.055 Test: test_io_cmd_raw_no_payload_build ...passed 00:07:56.055 Test: test_io_raw_cmd ...passed 00:07:56.055 Test: test_io_raw_cmd_with_md ...passed 00:07:56.055 Test: test_namespace_attach ...passed 00:07:56.055 Test: test_namespace_detach ...passed 00:07:56.055 Test: test_namespace_create ...passed 00:07:56.055 Test: test_namespace_delete ...passed 00:07:56.055 Test: test_doorbell_buffer_config ...passed 00:07:56.055 Test: test_format_nvme ...passed 00:07:56.055 Test: test_fw_commit ...passed 00:07:56.055 Test: test_fw_image_download ...passed 00:07:56.055 Test: test_sanitize ...passed 00:07:56.055 Test: test_directive ...passed 00:07:56.055 Test: test_nvme_request_add_abort ...passed 00:07:56.055 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:07:56.055 Test: test_nvme_ctrlr_cmd_identify ...passed 00:07:56.055 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:07:56.055 00:07:56.055 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.055 suites 1 1 n/a 0 0 00:07:56.055 tests 24 24 24 0 0 00:07:56.055 asserts 198 198 198 0 n/a 00:07:56.055 00:07:56.055 Elapsed time = 0.001 seconds 00:07:56.055 18:34:56 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:07:56.055 00:07:56.055 00:07:56.055 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.055 http://cunit.sourceforge.net/ 00:07:56.055 00:07:56.055 00:07:56.055 Suite: nvme_ctrlr_cmd 00:07:56.055 Test: test_geometry_cmd ...passed 00:07:56.055 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:07:56.055 00:07:56.055 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.055 suites 1 1 n/a 0 0 00:07:56.055 tests 2 2 2 0 0 00:07:56.055 asserts 7 7 7 0 n/a 00:07:56.055 00:07:56.055 Elapsed time = 0.000 seconds 00:07:56.055 18:34:56 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:07:56.055 00:07:56.055 00:07:56.055 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.055 http://cunit.sourceforge.net/ 00:07:56.056 00:07:56.056 00:07:56.056 Suite: nvme 00:07:56.056 Test: test_nvme_ns_construct ...passed 00:07:56.056 Test: test_nvme_ns_uuid ...passed 00:07:56.056 Test: test_nvme_ns_csi ...passed 00:07:56.056 Test: test_nvme_ns_data ...passed 
00:07:56.056 Test: test_nvme_ns_set_identify_data ...passed 00:07:56.056 Test: test_spdk_nvme_ns_get_values ...passed 00:07:56.056 Test: test_spdk_nvme_ns_is_active ...passed 00:07:56.056 Test: spdk_nvme_ns_supports ...passed 00:07:56.056 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:07:56.056 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:07:56.056 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:07:56.056 Test: test_nvme_ns_find_id_desc ...passed 00:07:56.056 00:07:56.056 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.056 suites 1 1 n/a 0 0 00:07:56.056 tests 12 12 12 0 0 00:07:56.056 asserts 95 95 95 0 n/a 00:07:56.056 00:07:56.056 Elapsed time = 0.000 seconds 00:07:56.056 18:34:56 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:07:56.056 00:07:56.056 00:07:56.056 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.056 http://cunit.sourceforge.net/ 00:07:56.056 00:07:56.056 00:07:56.056 Suite: nvme_ns_cmd 00:07:56.056 Test: split_test ...passed 00:07:56.056 Test: split_test2 ...passed 00:07:56.056 Test: split_test3 ...passed 00:07:56.056 Test: split_test4 ...passed 00:07:56.056 Test: test_nvme_ns_cmd_flush ...passed 00:07:56.056 Test: test_nvme_ns_cmd_dataset_management ...passed 00:07:56.056 Test: test_nvme_ns_cmd_copy ...passed 00:07:56.056 Test: test_io_flags ...[2024-07-25 18:34:56.592750] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:07:56.056 passed 00:07:56.056 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:07:56.056 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:07:56.056 Test: test_nvme_ns_cmd_reservation_register ...passed 00:07:56.056 Test: test_nvme_ns_cmd_reservation_release ...passed 00:07:56.056 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:07:56.056 Test: test_nvme_ns_cmd_reservation_report ...passed 00:07:56.056 Test: test_cmd_child_request ...passed 00:07:56.056 Test: test_nvme_ns_cmd_readv ...passed 00:07:56.056 Test: test_nvme_ns_cmd_read_with_md ...passed 00:07:56.056 Test: test_nvme_ns_cmd_writev ...passed 00:07:56.056 Test: test_nvme_ns_cmd_write_with_md ...[2024-07-25 18:34:56.593727] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:07:56.056 passed 00:07:56.056 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:07:56.056 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:07:56.056 Test: test_nvme_ns_cmd_comparev ...passed 00:07:56.056 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:07:56.056 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:07:56.056 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:07:56.056 Test: test_nvme_ns_cmd_setup_request ...passed 00:07:56.056 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:07:56.056 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:07:56.056 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-07-25 18:34:56.595285] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:56.056 [2024-07-25 18:34:56.595368] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:56.056 passed 00:07:56.056 Test: test_nvme_ns_cmd_verify ...passed 00:07:56.056 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:07:56.056 Test: test_nvme_ns_cmd_io_mgmt_recv 
...passed 00:07:56.056 00:07:56.056 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.056 suites 1 1 n/a 0 0 00:07:56.056 tests 32 32 32 0 0 00:07:56.056 asserts 550 550 550 0 n/a 00:07:56.056 00:07:56.056 Elapsed time = 0.004 seconds 00:07:56.056 18:34:56 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:07:56.316 00:07:56.316 00:07:56.316 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.316 http://cunit.sourceforge.net/ 00:07:56.316 00:07:56.316 00:07:56.316 Suite: nvme_ns_cmd 00:07:56.316 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:07:56.316 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:07:56.316 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:07:56.316 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:07:56.316 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:07:56.316 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:07:56.316 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:07:56.316 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:07:56.316 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:07:56.316 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:07:56.316 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:07:56.316 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:07:56.316 00:07:56.316 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.316 suites 1 1 n/a 0 0 00:07:56.316 tests 12 12 12 0 0 00:07:56.316 asserts 123 123 123 0 n/a 00:07:56.316 00:07:56.316 Elapsed time = 0.001 seconds 00:07:56.316 18:34:56 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:07:56.316 00:07:56.316 00:07:56.316 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.316 http://cunit.sourceforge.net/ 00:07:56.316 00:07:56.316 00:07:56.316 Suite: nvme_qpair 00:07:56.316 Test: test3 ...passed 00:07:56.316 Test: test_ctrlr_failed ...passed 00:07:56.316 Test: struct_packing ...passed 00:07:56.316 Test: test_nvme_qpair_process_completions ...[2024-07-25 18:34:56.660024] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:56.316 passed 00:07:56.316 Test: test_nvme_completion_is_retry ...[2024-07-25 18:34:56.660290] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:56.316 [2024-07-25 18:34:56.660334] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:07:56.316 [2024-07-25 18:34:56.660412] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:07:56.316 passed 00:07:56.316 Test: test_get_status_string ...passed 00:07:56.316 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:07:56.316 Test: test_nvme_qpair_submit_request ...passed 00:07:56.316 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:07:56.316 Test: test_nvme_qpair_manual_complete_request ...passed 00:07:56.316 Test: test_nvme_qpair_init_deinit ...passed 00:07:56.316 Test: test_nvme_get_sgl_print_info ...passed 00:07:56.316 00:07:56.316 Run Summary: Type Total Ran Passed 
Failed Inactive 00:07:56.316 suites 1 1 n/a 0 0 00:07:56.316 tests 12 12 12 0 0 00:07:56.316 asserts 154 154 154 0 n/a 00:07:56.316 00:07:56.316 Elapsed time = 0.001 seconds 00:07:56.316 [2024-07-25 18:34:56.660744] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:56.316 18:34:56 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:07:56.316 00:07:56.316 00:07:56.316 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.316 http://cunit.sourceforge.net/ 00:07:56.316 00:07:56.316 00:07:56.316 Suite: nvme_pcie 00:07:56.317 Test: test_prp_list_append ...[2024-07-25 18:34:56.697899] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1206:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:56.317 [2024-07-25 18:34:56.698236] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:07:56.317 [2024-07-25 18:34:56.698295] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1225:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:07:56.317 [2024-07-25 18:34:56.698566] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1219:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:56.317 [2024-07-25 18:34:56.698675] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1219:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:56.317 passed 00:07:56.317 Test: test_nvme_pcie_hotplug_monitor ...passed 00:07:56.317 Test: test_shadow_doorbell_update ...passed 00:07:56.317 Test: test_build_contig_hw_sgl_request ...passed 00:07:56.317 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:07:56.317 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:07:56.317 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:07:56.317 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-07-25 18:34:56.698848] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1206:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:56.317 passed 00:07:56.317 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:07:56.317 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:07:56.317 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:07:56.317 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:07:56.317 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-07-25 18:34:56.698947] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:07:56.317 [2024-07-25 18:34:56.699040] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:07:56.317 passed 00:07:56.317 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:07:56.317 00:07:56.317 [2024-07-25 18:34:56.699099] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:07:56.317 [2024-07-25 18:34:56.699156] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:07:56.317 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.317 suites 1 1 n/a 0 0 00:07:56.317 tests 14 14 14 0 0 00:07:56.317 asserts 235 235 235 0 n/a 00:07:56.317 00:07:56.317 Elapsed time = 0.001 seconds 00:07:56.317 18:34:56 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:07:56.317 00:07:56.317 00:07:56.317 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.317 http://cunit.sourceforge.net/ 00:07:56.317 00:07:56.317 00:07:56.317 Suite: nvme_ns_cmd 00:07:56.317 Test: nvme_poll_group_create_test ...passed 00:07:56.317 Test: nvme_poll_group_add_remove_test ...passed 00:07:56.317 Test: nvme_poll_group_process_completions ...passed 00:07:56.317 Test: nvme_poll_group_destroy_test ...passed 00:07:56.317 Test: nvme_poll_group_get_free_stats ...passed 00:07:56.317 00:07:56.317 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.317 suites 1 1 n/a 0 0 00:07:56.317 tests 5 5 5 0 0 00:07:56.317 asserts 75 75 75 0 n/a 00:07:56.317 00:07:56.317 Elapsed time = 0.001 seconds 00:07:56.317 18:34:56 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:07:56.317 00:07:56.317 00:07:56.317 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.317 http://cunit.sourceforge.net/ 00:07:56.317 00:07:56.317 00:07:56.317 Suite: nvme_quirks 00:07:56.317 Test: test_nvme_quirks_striping ...passed 00:07:56.317 00:07:56.317 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.317 suites 1 1 n/a 0 0 00:07:56.317 tests 1 1 1 0 0 00:07:56.317 asserts 5 5 5 0 n/a 00:07:56.317 00:07:56.317 Elapsed time = 0.000 seconds 00:07:56.317 18:34:56 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:07:56.317 00:07:56.317 00:07:56.317 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.317 http://cunit.sourceforge.net/ 00:07:56.317 00:07:56.317 00:07:56.317 Suite: nvme_tcp 00:07:56.317 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:07:56.317 Test: test_nvme_tcp_build_iovs ...passed 00:07:56.317 Test: test_nvme_tcp_build_sgl_request ...[2024-07-25 18:34:56.832568] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 848:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffe30f68d00, and the iovcnt=16, remaining_size=28672 00:07:56.317 passed 00:07:56.317 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:07:56.317 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:07:56.317 Test: test_nvme_tcp_req_complete_safe ...passed 00:07:56.317 Test: test_nvme_tcp_req_get ...passed 00:07:56.317 Test: test_nvme_tcp_req_init ...passed 00:07:56.317 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:07:56.317 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:07:56.317 Test: test_nvme_tcp_qpair_set_recv_state ...[2024-07-25 
18:34:56.833252] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe30f6aa40 is same with the state(6) to be set 00:07:56.317 passed 00:07:56.317 Test: test_nvme_tcp_alloc_reqs ...passed 00:07:56.317 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-07-25 18:34:56.833684] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe30f69bf0 is same with the state(5) to be set 00:07:56.317 passed 00:07:56.317 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-25 18:34:56.833784] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1190:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffe30f6a780 00:07:56.317 [2024-07-25 18:34:56.833853] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1249:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:07:56.317 [2024-07-25 18:34:56.833976] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe30f6a0b0 is same with the state(5) to be set 00:07:56.317 [2024-07-25 18:34:56.834049] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1200:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:07:56.317 [2024-07-25 18:34:56.834158] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe30f6a0b0 is same with the state(5) to be set 00:07:56.317 [2024-07-25 18:34:56.834218] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:07:56.317 [2024-07-25 18:34:56.834263] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe30f6a0b0 is same with the state(5) to be set 00:07:56.317 [2024-07-25 18:34:56.834326] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe30f6a0b0 is same with the state(5) to be set 00:07:56.317 [2024-07-25 18:34:56.834382] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe30f6a0b0 is same with the state(5) to be set 00:07:56.317 [2024-07-25 18:34:56.834455] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe30f6a0b0 is same with the state(5) to be set 00:07:56.317 [2024-07-25 18:34:56.834503] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe30f6a0b0 is same with the state(5) to be set 00:07:56.317 [2024-07-25 18:34:56.834560] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe30f6a0b0 is same with the state(5) to be set 00:07:56.317 passed 00:07:56.317 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-25 18:34:56.834767] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:07:56.317 [2024-07-25 18:34:56.834826] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:07:56.317 [2024-07-25 18:34:56.835119] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:07:56.317 passed 
00:07:56.317 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:07:56.317 Test: test_nvme_tcp_c2h_payload_handle ...passed 00:07:56.317 Test: test_nvme_tcp_icresp_handle ...[2024-07-25 18:34:56.835252] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1357:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffe30f6a2c0): PDU Sequence Error 00:07:56.318 [2024-07-25 18:34:56.835335] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1576:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:07:56.318 [2024-07-25 18:34:56.835389] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1583:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:07:56.318 [2024-07-25 18:34:56.835452] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe30f69c00 is same with the state(5) to be set 00:07:56.318 [2024-07-25 18:34:56.835509] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1592:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:07:56.318 passed 00:07:56.318 Test: test_nvme_tcp_pdu_payload_handle ...[2024-07-25 18:34:56.835569] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe30f69c00 is same with the state(5) to be set 00:07:56.318 [2024-07-25 18:34:56.835642] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe30f69c00 is same with the state(0) to be set 00:07:56.318 [2024-07-25 18:34:56.835736] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1357:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffe30f6a780): PDU Sequence Error 00:07:56.318 passed 00:07:56.318 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-07-25 18:34:56.835840] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffe30f68ec0 00:07:56.318 passed 00:07:56.318 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:07:56.318 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-07-25 18:34:56.836016] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 358:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffe30f68540, errno=0, rc=0 00:07:56.318 [2024-07-25 18:34:56.836087] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe30f68540 is same with the state(5) to be set 00:07:56.318 [2024-07-25 18:34:56.836171] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe30f68540 is same with the state(5) to be set 00:07:56.318 [2024-07-25 18:34:56.836235] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffe30f68540 (0): Success 00:07:56.318 [2024-07-25 18:34:56.836301] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffe30f68540 (0): Success 00:07:56.318 passed 00:07:56.577 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-25 18:34:57.003385] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:07:56.577 [2024-07-25 18:34:57.003522] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:07:56.577 passed 00:07:56.577 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:07:56.577 Test: test_nvme_tcp_poll_group_get_stats ...[2024-07-25 18:34:57.003849] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:56.577 [2024-07-25 18:34:57.003902] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:56.577 passed 00:07:56.577 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-25 18:34:57.004155] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:07:56.577 [2024-07-25 18:34:57.004227] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:56.577 [2024-07-25 18:34:57.004362] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:07:56.577 [2024-07-25 18:34:57.004442] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:56.577 passed 00:07:56.577 Test: test_nvme_tcp_qpair_submit_request ...[2024-07-25 18:34:57.004584] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000007d80 with addr=192.168.1.78, port=23 00:07:56.577 [2024-07-25 18:34:57.004671] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:56.577 [2024-07-25 18:34:57.004838] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 848:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x614000000c40, and the iovcnt=1, remaining_size=1024 00:07:56.577 [2024-07-25 18:34:57.004901] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1035:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:07:56.577 passed 00:07:56.577 00:07:56.577 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.577 suites 1 1 n/a 0 0 00:07:56.577 tests 27 27 27 0 0 00:07:56.577 asserts 624 624 624 0 n/a 00:07:56.577 00:07:56.577 Elapsed time = 0.172 seconds 00:07:56.577 18:34:57 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:07:56.577 00:07:56.577 00:07:56.577 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.577 http://cunit.sourceforge.net/ 00:07:56.577 00:07:56.577 00:07:56.577 Suite: nvme_transport 00:07:56.577 Test: test_nvme_get_transport ...passed 00:07:56.577 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:07:56.577 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:07:56.577 Test: test_nvme_transport_poll_group_add_remove ...passed 00:07:56.577 Test: test_ctrlr_get_memory_domains ...passed 00:07:56.577 00:07:56.577 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.577 suites 1 1 n/a 0 0 00:07:56.577 tests 5 5 5 0 0 00:07:56.577 asserts 28 28 28 0 n/a 00:07:56.578 00:07:56.578 Elapsed time = 0.000 seconds 00:07:56.578 18:34:57 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:07:56.578 00:07:56.578 00:07:56.578 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.578 http://cunit.sourceforge.net/ 00:07:56.578 00:07:56.578 00:07:56.578 Suite: nvme_io_msg 00:07:56.578 Test: 
test_nvme_io_msg_send ...passed 00:07:56.578 Test: test_nvme_io_msg_process ...passed 00:07:56.578 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:07:56.578 00:07:56.578 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.578 suites 1 1 n/a 0 0 00:07:56.578 tests 3 3 3 0 0 00:07:56.578 asserts 56 56 56 0 n/a 00:07:56.578 00:07:56.578 Elapsed time = 0.000 seconds 00:07:56.578 18:34:57 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:07:56.578 00:07:56.578 00:07:56.578 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.578 http://cunit.sourceforge.net/ 00:07:56.578 00:07:56.578 00:07:56.578 Suite: nvme_pcie_common 00:07:56.578 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-07-25 18:34:57.141209] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:07:56.578 passed 00:07:56.578 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:07:56.578 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:07:56.578 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-07-25 18:34:57.142226] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 505:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:07:56.578 [2024-07-25 18:34:57.142377] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 458:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:07:56.578 [2024-07-25 18:34:57.142428] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 552:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:07:56.578 passed 00:07:56.578 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:07:56.578 Test: test_nvme_pcie_poll_group_get_stats ...[2024-07-25 18:34:57.142947] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1804:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:56.578 [2024-07-25 18:34:57.143016] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1804:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:56.578 passed 00:07:56.578 00:07:56.578 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.578 suites 1 1 n/a 0 0 00:07:56.578 tests 6 6 6 0 0 00:07:56.578 asserts 148 148 148 0 n/a 00:07:56.578 00:07:56.578 Elapsed time = 0.002 seconds 00:07:56.837 18:34:57 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:07:56.838 00:07:56.838 00:07:56.838 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.838 http://cunit.sourceforge.net/ 00:07:56.838 00:07:56.838 00:07:56.838 Suite: nvme_fabric 00:07:56.838 Test: test_nvme_fabric_prop_set_cmd ...passed 00:07:56.838 Test: test_nvme_fabric_prop_get_cmd ...passed 00:07:56.838 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:07:56.838 Test: test_nvme_fabric_discover_probe ...passed 00:07:56.838 Test: test_nvme_fabric_qpair_connect ...[2024-07-25 18:34:57.188985] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:07:56.838 passed 00:07:56.838 00:07:56.838 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.838 suites 1 1 n/a 0 0 00:07:56.838 tests 5 5 5 0 0 00:07:56.838 asserts 60 60 60 0 n/a 00:07:56.838 
00:07:56.838 Elapsed time = 0.001 seconds 00:07:56.838 18:34:57 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:07:56.838 00:07:56.838 00:07:56.838 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.838 http://cunit.sourceforge.net/ 00:07:56.838 00:07:56.838 00:07:56.838 Suite: nvme_opal 00:07:56.838 Test: test_opal_nvme_security_recv_send_done ...passed 00:07:56.838 Test: test_opal_add_short_atom_header ...passed 00:07:56.838 00:07:56.838 [2024-07-25 18:34:57.225159] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:07:56.838 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.838 suites 1 1 n/a 0 0 00:07:56.838 tests 2 2 2 0 0 00:07:56.838 asserts 22 22 22 0 n/a 00:07:56.838 00:07:56.838 Elapsed time = 0.000 seconds 00:07:56.838 00:07:56.838 real 0m1.458s 00:07:56.838 user 0m0.684s 00:07:56.838 sys 0m0.632s 00:07:56.838 18:34:57 unittest.unittest_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.838 18:34:57 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.838 ************************************ 00:07:56.838 END TEST unittest_nvme 00:07:56.838 ************************************ 00:07:56.838 18:34:57 unittest -- unit/unittest.sh@249 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:56.838 18:34:57 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:56.838 18:34:57 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.838 18:34:57 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:56.838 ************************************ 00:07:56.838 START TEST unittest_log 00:07:56.838 ************************************ 00:07:56.838 18:34:57 unittest.unittest_log -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:56.838 00:07:56.838 00:07:56.838 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.838 http://cunit.sourceforge.net/ 00:07:56.838 00:07:56.838 00:07:56.838 Suite: log 00:07:56.838 Test: log_test ...[2024-07-25 18:34:57.336812] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:07:56.838 [2024-07-25 18:34:57.337106] log_ut.c: 57:log_test: *DEBUG*: log test 00:07:56.838 passed 00:07:56.838 Test: deprecation ...log dump test: 00:07:56.838 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:07:56.838 spdk dump test: 00:07:56.838 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:07:56.838 spdk dump test: 00:07:56.838 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:07:56.838 00000010 65 20 63 68 61 72 73 e chars 00:07:57.776 passed 00:07:57.776 00:07:57.776 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.776 suites 1 1 n/a 0 0 00:07:57.776 tests 2 2 2 0 0 00:07:57.776 asserts 73 73 73 0 n/a 00:07:57.776 00:07:57.776 Elapsed time = 0.001 seconds 00:07:58.036 00:07:58.036 real 0m1.042s 00:07:58.036 user 0m0.012s 00:07:58.036 sys 0m0.030s 00:07:58.036 18:34:58 unittest.unittest_log -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.036 18:34:58 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:07:58.036 ************************************ 00:07:58.036 END TEST unittest_log 00:07:58.036 ************************************ 00:07:58.036 18:34:58 unittest -- unit/unittest.sh@250 -- # run_test unittest_lvol 
/home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:07:58.036 18:34:58 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.036 18:34:58 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.036 18:34:58 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:58.036 ************************************ 00:07:58.036 START TEST unittest_lvol 00:07:58.036 ************************************ 00:07:58.036 18:34:58 unittest.unittest_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:07:58.036 00:07:58.036 00:07:58.036 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.036 http://cunit.sourceforge.net/ 00:07:58.036 00:07:58.036 00:07:58.036 Suite: lvol 00:07:58.036 Test: lvs_init_unload_success ...[2024-07-25 18:34:58.464916] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:07:58.036 passed 00:07:58.036 Test: lvs_init_destroy_success ...[2024-07-25 18:34:58.466029] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:07:58.036 passed 00:07:58.036 Test: lvs_init_opts_success ...passed 00:07:58.036 Test: lvs_unload_lvs_is_null_fail ...[2024-07-25 18:34:58.466434] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:07:58.036 passed 00:07:58.036 Test: lvs_names ...[2024-07-25 18:34:58.466633] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:07:58.036 [2024-07-25 18:34:58.466813] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:07:58.036 [2024-07-25 18:34:58.467139] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:07:58.036 passed 00:07:58.036 Test: lvol_create_destroy_success ...passed 00:07:58.036 Test: lvol_create_fail ...[2024-07-25 18:34:58.467946] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:07:58.036 [2024-07-25 18:34:58.468219] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:07:58.036 passed 00:07:58.036 Test: lvol_destroy_fail ...[2024-07-25 18:34:58.468730] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:07:58.036 passed 00:07:58.037 Test: lvol_close ...[2024-07-25 18:34:58.469093] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:07:58.037 [2024-07-25 18:34:58.469254] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:07:58.037 passed 00:07:58.037 Test: lvol_resize ...passed 00:07:58.037 Test: lvol_set_read_only ...passed 00:07:58.037 Test: test_lvs_load ...[2024-07-25 18:34:58.470494] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:07:58.037 [2024-07-25 18:34:58.470656] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:07:58.037 passed 00:07:58.037 Test: lvols_load ...[2024-07-25 18:34:58.471086] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:07:58.037 [2024-07-25 18:34:58.471376] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed 
to fetch blobs list 00:07:58.037 passed 00:07:58.037 Test: lvol_open ...passed 00:07:58.037 Test: lvol_snapshot ...passed 00:07:58.037 Test: lvol_snapshot_fail ...[2024-07-25 18:34:58.472396] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:07:58.037 passed 00:07:58.037 Test: lvol_clone ...passed 00:07:58.037 Test: lvol_clone_fail ...[2024-07-25 18:34:58.473212] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:07:58.037 passed 00:07:58.037 Test: lvol_iter_clones ...passed 00:07:58.037 Test: lvol_refcnt ...[2024-07-25 18:34:58.474018] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol acbc4957-259d-43df-a50a-ee1e21d8091f because it is still open 00:07:58.037 passed 00:07:58.037 Test: lvol_names ...[2024-07-25 18:34:58.474386] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:07:58.037 [2024-07-25 18:34:58.474606] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:58.037 [2024-07-25 18:34:58.475029] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:07:58.037 passed 00:07:58.037 Test: lvol_create_thin_provisioned ...passed 00:07:58.037 Test: lvol_rename ...[2024-07-25 18:34:58.475665] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:58.037 [2024-07-25 18:34:58.475904] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:07:58.037 passed 00:07:58.037 Test: lvs_rename ...[2024-07-25 18:34:58.476361] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:07:58.037 passed 00:07:58.037 Test: lvol_inflate ...[2024-07-25 18:34:58.476771] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:07:58.037 passed 00:07:58.037 Test: lvol_decouple_parent ...[2024-07-25 18:34:58.477192] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:07:58.037 passed 00:07:58.037 Test: lvol_get_xattr ...passed 00:07:58.037 Test: lvol_esnap_reload ...passed 00:07:58.037 Test: lvol_esnap_create_bad_args ...[2024-07-25 18:34:58.477859] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:07:58.037 [2024-07-25 18:34:58.478024] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:07:58.037 [2024-07-25 18:34:58.478219] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:07:58.037 [2024-07-25 18:34:58.478460] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:58.037 [2024-07-25 18:34:58.478718] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:07:58.037 passed 00:07:58.037 Test: lvol_esnap_create_delete ...passed 00:07:58.037 Test: lvol_esnap_load_esnaps ...[2024-07-25 18:34:58.479193] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:07:58.037 passed 00:07:58.037 Test: lvol_esnap_missing ...[2024-07-25 18:34:58.479491] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:07:58.037 [2024-07-25 18:34:58.479648] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:07:58.037 passed 00:07:58.037 Test: lvol_esnap_hotplug ... 00:07:58.037 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:07:58.037 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:07:58.037 [2024-07-25 18:34:58.480498] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol a090a5b8-761b-40c0-9363-7ddf3af87f7b: failed to create esnap bs_dev: error -12 00:07:58.037 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:07:58.037 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:07:58.037 [2024-07-25 18:34:58.480956] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol b9cd6de4-c425-4b04-b1cb-8455b2915bc9: failed to create esnap bs_dev: error -12 00:07:58.037 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:07:58.037 [2024-07-25 18:34:58.481206] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 20820bfc-49e3-4a57-902d-43c3580429b6: failed to create esnap bs_dev: error -12 00:07:58.037 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:07:58.037 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:07:58.037 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:07:58.037 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:07:58.037 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:07:58.037 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:07:58.037 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:07:58.037 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:07:58.037 passed 00:07:58.037 Test: lvol_get_by ...passed 00:07:58.037 Test: lvol_shallow_copy ...[2024-07-25 18:34:58.482545] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:07:58.037 [2024-07-25 18:34:58.482708] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol b475b1a8-c741-4bf1-9766-9b52884fa5f9 shallow copy, ext_dev must not 
be NULL 00:07:58.037 passed 00:07:58.037 Test: lvol_set_parent ...[2024-07-25 18:34:58.483118] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL 00:07:58.037 [2024-07-25 18:34:58.483277] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL 00:07:58.037 passed 00:07:58.037 Test: lvol_set_external_parent ...[2024-07-25 18:34:58.483729] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL 00:07:58.037 [2024-07-25 18:34:58.483887] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL 00:07:58.037 [2024-07-25 18:34:58.484075] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID 00:07:58.037 passed 00:07:58.037 00:07:58.037 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.037 suites 1 1 n/a 0 0 00:07:58.037 tests 37 37 37 0 0 00:07:58.037 asserts 1505 1505 1505 0 n/a 00:07:58.037 00:07:58.037 Elapsed time = 0.015 seconds 00:07:58.037 00:07:58.037 real 0m0.073s 00:07:58.037 user 0m0.034s 00:07:58.037 sys 0m0.034s 00:07:58.037 18:34:58 unittest.unittest_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.037 18:34:58 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:58.037 ************************************ 00:07:58.037 END TEST unittest_lvol 00:07:58.037 ************************************ 00:07:58.037 18:34:58 unittest -- unit/unittest.sh@251 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:58.037 18:34:58 unittest -- unit/unittest.sh@252 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:07:58.037 18:34:58 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.037 18:34:58 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.037 18:34:58 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:58.037 ************************************ 00:07:58.037 START TEST unittest_nvme_rdma 00:07:58.037 ************************************ 00:07:58.037 18:34:58 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:07:58.037 00:07:58.037 00:07:58.037 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.037 http://cunit.sourceforge.net/ 00:07:58.037 00:07:58.037 00:07:58.037 Suite: nvme_rdma 00:07:58.037 Test: test_nvme_rdma_build_sgl_request ...[2024-07-25 18:34:58.599469] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:07:58.037 [2024-07-25 18:34:58.599978] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1552:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:07:58.037 [2024-07-25 18:34:58.600204] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1608:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:07:58.037 passed 00:07:58.037 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:07:58.037 Test: test_nvme_rdma_build_contig_request ...[2024-07-25 18:34:58.600696] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1489:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 
00:07:58.037 passed 00:07:58.037 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:07:58.037 Test: test_nvme_rdma_create_reqs ...[2024-07-25 18:34:58.601279] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 931:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:07:58.037 passed 00:07:58.037 Test: test_nvme_rdma_create_rsps ...[2024-07-25 18:34:58.601977] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 849:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:07:58.037 passed 00:07:58.037 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-25 18:34:58.602481] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1746:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:07:58.037 [2024-07-25 18:34:58.602684] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1746:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:07:58.037 passed 00:07:58.037 Test: test_nvme_rdma_poller_create ...passed 00:07:58.038 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-07-25 18:34:58.603324] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 450:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:07:58.038 passed 00:07:58.038 Test: test_nvme_rdma_ctrlr_construct ...passed 00:07:58.038 Test: test_nvme_rdma_req_put_and_get ...passed 00:07:58.038 Test: test_nvme_rdma_req_init ...passed 00:07:58.038 Test: test_nvme_rdma_validate_cm_event ...[2024-07-25 18:34:58.604368] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:07:58.038 [2024-07-25 18:34:58.604520] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:07:58.038 passed 00:07:58.038 Test: test_nvme_rdma_qpair_init ...passed 00:07:58.038 Test: test_nvme_rdma_qpair_submit_request ...passed 00:07:58.038 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:07:58.038 Test: test_rdma_get_memory_translation ...[2024-07-25 18:34:58.605255] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1368:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:07:58.038 [2024-07-25 18:34:58.605440] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:07:58.038 passed 00:07:58.038 Test: test_get_rdma_qpair_from_wc ...passed 00:07:58.038 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:07:58.038 Test: test_nvme_rdma_poll_group_get_stats ...[2024-07-25 18:34:58.606106] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:58.038 [2024-07-25 18:34:58.606260] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:58.038 passed 00:07:58.038 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-25 18:34:58.606578] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:07:58.038 [2024-07-25 18:34:58.606744] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:07:58.038 [2024-07-25 18:34:58.606925] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffd93fcf240 on poll group 0x60c000000040 00:07:58.038 [2024-07-25 18:34:58.607083] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:07:58.038 [2024-07-25 18:34:58.607272] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:07:58.038 [2024-07-25 18:34:58.607470] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffd93fcf240 on poll group 0x60c000000040 00:07:58.038 [2024-07-25 18:34:58.607693] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 625:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:07:58.297 passed 00:07:58.297 00:07:58.297 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.297 suites 1 1 n/a 0 0 00:07:58.297 tests 21 21 21 0 0 00:07:58.297 asserts 397 397 397 0 n/a 00:07:58.297 00:07:58.297 Elapsed time = 0.004 seconds 00:07:58.297 00:07:58.297 real 0m0.050s 00:07:58.297 user 0m0.014s 00:07:58.297 sys 0m0.031s 00:07:58.297 18:34:58 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.297 18:34:58 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:58.297 ************************************ 00:07:58.297 END TEST unittest_nvme_rdma 00:07:58.297 ************************************ 00:07:58.297 18:34:58 unittest -- unit/unittest.sh@253 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:07:58.297 18:34:58 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.297 18:34:58 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.297 18:34:58 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:58.297 ************************************ 00:07:58.297 START TEST unittest_nvmf_transport 00:07:58.297 ************************************ 00:07:58.297 18:34:58 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:07:58.297 00:07:58.297 00:07:58.297 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.297 http://cunit.sourceforge.net/ 00:07:58.297 00:07:58.297 00:07:58.297 Suite: nvmf 00:07:58.297 Test: test_spdk_nvmf_transport_create ...[2024-07-25 18:34:58.725998] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 
00:07:58.297 [2024-07-25 18:34:58.726870] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:07:58.297 [2024-07-25 18:34:58.727089] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 275:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:07:58.297 [2024-07-25 18:34:58.727412] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 258:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:07:58.297 passed 00:07:58.297 Test: test_nvmf_transport_poll_group_create ...passed 00:07:58.297 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-25 18:34:58.727866] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 799:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 00:07:58.297 [2024-07-25 18:34:58.728086] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 804:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:07:58.297 [2024-07-25 18:34:58.728252] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 809:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:07:58.297 passed 00:07:58.297 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:07:58.297 00:07:58.297 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.297 suites 1 1 n/a 0 0 00:07:58.297 tests 4 4 4 0 0 00:07:58.297 asserts 49 49 49 0 n/a 00:07:58.297 00:07:58.297 Elapsed time = 0.002 seconds 00:07:58.297 00:07:58.297 real 0m0.055s 00:07:58.297 user 0m0.032s 00:07:58.297 sys 0m0.022s 00:07:58.297 18:34:58 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.298 18:34:58 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:07:58.298 ************************************ 00:07:58.298 END TEST unittest_nvmf_transport 00:07:58.298 ************************************ 00:07:58.298 18:34:58 unittest -- unit/unittest.sh@254 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:07:58.298 18:34:58 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.298 18:34:58 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.298 18:34:58 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:58.298 ************************************ 00:07:58.298 START TEST unittest_rdma 00:07:58.298 ************************************ 00:07:58.298 18:34:58 unittest.unittest_rdma -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:07:58.298 00:07:58.298 00:07:58.298 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.298 http://cunit.sourceforge.net/ 00:07:58.298 00:07:58.298 00:07:58.298 Suite: rdma_common 00:07:58.298 Test: test_spdk_rdma_pd ...[2024-07-25 18:34:58.844133] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:07:58.298 passed 00:07:58.298 00:07:58.298 [2024-07-25 18:34:58.844575] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:07:58.298 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.298 suites 1 1 n/a 0 0 00:07:58.298 tests 1 1 1 0 0 00:07:58.298 asserts 31 31 31 0 n/a 00:07:58.298 00:07:58.298 Elapsed time = 0.001 seconds 00:07:58.298 00:07:58.298 real 0m0.041s 00:07:58.298 user 0m0.018s 00:07:58.298 sys 0m0.023s 00:07:58.298 18:34:58 
unittest.unittest_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.298 18:34:58 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:58.298 ************************************ 00:07:58.298 END TEST unittest_rdma 00:07:58.298 ************************************ 00:07:58.557 18:34:58 unittest -- unit/unittest.sh@257 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:58.557 18:34:58 unittest -- unit/unittest.sh@258 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:07:58.557 18:34:58 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.557 18:34:58 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.557 18:34:58 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:58.557 ************************************ 00:07:58.557 START TEST unittest_nvme_cuse 00:07:58.557 ************************************ 00:07:58.557 18:34:58 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:07:58.557 00:07:58.557 00:07:58.557 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.557 http://cunit.sourceforge.net/ 00:07:58.557 00:07:58.557 00:07:58.557 Suite: nvme_cuse 00:07:58.557 Test: test_cuse_nvme_submit_io_read_write ...passed 00:07:58.557 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:07:58.557 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:07:58.557 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:07:58.557 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:07:58.557 Test: test_cuse_nvme_submit_io ...[2024-07-25 18:34:58.963093] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:07:58.557 passed 00:07:58.557 Test: test_cuse_nvme_reset ...[2024-07-25 18:34:58.963641] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:07:58.557 passed 00:07:59.493 Test: test_nvme_cuse_stop ...passed 00:07:59.493 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:07:59.493 00:07:59.493 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.493 suites 1 1 n/a 0 0 00:07:59.493 tests 9 9 9 0 0 00:07:59.493 asserts 118 118 118 0 n/a 00:07:59.493 00:07:59.493 Elapsed time = 1.006 seconds 00:07:59.493 00:07:59.493 real 0m1.049s 00:07:59.493 user 0m0.502s 00:07:59.493 sys 0m0.547s 00:07:59.493 18:34:59 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.493 18:34:59 unittest.unittest_nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:07:59.493 ************************************ 00:07:59.493 END TEST unittest_nvme_cuse 00:07:59.493 ************************************ 00:07:59.493 18:35:00 unittest -- unit/unittest.sh@261 -- # run_test unittest_nvmf unittest_nvmf 00:07:59.493 18:35:00 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:59.493 18:35:00 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.493 18:35:00 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:59.493 ************************************ 00:07:59.493 START TEST unittest_nvmf 00:07:59.493 ************************************ 00:07:59.493 18:35:00 unittest.unittest_nvmf -- common/autotest_common.sh@1125 -- # unittest_nvmf 00:07:59.493 18:35:00 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:07:59.753 00:07:59.754 00:07:59.754 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.754 http://cunit.sourceforge.net/ 00:07:59.754 00:07:59.754 00:07:59.754 Suite: nvmf 00:07:59.754 Test: test_get_log_page ...[2024-07-25 18:35:00.087741] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:07:59.754 passed 00:07:59.754 Test: test_process_fabrics_cmd ...[2024-07-25 18:35:00.088155] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4741:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:07:59.754 passed 00:07:59.754 Test: test_connect ...[2024-07-25 18:35:00.088863] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1012:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:07:59.754 [2024-07-25 18:35:00.088997] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 875:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:07:59.754 [2024-07-25 18:35:00.089047] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1051:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:07:59.754 [2024-07-25 18:35:00.089107] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:07:59.754 [2024-07-25 18:35:00.089216] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 886:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:07:59.754 [2024-07-25 18:35:00.089291] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 893:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:07:59.754 [2024-07-25 18:35:00.089339] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 899:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:07:59.754 [2024-07-25 18:35:00.089396] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 926:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 
00:07:59.754 [2024-07-25 18:35:00.089522] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:07:59.754 [2024-07-25 18:35:00.089620] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:07:59.754 [2024-07-25 18:35:00.089997] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:07:59.754 [2024-07-25 18:35:00.090115] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 688:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:07:59.754 [2024-07-25 18:35:00.090208] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 695:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:07:59.754 [2024-07-25 18:35:00.090298] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 719:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:07:59.754 [2024-07-25 18:35:00.090415] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 294:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 (cntlid:0) 00:07:59.754 [2024-07-25 18:35:00.090599] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group (nil)) 00:07:59.754 passed 00:07:59.754 Test: test_get_ns_id_desc_list ...[2024-07-25 18:35:00.090676] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil)) 00:07:59.754 passed 00:07:59.754 Test: test_identify_ns ...[2024-07-25 18:35:00.090974] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:59.754 [2024-07-25 18:35:00.091271] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:07:59.754 [2024-07-25 18:35:00.091410] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:07:59.754 passed 00:07:59.754 Test: test_identify_ns_iocs_specific ...[2024-07-25 18:35:00.091579] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:59.754 [2024-07-25 18:35:00.091879] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:59.754 passed 00:07:59.754 Test: test_reservation_write_exclusive ...passed 00:07:59.754 Test: test_reservation_exclusive_access ...passed 00:07:59.754 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:07:59.754 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:07:59.754 Test: test_reservation_notification_log_page ...passed 00:07:59.754 Test: test_get_dif_ctx ...passed 00:07:59.754 Test: test_set_get_features ...[2024-07-25 18:35:00.092569] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:59.754 [2024-07-25 18:35:00.092660] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:59.754 [2024-07-25 18:35:00.092715] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1659:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:07:59.754 passed 00:07:59.754 Test: test_identify_ctrlr ...passed 00:07:59.754 Test: test_identify_ctrlr_iocs_specific ...[2024-07-25 18:35:00.092761] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1735:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:07:59.754 passed 00:07:59.754 Test: test_custom_admin_cmd ...passed 00:07:59.754 Test: test_fused_compare_and_write ...[2024-07-25 18:35:00.093255] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4249:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:07:59.754 [2024-07-25 18:35:00.093308] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4238:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:07:59.754 passed 00:07:59.754 Test: test_multi_async_event_reqs ...passed 00:07:59.754 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:07:59.754 Test: test_get_ana_log_page_multi_ns_per_anagrp ...[2024-07-25 18:35:00.093368] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4256:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:07:59.754 passed 00:07:59.754 Test: test_multi_async_events ...passed 00:07:59.754 Test: test_rae ...passed 00:07:59.754 Test: test_nvmf_ctrlr_create_destruct ...passed 00:07:59.754 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:07:59.754 Test: test_spdk_nvmf_request_zcopy_start ...[2024-07-25 18:35:00.094011] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4741:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:07:59.754 passed 00:07:59.754 Test: test_zcopy_read ...passed 00:07:59.754 Test: test_zcopy_write ...[2024-07-25 18:35:00.094083] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4767:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:07:59.754 passed 00:07:59.754 Test: test_nvmf_property_set ...passed 00:07:59.754 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-07-25 18:35:00.094265] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:07:59.754 [2024-07-25 18:35:00.094306] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:07:59.754 passed 00:07:59.754 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-07-25 18:35:00.094361] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1970:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:07:59.754 [2024-07-25 18:35:00.094399] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1976:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:07:59.754 passed 00:07:59.754 Test: test_nvmf_ctrlr_ns_attachment ...[2024-07-25 18:35:00.094486] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1988:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:07:59.754 [2024-07-25 18:35:00.094530] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1988:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:07:59.754 passed 00:07:59.754 Test: test_nvmf_check_qpair_active ...[2024-07-25 18:35:00.094706] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4741:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:07:59.754 [2024-07-25 18:35:00.094755] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4755:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:07:59.754 [2024-07-25 18:35:00.094810] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4767:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:07:59.754 [2024-07-25 18:35:00.094857] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4767:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:07:59.754 passed 00:07:59.754 00:07:59.754 [2024-07-25 18:35:00.094918] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4767:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:07:59.754 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.754 suites 1 1 n/a 0 0 00:07:59.754 tests 32 32 32 0 0 00:07:59.754 asserts 983 983 983 0 n/a 00:07:59.754 00:07:59.754 Elapsed time = 0.007 seconds 00:07:59.754 18:35:00 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:07:59.754 00:07:59.754 00:07:59.754 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.754 http://cunit.sourceforge.net/ 00:07:59.754 00:07:59.754 00:07:59.754 Suite: nvmf 00:07:59.754 Test: test_get_rw_params ...passed 00:07:59.754 Test: test_get_rw_ext_params ...passed 00:07:59.754 Test: test_lba_in_range ...passed 00:07:59.754 Test: test_get_dif_ctx ...passed 00:07:59.754 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:07:59.754 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-25 18:35:00.144826] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:07:59.754 passed 00:07:59.754 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-07-25 18:35:00.145201] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:07:59.754 [2024-07-25 18:35:00.145315] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 462:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:07:59.754 [2024-07-25 18:35:00.145381] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:07:59.754 passed 00:07:59.754 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-25 18:35:00.145476] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 972:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:07:59.755 [2024-07-25 18:35:00.145614] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:07:59.755 [2024-07-25 18:35:00.145659] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 408:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:07:59.755 [2024-07-25 18:35:00.145751] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:07:59.755 passed 00:07:59.755 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:07:59.755 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:07:59.755 00:07:59.755 [2024-07-25 18:35:00.145809] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:07:59.755 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.755 suites 1 1 n/a 0 0 00:07:59.755 tests 10 10 10 0 0 00:07:59.755 asserts 159 159 159 0 n/a 00:07:59.755 00:07:59.755 Elapsed time = 0.001 seconds 00:07:59.755 18:35:00 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:07:59.755 00:07:59.755 00:07:59.755 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.755 http://cunit.sourceforge.net/ 00:07:59.755 00:07:59.755 00:07:59.755 Suite: nvmf 00:07:59.755 Test: test_discovery_log ...passed 00:07:59.755 Test: test_discovery_log_with_filters ...passed 00:07:59.755 00:07:59.755 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.755 suites 1 1 n/a 0 0 00:07:59.755 tests 2 2 2 0 0 00:07:59.755 asserts 238 238 238 0 n/a 00:07:59.755 00:07:59.755 Elapsed time = 0.003 seconds 00:07:59.755 18:35:00 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:07:59.755 00:07:59.755 00:07:59.755 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.755 http://cunit.sourceforge.net/ 00:07:59.755 00:07:59.755 00:07:59.755 Suite: nvmf 00:07:59.755 Test: nvmf_test_create_subsystem ...[2024-07-25 18:35:00.243388] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:07:59.755 [2024-07-25 18:35:00.243685] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:07:59.755 [2024-07-25 18:35:00.243864] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:07:59.755 [2024-07-25 18:35:00.243989] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:07:59.755 [2024-07-25 18:35:00.244037] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:07:59.755 [2024-07-25 18:35:00.244105] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:07:59.755 [2024-07-25 18:35:00.244203] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:07:59.755 [2024-07-25 18:35:00.244259] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:07:59.755 [2024-07-25 18:35:00.244301] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:07:59.755 [2024-07-25 18:35:00.244349] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:07:59.755 [2024-07-25 18:35:00.244393] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 
00:07:59.755 [2024-07-25 18:35:00.244444] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:07:59.755 [2024-07-25 18:35:00.244574] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:07:59.755 [2024-07-25 18:35:00.244696] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:07:59.755 [2024-07-25 18:35:00.244808] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 00:07:59.755 [2024-07-25 18:35:00.244863] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:07:59.755 [2024-07-25 18:35:00.244974] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:07:59.755 [2024-07-25 18:35:00.245024] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:07:59.755 [2024-07-25 18:35:00.245069] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:07:59.755 [2024-07-25 18:35:00.245132] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:07:59.755 [2024-07-25 18:35:00.245181] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:07:59.755 passed 00:07:59.755 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-25 18:35:00.245228] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:07:59.755 [2024-07-25 18:35:00.245434] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:07:59.755 [2024-07-25 18:35:00.245479] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2031:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:07:59.755 passed 00:07:59.755 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...[2024-07-25 18:35:00.245805] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2161:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 
00:07:59.755 passed 00:07:59.755 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:07:59.755 Test: test_spdk_nvmf_ns_visible ...[2024-07-25 18:35:00.246062] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:07:59.755 passed 00:07:59.755 Test: test_reservation_register ...[2024-07-25 18:35:00.246543] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:59.755 [2024-07-25 18:35:00.246676] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3164:nvmf_ns_reservation_register: *ERROR*: No registrant 00:07:59.755 passed 00:07:59.755 Test: test_reservation_register_with_ptpl ...passed 00:07:59.755 Test: test_reservation_acquire_preempt_1 ...[2024-07-25 18:35:00.247807] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:59.755 passed 00:07:59.755 Test: test_reservation_acquire_release_with_ptpl ...passed 00:07:59.755 Test: test_reservation_release ...[2024-07-25 18:35:00.249630] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:59.755 passed 00:07:59.755 Test: test_reservation_unregister_notification ...[2024-07-25 18:35:00.250084] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:59.755 passed 00:07:59.755 Test: test_reservation_release_notification ...[2024-07-25 18:35:00.250326] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:59.755 passed 00:07:59.755 Test: test_reservation_release_notification_write_exclusive ...[2024-07-25 18:35:00.250567] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:59.755 passed 00:07:59.755 Test: test_reservation_clear_notification ...[2024-07-25 18:35:00.250835] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:59.755 passed 00:07:59.755 Test: test_reservation_preempt_notification ...[2024-07-25 18:35:00.251127] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3106:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:59.755 passed 00:07:59.755 Test: test_spdk_nvmf_ns_event ...passed 00:07:59.755 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:07:59.755 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:07:59.755 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-25 18:35:00.251997] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 264:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:07:59.755 passed 00:07:59.755 Test: test_nvmf_ns_reservation_report ...[2024-07-25 18:35:00.252090] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:07:59.755 [2024-07-25 18:35:00.252224] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3469:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:07:59.755 passed 00:07:59.755 Test: test_nvmf_nqn_is_valid ...[2024-07-25 
18:35:00.252316] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:07:59.755 [2024-07-25 18:35:00.252393] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:4489fde1-24ec-4f0e-9b09-de54f999ce0": uuid is not the correct length 00:07:59.755 [2024-07-25 18:35:00.252433] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:07:59.755 passed 00:07:59.755 Test: test_nvmf_ns_reservation_restore ...passed 00:07:59.755 Test: test_nvmf_subsystem_state_change ...[2024-07-25 18:35:00.252570] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2663:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:07:59.755 passed 00:07:59.755 Test: test_nvmf_reservation_custom_ops ...passed 00:07:59.755 00:07:59.755 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.756 suites 1 1 n/a 0 0 00:07:59.756 tests 24 24 24 0 0 00:07:59.756 asserts 499 499 499 0 n/a 00:07:59.756 00:07:59.756 Elapsed time = 0.010 seconds 00:07:59.756 18:35:00 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:07:59.756 00:07:59.756 00:07:59.756 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.756 http://cunit.sourceforge.net/ 00:07:59.756 00:07:59.756 00:07:59.756 Suite: nvmf 00:08:00.016 Test: test_nvmf_tcp_create ...[2024-07-25 18:35:00.335207] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 750:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:08:00.016 passed 00:08:00.016 Test: test_nvmf_tcp_destroy ...passed 00:08:00.016 Test: test_nvmf_tcp_poll_group_create ...passed 00:08:00.016 Test: test_nvmf_tcp_send_c2h_data ...passed 00:08:00.016 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:08:00.016 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:08:00.016 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:08:00.016 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-25 18:35:00.471208] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:00.016 [2024-07-25 18:35:00.471327] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1c714e30 is same with the state(5) to be set 00:08:00.016 [2024-07-25 18:35:00.471447] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1c714e30 is same with the state(5) to be set 00:08:00.016 [2024-07-25 18:35:00.471505] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:00.016 [2024-07-25 18:35:00.471543] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1c714e30 is same with the state(5) to be set 00:08:00.016 passed 00:08:00.016 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:08:00.016 Test: test_nvmf_tcp_icreq_handle ...[2024-07-25 18:35:00.471659] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2168:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:00.016 [2024-07-25 18:35:00.471768] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, 
errno=2 00:08:00.016 [2024-07-25 18:35:00.471848] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1c714e30 is same with the state(5) to be set 00:08:00.016 [2024-07-25 18:35:00.471894] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2168:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:00.016 [2024-07-25 18:35:00.471949] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1c714e30 is same with the state(5) to be set 00:08:00.016 [2024-07-25 18:35:00.471995] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:00.016 [2024-07-25 18:35:00.472047] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1c714e30 is same with the state(5) to be set 00:08:00.016 [2024-07-25 18:35:00.472104] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:08:00.016 passed 00:08:00.016 Test: test_nvmf_tcp_check_xfer_type ...passed 00:08:00.016 Test: test_nvmf_tcp_invalid_sgl ...[2024-07-25 18:35:00.472179] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1c714e30 is same with the state(5) to be set 00:08:00.016 [2024-07-25 18:35:00.472263] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2563:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:08:00.016 [2024-07-25 18:35:00.472321] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:00.016 passed 00:08:00.016 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-25 18:35:00.472372] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1c714e30 is same with the state(5) to be set 00:08:00.016 [2024-07-25 18:35:00.472432] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2295:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffc1c715b90 00:08:00.016 [2024-07-25 18:35:00.472542] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:00.016 [2024-07-25 18:35:00.472611] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1c7152f0 is same with the state(5) to be set 00:08:00.016 [2024-07-25 18:35:00.472672] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2352:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffc1c7152f0 00:08:00.016 [2024-07-25 18:35:00.472721] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:00.016 [2024-07-25 18:35:00.472765] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1c7152f0 is same with the state(5) to be set 00:08:00.016 [2024-07-25 18:35:00.472811] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2305:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:08:00.016 [2024-07-25 18:35:00.472853] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:00.016 [2024-07-25 18:35:00.472917] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1c7152f0 is same with the state(5) to be set 00:08:00.016 [2024-07-25 18:35:00.472976] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2344:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:08:00.016 [2024-07-25 18:35:00.473015] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:00.016 [2024-07-25 18:35:00.473076] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1c7152f0 is same with the state(5) to be set 00:08:00.016 [2024-07-25 18:35:00.473124] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:00.016 [2024-07-25 18:35:00.473169] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1c7152f0 is same with the state(5) to be set 00:08:00.016 [2024-07-25 18:35:00.473245] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:00.016 [2024-07-25 18:35:00.473281] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1c7152f0 is same with the state(5) to be set 00:08:00.016 [2024-07-25 18:35:00.473347] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:00.016 [2024-07-25 18:35:00.473393] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1c7152f0 is same with the state(5) to be set 00:08:00.016 [2024-07-25 18:35:00.473448] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:00.016 [2024-07-25 18:35:00.473483] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1c7152f0 is same with the state(5) to be set 00:08:00.016 [2024-07-25 18:35:00.473555] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:00.016 [2024-07-25 18:35:00.473602] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1c7152f0 is same with the state(5) to be set 00:08:00.016 passed 00:08:00.016 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-07-25 18:35:00.473664] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1126:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:00.016 [2024-07-25 18:35:00.473708] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1c7152f0 is same with the state(5) to be set 00:08:00.016 passed 00:08:00.016 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-07-25 18:35:00.504021] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 
00:08:00.016 passed 00:08:00.016 Test: test_nvmf_tcp_tls_generate_retained_psk ...passed 00:08:00.016 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed 00:08:00.016 00:08:00.016 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.016 suites 1 1 n/a 0 0 00:08:00.016 tests 17 17 17 0 0 00:08:00.016 asserts 222 222 222 0 n/a 00:08:00.016 00:08:00.016 Elapsed time = 0.201 seconds 00:08:00.016 [2024-07-25 18:35:00.504123] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:08:00.016 [2024-07-25 18:35:00.504566] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:08:00.016 [2024-07-25 18:35:00.504633] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:08:00.016 [2024-07-25 18:35:00.504885] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:08:00.016 [2024-07-25 18:35:00.504959] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:08:00.275 18:35:00 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:08:00.275 00:08:00.275 00:08:00.275 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.275 http://cunit.sourceforge.net/ 00:08:00.275 00:08:00.275 00:08:00.275 Suite: nvmf 00:08:00.275 Test: test_nvmf_tgt_create_poll_group ...passed 00:08:00.275 00:08:00.275 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.275 suites 1 1 n/a 0 0 00:08:00.275 tests 1 1 1 0 0 00:08:00.275 asserts 17 17 17 0 n/a 00:08:00.275 00:08:00.275 Elapsed time = 0.029 seconds 00:08:00.275 00:08:00.275 real 0m0.665s 00:08:00.275 user 0m0.293s 00:08:00.275 sys 0m0.372s 00:08:00.275 18:35:00 unittest.unittest_nvmf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.275 ************************************ 00:08:00.275 END TEST unittest_nvmf 00:08:00.275 18:35:00 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:08:00.275 ************************************ 00:08:00.276 18:35:00 unittest -- unit/unittest.sh@262 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:00.276 18:35:00 unittest -- unit/unittest.sh@267 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:00.276 18:35:00 unittest -- unit/unittest.sh@268 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:00.276 18:35:00 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:00.276 18:35:00 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.276 18:35:00 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:00.276 ************************************ 00:08:00.276 START TEST unittest_nvmf_rdma 00:08:00.276 ************************************ 00:08:00.276 18:35:00 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:00.276 00:08:00.276 00:08:00.276 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.276 http://cunit.sourceforge.net/ 00:08:00.276 00:08:00.276 00:08:00.276 Suite: nvmf 00:08:00.276 Test: test_spdk_nvmf_rdma_request_parse_sgl 
...[2024-07-25 18:35:00.832728] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1863:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:08:00.276 [2024-07-25 18:35:00.833107] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1913:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:08:00.276 [2024-07-25 18:35:00.833178] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1913:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:08:00.276 passed 00:08:00.276 Test: test_spdk_nvmf_rdma_request_process ...passed 00:08:00.276 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:08:00.276 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:08:00.276 Test: test_nvmf_rdma_opts_init ...passed 00:08:00.276 Test: test_nvmf_rdma_request_free_data ...passed 00:08:00.276 Test: test_nvmf_rdma_resources_create ...passed 00:08:00.276 Test: test_nvmf_rdma_qpair_compare ...passed 00:08:00.276 Test: test_nvmf_rdma_resize_cq ...[2024-07-25 18:35:00.836280] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 954:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:08:00.276 Using CQ of insufficient size may lead to CQ overrun 00:08:00.276 [2024-07-25 18:35:00.836407] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 959:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:08:00.276 passed 00:08:00.276 00:08:00.276 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.276 suites 1 1 n/a 0 0 00:08:00.276 tests 9 9 9 0 0 00:08:00.276 asserts 579 579 579 0 n/a 00:08:00.276 00:08:00.276 Elapsed time = 0.004 seconds 00:08:00.276 [2024-07-25 18:35:00.836486] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 967:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:00.535 00:08:00.535 real 0m0.058s 00:08:00.535 user 0m0.017s 00:08:00.535 sys 0m0.041s 00:08:00.535 18:35:00 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.535 18:35:00 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:00.535 ************************************ 00:08:00.535 END TEST unittest_nvmf_rdma 00:08:00.535 ************************************ 00:08:00.535 18:35:00 unittest -- unit/unittest.sh@271 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:00.535 18:35:00 unittest -- unit/unittest.sh@275 -- # run_test unittest_scsi unittest_scsi 00:08:00.535 18:35:00 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:00.535 18:35:00 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.535 18:35:00 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:00.535 ************************************ 00:08:00.535 START TEST unittest_scsi 00:08:00.535 ************************************ 00:08:00.535 18:35:00 unittest.unittest_scsi -- common/autotest_common.sh@1125 -- # unittest_scsi 00:08:00.535 18:35:00 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:08:00.535 00:08:00.535 00:08:00.535 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.535 http://cunit.sourceforge.net/ 00:08:00.535 00:08:00.535 00:08:00.535 Suite: dev_suite 00:08:00.535 Test: dev_destruct_null_dev ...passed 00:08:00.535 Test: dev_destruct_zero_luns ...passed 00:08:00.535 Test: dev_destruct_null_lun 
...passed 00:08:00.535 Test: dev_destruct_success ...passed 00:08:00.535 Test: dev_construct_num_luns_zero ...[2024-07-25 18:35:00.954037] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:08:00.535 passed 00:08:00.535 Test: dev_construct_no_lun_zero ...[2024-07-25 18:35:00.954426] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:08:00.535 passed 00:08:00.535 Test: dev_construct_null_lun ...passed 00:08:00.535 Test: dev_construct_name_too_long ...passed 00:08:00.535 Test: dev_construct_success ...passed 00:08:00.535 Test: dev_construct_success_lun_zero_not_first ...[2024-07-25 18:35:00.954489] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:08:00.535 [2024-07-25 18:35:00.954550] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:08:00.535 passed 00:08:00.535 Test: dev_queue_mgmt_task_success ...passed 00:08:00.535 Test: dev_queue_task_success ...passed 00:08:00.535 Test: dev_stop_success ...passed 00:08:00.535 Test: dev_add_port_max_ports ...[2024-07-25 18:35:00.954930] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:08:00.535 passed 00:08:00.535 Test: dev_add_port_construct_failure1 ...passed 00:08:00.535 Test: dev_add_port_construct_failure2 ...[2024-07-25 18:35:00.955055] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:08:00.535 [2024-07-25 18:35:00.955171] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:08:00.535 passed 00:08:00.535 Test: dev_add_port_success1 ...passed 00:08:00.535 Test: dev_add_port_success2 ...passed 00:08:00.535 Test: dev_add_port_success3 ...passed 00:08:00.535 Test: dev_find_port_by_id_num_ports_zero ...passed 00:08:00.535 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:08:00.535 Test: dev_find_port_by_id_success ...passed 00:08:00.535 Test: dev_add_lun_bdev_not_found ...passed 00:08:00.535 Test: dev_add_lun_no_free_lun_id ...[2024-07-25 18:35:00.955696] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:08:00.535 passed 00:08:00.535 Test: dev_add_lun_success1 ...passed 00:08:00.535 Test: dev_add_lun_success2 ...passed 00:08:00.535 Test: dev_check_pending_tasks ...passed 00:08:00.535 Test: dev_iterate_luns ...passed 00:08:00.535 Test: dev_find_free_lun ...passed 00:08:00.535 00:08:00.535 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.535 suites 1 1 n/a 0 0 00:08:00.535 tests 29 29 29 0 0 00:08:00.535 asserts 97 97 97 0 n/a 00:08:00.535 00:08:00.535 Elapsed time = 0.002 seconds 00:08:00.535 18:35:00 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:08:00.535 00:08:00.535 00:08:00.535 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.536 http://cunit.sourceforge.net/ 00:08:00.536 00:08:00.536 00:08:00.536 Suite: lun_suite 00:08:00.536 Test: 
lun_task_mgmt_execute_abort_task_not_supported ...[2024-07-25 18:35:01.004880] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:08:00.536 passed 00:08:00.536 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-07-25 18:35:01.005580] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:08:00.536 passed 00:08:00.536 Test: lun_task_mgmt_execute_lun_reset ...passed 00:08:00.536 Test: lun_task_mgmt_execute_target_reset ...passed 00:08:00.536 Test: lun_task_mgmt_execute_invalid_case ...[2024-07-25 18:35:01.006061] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:08:00.536 passed 00:08:00.536 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:08:00.536 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:08:00.536 Test: lun_append_task_null_lun_not_supported ...passed 00:08:00.536 Test: lun_execute_scsi_task_pending ...passed 00:08:00.536 Test: lun_execute_scsi_task_complete ...passed 00:08:00.536 Test: lun_execute_scsi_task_resize ...passed 00:08:00.536 Test: lun_destruct_success ...passed 00:08:00.536 Test: lun_construct_null_ctx ...[2024-07-25 18:35:01.007180] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:08:00.536 passed 00:08:00.536 Test: lun_construct_success ...passed 00:08:00.536 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:08:00.536 Test: lun_reset_task_suspend_scsi_task ...passed 00:08:00.536 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:08:00.536 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:08:00.536 00:08:00.536 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.536 suites 1 1 n/a 0 0 00:08:00.536 tests 18 18 18 0 0 00:08:00.536 asserts 153 153 153 0 n/a 00:08:00.536 00:08:00.536 Elapsed time = 0.003 seconds 00:08:00.536 18:35:01 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:08:00.536 00:08:00.536 00:08:00.536 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.536 http://cunit.sourceforge.net/ 00:08:00.536 00:08:00.536 00:08:00.536 Suite: scsi_suite 00:08:00.536 Test: scsi_init ...passed 00:08:00.536 00:08:00.536 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.536 suites 1 1 n/a 0 0 00:08:00.536 tests 1 1 1 0 0 00:08:00.536 asserts 1 1 1 0 n/a 00:08:00.536 00:08:00.536 Elapsed time = 0.000 seconds 00:08:00.536 18:35:01 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:08:00.536 00:08:00.536 00:08:00.536 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.536 http://cunit.sourceforge.net/ 00:08:00.536 00:08:00.536 00:08:00.536 Suite: translation_suite 00:08:00.536 Test: mode_select_6_test ...passed 00:08:00.536 Test: mode_select_6_test2 ...passed 00:08:00.536 Test: mode_sense_6_test ...passed 00:08:00.536 Test: mode_sense_10_test ...passed 00:08:00.536 Test: inquiry_evpd_test ...passed 00:08:00.536 Test: inquiry_standard_test ...passed 00:08:00.536 Test: inquiry_overflow_test ...passed 00:08:00.536 Test: task_complete_test ...passed 00:08:00.536 Test: lba_range_test ...passed 00:08:00.536 Test: xfer_len_test ...[2024-07-25 18:35:01.103926] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > 
maximum transfer length 8192 00:08:00.536 passed 00:08:00.536 Test: xfer_test ...passed 00:08:00.536 Test: scsi_name_padding_test ...passed 00:08:00.536 Test: get_dif_ctx_test ...passed 00:08:00.536 Test: unmap_split_test ...passed 00:08:00.536 00:08:00.536 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.536 suites 1 1 n/a 0 0 00:08:00.536 tests 14 14 14 0 0 00:08:00.536 asserts 1205 1205 1205 0 n/a 00:08:00.536 00:08:00.536 Elapsed time = 0.005 seconds 00:08:00.796 18:35:01 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:08:00.796 00:08:00.796 00:08:00.796 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.796 http://cunit.sourceforge.net/ 00:08:00.796 00:08:00.796 00:08:00.796 Suite: reservation_suite 00:08:00.796 Test: test_reservation_register ...[2024-07-25 18:35:01.143877] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:00.796 passed 00:08:00.796 Test: test_reservation_reserve ...[2024-07-25 18:35:01.144300] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:00.796 [2024-07-25 18:35:01.144387] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 215:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:08:00.796 [2024-07-25 18:35:01.144501] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 210:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:08:00.796 passed 00:08:00.796 Test: test_all_registrant_reservation_reserve ...passed 00:08:00.796 Test: test_all_registrant_reservation_access ...[2024-07-25 18:35:01.144569] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:00.796 [2024-07-25 18:35:01.144702] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:00.796 passed 00:08:00.796 Test: test_reservation_preempt_non_all_regs ...[2024-07-25 18:35:01.144778] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0x8 00:08:00.796 [2024-07-25 18:35:01.144842] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0xaa 00:08:00.796 [2024-07-25 18:35:01.144918] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:00.796 passed 00:08:00.796 Test: test_reservation_preempt_all_regs ...[2024-07-25 18:35:01.144993] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 464:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:08:00.796 [2024-07-25 18:35:01.145137] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:00.796 passed 00:08:00.796 Test: test_reservation_cmds_conflict ...[2024-07-25 18:35:01.145277] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:00.796 [2024-07-25 18:35:01.145357] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 857:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:08:00.796 [2024-07-25 18:35:01.145425] 
/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:00.796 [2024-07-25 18:35:01.145466] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:00.796 [2024-07-25 18:35:01.145507] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:00.796 [2024-07-25 18:35:01.145550] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:00.796 passed 00:08:00.796 Test: test_scsi2_reserve_release ...passed 00:08:00.796 Test: test_pr_with_scsi2_reserve_release ...[2024-07-25 18:35:01.145643] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:00.796 passed 00:08:00.796 00:08:00.796 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.796 suites 1 1 n/a 0 0 00:08:00.796 tests 9 9 9 0 0 00:08:00.796 asserts 344 344 344 0 n/a 00:08:00.796 00:08:00.796 Elapsed time = 0.002 seconds 00:08:00.796 00:08:00.796 real 0m0.233s 00:08:00.796 user 0m0.113s 00:08:00.796 sys 0m0.121s 00:08:00.796 18:35:01 unittest.unittest_scsi -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.796 18:35:01 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:08:00.796 ************************************ 00:08:00.796 END TEST unittest_scsi 00:08:00.796 ************************************ 00:08:00.796 18:35:01 unittest -- unit/unittest.sh@278 -- # uname -s 00:08:00.796 18:35:01 unittest -- unit/unittest.sh@278 -- # '[' Linux = Linux ']' 00:08:00.796 18:35:01 unittest -- unit/unittest.sh@279 -- # run_test unittest_sock unittest_sock 00:08:00.796 18:35:01 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:00.796 18:35:01 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.796 18:35:01 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:00.796 ************************************ 00:08:00.796 START TEST unittest_sock 00:08:00.796 ************************************ 00:08:00.796 18:35:01 unittest.unittest_sock -- common/autotest_common.sh@1125 -- # unittest_sock 00:08:00.796 18:35:01 unittest.unittest_sock -- unit/unittest.sh@125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:08:00.796 00:08:00.796 00:08:00.796 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.796 http://cunit.sourceforge.net/ 00:08:00.796 00:08:00.796 00:08:00.796 Suite: sock 00:08:00.796 Test: posix_sock ...passed 00:08:00.796 Test: ut_sock ...passed 00:08:00.796 Test: posix_sock_group ...passed 00:08:00.796 Test: ut_sock_group ...passed 00:08:00.796 Test: posix_sock_group_fairness ...passed 00:08:00.796 Test: _posix_sock_close ...passed 00:08:00.796 Test: sock_get_default_opts ...passed 00:08:00.796 Test: ut_sock_impl_get_set_opts ...passed 00:08:00.796 Test: posix_sock_impl_get_set_opts ...passed 00:08:00.796 Test: ut_sock_map ...passed 00:08:00.796 Test: override_impl_opts ...passed 00:08:00.796 Test: ut_sock_group_get_ctx ...passed 00:08:00.796 Test: posix_get_interface_name ...passed 00:08:00.796 00:08:00.796 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.796 suites 1 1 n/a 0 0 00:08:00.796 tests 13 13 13 0 0 00:08:00.796 asserts 360 360 360 0 n/a 00:08:00.796 00:08:00.796 Elapsed time = 
0.011 seconds 00:08:00.796 18:35:01 unittest.unittest_sock -- unit/unittest.sh@126 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:08:00.796 00:08:00.796 00:08:00.796 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.796 http://cunit.sourceforge.net/ 00:08:00.796 00:08:00.796 00:08:00.796 Suite: posix 00:08:00.796 Test: flush ...passed 00:08:00.796 00:08:00.796 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.796 suites 1 1 n/a 0 0 00:08:00.796 tests 1 1 1 0 0 00:08:00.796 asserts 28 28 28 0 n/a 00:08:00.796 00:08:00.796 Elapsed time = 0.000 seconds 00:08:01.057 18:35:01 unittest.unittest_sock -- unit/unittest.sh@128 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:01.057 00:08:01.057 real 0m0.135s 00:08:01.057 user 0m0.050s 00:08:01.057 sys 0m0.061s 00:08:01.057 18:35:01 unittest.unittest_sock -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.057 18:35:01 unittest.unittest_sock -- common/autotest_common.sh@10 -- # set +x 00:08:01.057 ************************************ 00:08:01.057 END TEST unittest_sock 00:08:01.057 ************************************ 00:08:01.057 18:35:01 unittest -- unit/unittest.sh@281 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:01.057 18:35:01 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:01.057 18:35:01 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.057 18:35:01 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:01.057 ************************************ 00:08:01.057 START TEST unittest_thread 00:08:01.057 ************************************ 00:08:01.057 18:35:01 unittest.unittest_thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:01.057 00:08:01.057 00:08:01.057 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.057 http://cunit.sourceforge.net/ 00:08:01.057 00:08:01.057 00:08:01.057 Suite: io_channel 00:08:01.057 Test: thread_alloc ...passed 00:08:01.057 Test: thread_send_msg ...passed 00:08:01.057 Test: thread_poller ...passed 00:08:01.057 Test: poller_pause ...passed 00:08:01.057 Test: thread_for_each ...passed 00:08:01.057 Test: for_each_channel_remove ...passed 00:08:01.057 Test: for_each_channel_unreg ...[2024-07-25 18:35:01.488751] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2177:spdk_io_device_register: *ERROR*: io_device 0x7ffd8eb5fe90 already registered (old:0x613000000200 new:0x6130000003c0) 00:08:01.057 passed 00:08:01.057 Test: thread_name ...passed 00:08:01.057 Test: channel ...[2024-07-25 18:35:01.493099] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2311:spdk_get_io_channel: *ERROR*: could not find io_device 0x55b8023aa180 00:08:01.057 passed 00:08:01.057 Test: channel_destroy_races ...passed 00:08:01.057 Test: thread_exit_test ...[2024-07-25 18:35:01.498465] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 639:thread_exit: *ERROR*: thread 0x619000007380 got timeout, and move it to the exited state forcefully 00:08:01.057 passed 00:08:01.057 Test: thread_update_stats_test ...passed 00:08:01.057 Test: nested_channel ...passed 00:08:01.057 Test: device_unregister_and_thread_exit_race ...passed 00:08:01.057 Test: cache_closest_timed_poller ...passed 00:08:01.057 Test: multi_timed_pollers_have_same_expiration ...passed 00:08:01.057 Test: io_device_lookup ...passed 00:08:01.057 Test: spdk_spin ...[2024-07-25 18:35:01.509791] 
/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:01.057 [2024-07-25 18:35:01.509853] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffd8eb5fe80 00:08:01.057 [2024-07-25 18:35:01.509969] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3120:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:01.057 [2024-07-25 18:35:01.511738] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:08:01.057 [2024-07-25 18:35:01.511819] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffd8eb5fe80 00:08:01.057 [2024-07-25 18:35:01.511861] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:01.057 [2024-07-25 18:35:01.511912] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffd8eb5fe80 00:08:01.057 [2024-07-25 18:35:01.511946] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:01.057 [2024-07-25 18:35:01.511983] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffd8eb5fe80 00:08:01.057 [2024-07-25 18:35:01.512018] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:08:01.057 [2024-07-25 18:35:01.512087] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffd8eb5fe80 00:08:01.057 passed 00:08:01.057 Test: for_each_channel_and_thread_exit_race ...passed 00:08:01.057 Test: for_each_thread_and_thread_exit_race ...passed 00:08:01.057 00:08:01.057 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.057 suites 1 1 n/a 0 0 00:08:01.057 tests 20 20 20 0 0 00:08:01.057 asserts 409 409 409 0 n/a 00:08:01.057 00:08:01.057 Elapsed time = 0.052 seconds 00:08:01.057 00:08:01.057 real 0m0.105s 00:08:01.057 user 0m0.078s 00:08:01.057 sys 0m0.027s 00:08:01.057 ************************************ 00:08:01.057 END TEST unittest_thread 00:08:01.057 ************************************ 00:08:01.057 18:35:01 unittest.unittest_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.057 18:35:01 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:08:01.057 18:35:01 unittest -- unit/unittest.sh@282 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:01.057 18:35:01 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:01.057 18:35:01 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.057 18:35:01 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:01.057 ************************************ 00:08:01.057 START TEST unittest_iobuf 00:08:01.057 ************************************ 00:08:01.057 18:35:01 unittest.unittest_iobuf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:01.317 00:08:01.317 00:08:01.317 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.317 
http://cunit.sourceforge.net/ 00:08:01.317 00:08:01.317 00:08:01.317 Suite: io_channel 00:08:01.317 Test: iobuf ...passed 00:08:01.317 Test: iobuf_cache ...[2024-07-25 18:35:01.654770] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:01.317 [2024-07-25 18:35:01.655281] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:01.317 [2024-07-25 18:35:01.655599] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:08:01.317 [2024-07-25 18:35:01.655764] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:01.317 [2024-07-25 18:35:01.655947] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:01.317 [2024-07-25 18:35:01.656090] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:01.317 passed 00:08:01.317 Test: iobuf_priority ...passed 00:08:01.317 00:08:01.317 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.317 suites 1 1 n/a 0 0 00:08:01.317 tests 3 3 3 0 0 00:08:01.317 asserts 131 131 131 0 n/a 00:08:01.317 00:08:01.317 Elapsed time = 0.009 seconds 00:08:01.317 00:08:01.317 real 0m0.063s 00:08:01.317 user 0m0.023s 00:08:01.317 sys 0m0.039s 00:08:01.317 18:35:01 unittest.unittest_iobuf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.317 18:35:01 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:08:01.317 ************************************ 00:08:01.317 END TEST unittest_iobuf 00:08:01.317 ************************************ 00:08:01.317 18:35:01 unittest -- unit/unittest.sh@283 -- # run_test unittest_util unittest_util 00:08:01.317 18:35:01 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:01.317 18:35:01 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.317 18:35:01 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:01.317 ************************************ 00:08:01.317 START TEST unittest_util 00:08:01.317 ************************************ 00:08:01.317 18:35:01 unittest.unittest_util -- common/autotest_common.sh@1125 -- # unittest_util 00:08:01.317 18:35:01 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:08:01.317 00:08:01.317 00:08:01.317 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.317 http://cunit.sourceforge.net/ 00:08:01.317 00:08:01.317 00:08:01.317 Suite: base64 00:08:01.317 Test: test_base64_get_encoded_strlen ...passed 00:08:01.317 Test: test_base64_get_decoded_len ...passed 00:08:01.317 Test: test_base64_encode ...passed 00:08:01.317 Test: test_base64_decode ...passed 00:08:01.317 Test: test_base64_urlsafe_encode ...passed 00:08:01.317 Test: test_base64_urlsafe_decode ...passed 00:08:01.317 00:08:01.317 Run Summary: Type Total Ran Passed Failed 
Inactive 00:08:01.317 suites 1 1 n/a 0 0 00:08:01.317 tests 6 6 6 0 0 00:08:01.317 asserts 112 112 112 0 n/a 00:08:01.317 00:08:01.317 Elapsed time = 0.000 seconds 00:08:01.317 18:35:01 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:08:01.317 00:08:01.317 00:08:01.317 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.317 http://cunit.sourceforge.net/ 00:08:01.317 00:08:01.317 00:08:01.317 Suite: bit_array 00:08:01.317 Test: test_1bit ...passed 00:08:01.317 Test: test_64bit ...passed 00:08:01.317 Test: test_find ...passed 00:08:01.317 Test: test_resize ...passed 00:08:01.317 Test: test_errors ...passed 00:08:01.317 Test: test_count ...passed 00:08:01.317 Test: test_mask_store_load ...passed 00:08:01.317 Test: test_mask_clear ...passed 00:08:01.317 00:08:01.317 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.317 suites 1 1 n/a 0 0 00:08:01.317 tests 8 8 8 0 0 00:08:01.317 asserts 5075 5075 5075 0 n/a 00:08:01.317 00:08:01.317 Elapsed time = 0.002 seconds 00:08:01.317 18:35:01 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:08:01.317 00:08:01.317 00:08:01.317 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.317 http://cunit.sourceforge.net/ 00:08:01.317 00:08:01.317 00:08:01.317 Suite: cpuset 00:08:01.317 Test: test_cpuset ...passed 00:08:01.317 Test: test_cpuset_parse ...[2024-07-25 18:35:01.855051] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 256:parse_list: *ERROR*: Unexpected end of core list '[' 00:08:01.317 [2024-07-25 18:35:01.855574] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:08:01.317 [2024-07-25 18:35:01.855817] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:08:01.317 [2024-07-25 18:35:01.856049] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 236:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:08:01.317 [2024-07-25 18:35:01.856195] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:08:01.317 [2024-07-25 18:35:01.856349] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:08:01.317 [2024-07-25 18:35:01.856435] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:08:01.317 [2024-07-25 18:35:01.856573] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 215:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:08:01.317 passed 00:08:01.317 Test: test_cpuset_fmt ...passed 00:08:01.317 Test: test_cpuset_foreach ...passed 00:08:01.317 00:08:01.317 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.317 suites 1 1 n/a 0 0 00:08:01.317 tests 4 4 4 0 0 00:08:01.317 asserts 90 90 90 0 n/a 00:08:01.317 00:08:01.317 Elapsed time = 0.003 seconds 00:08:01.317 18:35:01 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:08:01.577 00:08:01.577 00:08:01.577 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.577 http://cunit.sourceforge.net/ 00:08:01.577 00:08:01.577 00:08:01.577 Suite: crc16 00:08:01.577 Test: test_crc16_t10dif ...passed 00:08:01.577 Test: test_crc16_t10dif_seed ...passed 
00:08:01.577 Test: test_crc16_t10dif_copy ...passed 00:08:01.577 00:08:01.577 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.577 suites 1 1 n/a 0 0 00:08:01.577 tests 3 3 3 0 0 00:08:01.577 asserts 5 5 5 0 n/a 00:08:01.577 00:08:01.577 Elapsed time = 0.000 seconds 00:08:01.577 18:35:01 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:08:01.577 00:08:01.577 00:08:01.577 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.577 http://cunit.sourceforge.net/ 00:08:01.577 00:08:01.577 00:08:01.577 Suite: crc32_ieee 00:08:01.577 Test: test_crc32_ieee ...passed 00:08:01.577 00:08:01.577 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.577 suites 1 1 n/a 0 0 00:08:01.577 tests 1 1 1 0 0 00:08:01.577 asserts 1 1 1 0 n/a 00:08:01.577 00:08:01.577 Elapsed time = 0.000 seconds 00:08:01.577 18:35:01 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:08:01.577 00:08:01.577 00:08:01.577 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.577 http://cunit.sourceforge.net/ 00:08:01.577 00:08:01.577 00:08:01.577 Suite: crc32c 00:08:01.577 Test: test_crc32c ...passed 00:08:01.577 Test: test_crc32c_nvme ...passed 00:08:01.577 00:08:01.577 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.577 suites 1 1 n/a 0 0 00:08:01.577 tests 2 2 2 0 0 00:08:01.577 asserts 16 16 16 0 n/a 00:08:01.577 00:08:01.577 Elapsed time = 0.000 seconds 00:08:01.577 18:35:01 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:08:01.577 00:08:01.577 00:08:01.577 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.577 http://cunit.sourceforge.net/ 00:08:01.577 00:08:01.577 00:08:01.577 Suite: crc64 00:08:01.577 Test: test_crc64_nvme ...passed 00:08:01.577 00:08:01.577 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.577 suites 1 1 n/a 0 0 00:08:01.577 tests 1 1 1 0 0 00:08:01.577 asserts 4 4 4 0 n/a 00:08:01.577 00:08:01.577 Elapsed time = 0.000 seconds 00:08:01.577 18:35:02 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:08:01.577 00:08:01.577 00:08:01.577 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.577 http://cunit.sourceforge.net/ 00:08:01.577 00:08:01.577 00:08:01.577 Suite: string 00:08:01.577 Test: test_parse_ip_addr ...passed 00:08:01.577 Test: test_str_chomp ...passed 00:08:01.577 Test: test_parse_capacity ...passed 00:08:01.577 Test: test_sprintf_append_realloc ...passed 00:08:01.577 Test: test_strtol ...passed 00:08:01.577 Test: test_strtoll ...passed 00:08:01.577 Test: test_strarray ...passed 00:08:01.577 Test: test_strcpy_replace ...passed 00:08:01.577 00:08:01.577 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.577 suites 1 1 n/a 0 0 00:08:01.577 tests 8 8 8 0 0 00:08:01.577 asserts 161 161 161 0 n/a 00:08:01.577 00:08:01.577 Elapsed time = 0.001 seconds 00:08:01.577 18:35:02 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:08:01.577 00:08:01.577 00:08:01.577 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.577 http://cunit.sourceforge.net/ 00:08:01.577 00:08:01.578 00:08:01.578 Suite: dif 00:08:01.578 Test: dif_generate_and_verify_test ...[2024-07-25 18:35:02.100318] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: 
Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:01.578 [2024-07-25 18:35:02.101048] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:01.578 [2024-07-25 18:35:02.101486] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:01.578 [2024-07-25 18:35:02.101917] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:01.578 [2024-07-25 18:35:02.102399] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:01.578 [2024-07-25 18:35:02.102832] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:01.578 passed 00:08:01.578 Test: dif_disable_check_test ...[2024-07-25 18:35:02.104173] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:01.578 [2024-07-25 18:35:02.104618] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:01.578 [2024-07-25 18:35:02.105032] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:01.578 passed 00:08:01.578 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-25 18:35:02.106416] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:08:01.578 [2024-07-25 18:35:02.106872] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:08:01.578 [2024-07-25 18:35:02.107332] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:08:01.578 [2024-07-25 18:35:02.107874] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:08:01.578 [2024-07-25 18:35:02.108350] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:01.578 [2024-07-25 18:35:02.108800] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:01.578 [2024-07-25 18:35:02.109244] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:01.578 [2024-07-25 18:35:02.109678] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:01.578 [2024-07-25 18:35:02.110321] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:01.578 [2024-07-25 18:35:02.110814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:01.578 [2024-07-25 18:35:02.111298] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:01.578 passed 
00:08:01.578 Test: dif_apptag_mask_test ...[2024-07-25 18:35:02.111936] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:01.578 [2024-07-25 18:35:02.112360] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:01.578 passed 00:08:01.578 Test: dif_sec_8_md_8_error_test ...[2024-07-25 18:35:02.112866] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 555:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed 00:08:01.578 passed 00:08:01.578 Test: dif_sec_512_md_0_error_test ...[2024-07-25 18:35:02.113252] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:01.578 passed 00:08:01.578 Test: dif_sec_512_md_16_error_test ...[2024-07-25 18:35:02.113576] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:08:01.578 [2024-07-25 18:35:02.113756] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:08:01.578 passed 00:08:01.578 Test: dif_sec_4096_md_0_8_error_test ...[2024-07-25 18:35:02.114064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:01.578 [2024-07-25 18:35:02.114220] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:01.578 [2024-07-25 18:35:02.114424] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:01.578 [2024-07-25 18:35:02.114568] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:08:01.578 passed 00:08:01.578 Test: dif_sec_4100_md_128_error_test ...[2024-07-25 18:35:02.114903] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:08:01.578 [2024-07-25 18:35:02.115128] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:08:01.578 passed 00:08:01.578 Test: dif_guard_seed_test ...passed 00:08:01.578 Test: dif_guard_value_test ...passed 00:08:01.578 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:08:01.578 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:08:01.578 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:01.578 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:01.578 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:01.840 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:08:01.840 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:01.840 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:01.840 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:08:01.840 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:01.840 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:08:01.840 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:08:01.840 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:01.840 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:01.840 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:01.840 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:01.840 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:01.840 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:01.840 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-25 18:35:02.163610] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=bd4c, Actual=fd4c 00:08:01.840 [2024-07-25 18:35:02.166229] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=be21, Actual=fe21 00:08:01.840 [2024-07-25 18:35:02.168815] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=4088 00:08:01.840 [2024-07-25 18:35:02.171410] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=4088 00:08:01.840 [2024-07-25 18:35:02.174008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=405b 00:08:01.840 [2024-07-25 18:35:02.176625] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=405b 00:08:01.840 [2024-07-25 18:35:02.179228] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd4c, Actual=3837 00:08:01.840 [2024-07-25 18:35:02.180946] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fe21, Actual=104d 00:08:01.841 [2024-07-25 18:35:02.182650] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare 
Guard: LBA=91, Expected=1ab713ed, Actual=1ab753ed 00:08:01.841 [2024-07-25 18:35:02.185235] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=38570660, Actual=38574660 00:08:01.841 [2024-07-25 18:35:02.187849] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=4088 00:08:01.841 [2024-07-25 18:35:02.190466] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=4088 00:08:01.841 [2024-07-25 18:35:02.193064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=405b 00:08:01.841 [2024-07-25 18:35:02.195663] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=405b 00:08:01.841 [2024-07-25 18:35:02.198258] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab753ed, Actual=e8c0263d 00:08:01.841 [2024-07-25 18:35:02.199961] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=38574660, Actual=abc7821b 00:08:01.841 [2024-07-25 18:35:02.201659] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728ecc60d3, Actual=a576a7728ecc20d3 00:08:01.841 [2024-07-25 18:35:02.204281] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=88010a2d4837e266, Actual=88010a2d4837a266 00:08:01.841 [2024-07-25 18:35:02.206879] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=4088 00:08:01.841 [2024-07-25 18:35:02.209456] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=4088 00:08:01.841 [2024-07-25 18:35:02.212065] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4000005b 00:08:01.841 [2024-07-25 18:35:02.214654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4000005b 00:08:01.841 [2024-07-25 18:35:02.217237] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728ecc20d3, Actual=bc82b6b67a35daf1 00:08:01.841 [2024-07-25 18:35:02.219001] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=88010a2d4837a266, Actual=f504e00505f4ba64 00:08:01.841 passed 00:08:01.841 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-25 18:35:02.220021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:08:01.841 [2024-07-25 18:35:02.220488] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:08:01.841 [2024-07-25 18:35:02.220905] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.841 [2024-07-25 18:35:02.221347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: 
LBA=88, Expected=88, Actual=4088 00:08:01.841 [2024-07-25 18:35:02.221786] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:08:01.841 [2024-07-25 18:35:02.222258] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:08:01.841 [2024-07-25 18:35:02.222691] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3837 00:08:01.841 [2024-07-25 18:35:02.223058] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=104d 00:08:01.841 [2024-07-25 18:35:02.223421] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab713ed, Actual=1ab753ed 00:08:01.841 [2024-07-25 18:35:02.223858] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38570660, Actual=38574660 00:08:01.841 [2024-07-25 18:35:02.224261] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.841 [2024-07-25 18:35:02.224730] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.841 [2024-07-25 18:35:02.225163] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:08:01.841 [2024-07-25 18:35:02.225596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:08:01.841 [2024-07-25 18:35:02.226023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8c0263d 00:08:01.841 [2024-07-25 18:35:02.226397] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=abc7821b 00:08:01.841 [2024-07-25 18:35:02.226753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc60d3, Actual=a576a7728ecc20d3 00:08:01.841 [2024-07-25 18:35:02.227208] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837e266, Actual=88010a2d4837a266 00:08:01.841 [2024-07-25 18:35:02.227667] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.841 [2024-07-25 18:35:02.228104] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.841 [2024-07-25 18:35:02.228545] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:08:01.841 [2024-07-25 18:35:02.228967] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:08:01.841 [2024-07-25 18:35:02.229412] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=bc82b6b67a35daf1 00:08:01.841 [2024-07-25 18:35:02.229829] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=f504e00505f4ba64 00:08:01.841 passed 00:08:01.841 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-25 18:35:02.230427] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:08:01.841 [2024-07-25 18:35:02.230838] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:08:01.841 [2024-07-25 18:35:02.231264] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.841 [2024-07-25 18:35:02.231707] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.841 [2024-07-25 18:35:02.232134] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:08:01.841 [2024-07-25 18:35:02.232587] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:08:01.841 [2024-07-25 18:35:02.233013] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3837 00:08:01.841 [2024-07-25 18:35:02.233376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=104d 00:08:01.841 [2024-07-25 18:35:02.233737] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab713ed, Actual=1ab753ed 00:08:01.841 [2024-07-25 18:35:02.234559] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38570660, Actual=38574660 00:08:01.841 [2024-07-25 18:35:02.235081] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.841 [2024-07-25 18:35:02.235639] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.841 [2024-07-25 18:35:02.236335] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:08:01.841 [2024-07-25 18:35:02.237014] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:08:01.841 [2024-07-25 18:35:02.237671] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8c0263d 00:08:01.841 [2024-07-25 18:35:02.238251] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=abc7821b 00:08:01.841 [2024-07-25 18:35:02.238815] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc60d3, Actual=a576a7728ecc20d3 00:08:01.841 [2024-07-25 18:35:02.239526] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837e266, Actual=88010a2d4837a266 00:08:01.841 [2024-07-25 18:35:02.240186] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App 
Tag: LBA=88, Expected=88, Actual=4088 00:08:01.841 [2024-07-25 18:35:02.240844] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.841 [2024-07-25 18:35:02.241545] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:08:01.841 [2024-07-25 18:35:02.242243] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:08:01.841 [2024-07-25 18:35:02.242891] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=bc82b6b67a35daf1 00:08:01.841 [2024-07-25 18:35:02.243520] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=f504e00505f4ba64 00:08:01.841 passed 00:08:01.841 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-25 18:35:02.244407] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:08:01.841 [2024-07-25 18:35:02.245080] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:08:01.841 [2024-07-25 18:35:02.245741] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.841 [2024-07-25 18:35:02.246430] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.842 [2024-07-25 18:35:02.247087] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:08:01.842 [2024-07-25 18:35:02.247814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:08:01.842 [2024-07-25 18:35:02.248255] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3837 00:08:01.842 [2024-07-25 18:35:02.248613] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=104d 00:08:01.842 [2024-07-25 18:35:02.248981] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab713ed, Actual=1ab753ed 00:08:01.842 [2024-07-25 18:35:02.249389] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38570660, Actual=38574660 00:08:01.842 [2024-07-25 18:35:02.249837] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.842 [2024-07-25 18:35:02.250287] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.842 [2024-07-25 18:35:02.250717] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:08:01.842 [2024-07-25 18:35:02.251141] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:08:01.842 [2024-07-25 
18:35:02.251596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8c0263d 00:08:01.842 [2024-07-25 18:35:02.251965] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=abc7821b 00:08:01.842 [2024-07-25 18:35:02.252333] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc60d3, Actual=a576a7728ecc20d3 00:08:01.842 [2024-07-25 18:35:02.252791] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837e266, Actual=88010a2d4837a266 00:08:01.842 [2024-07-25 18:35:02.253208] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.842 [2024-07-25 18:35:02.253638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.842 [2024-07-25 18:35:02.254077] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:08:01.842 [2024-07-25 18:35:02.254513] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:08:01.842 [2024-07-25 18:35:02.254945] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=bc82b6b67a35daf1 00:08:01.842 [2024-07-25 18:35:02.255333] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=f504e00505f4ba64 00:08:01.842 passed 00:08:01.842 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-25 18:35:02.255903] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:08:01.842 [2024-07-25 18:35:02.256308] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:08:01.842 [2024-07-25 18:35:02.256723] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.842 [2024-07-25 18:35:02.257136] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.842 [2024-07-25 18:35:02.257561] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:08:01.842 [2024-07-25 18:35:02.258030] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:08:01.842 [2024-07-25 18:35:02.258452] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3837 00:08:01.842 [2024-07-25 18:35:02.258806] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=104d 00:08:01.842 passed 00:08:01.842 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-25 18:35:02.259394] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to 
compare Guard: LBA=88, Expected=1ab713ed, Actual=1ab753ed 00:08:01.842 [2024-07-25 18:35:02.259840] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38570660, Actual=38574660 00:08:01.842 [2024-07-25 18:35:02.260278] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.842 [2024-07-25 18:35:02.260726] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.842 [2024-07-25 18:35:02.261170] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:08:01.842 [2024-07-25 18:35:02.261584] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:08:01.842 [2024-07-25 18:35:02.262007] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8c0263d 00:08:01.842 [2024-07-25 18:35:02.262354] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=abc7821b 00:08:01.842 [2024-07-25 18:35:02.262757] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc60d3, Actual=a576a7728ecc20d3 00:08:01.842 [2024-07-25 18:35:02.263192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837e266, Actual=88010a2d4837a266 00:08:01.842 [2024-07-25 18:35:02.263632] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.842 [2024-07-25 18:35:02.264050] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.842 [2024-07-25 18:35:02.264480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:08:01.842 [2024-07-25 18:35:02.264895] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:08:01.842 [2024-07-25 18:35:02.265317] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=bc82b6b67a35daf1 00:08:01.842 [2024-07-25 18:35:02.265702] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=f504e00505f4ba64 00:08:01.842 passed 00:08:01.842 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-25 18:35:02.266292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:08:01.842 [2024-07-25 18:35:02.266694] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:08:01.842 [2024-07-25 18:35:02.267102] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.842 [2024-07-25 18:35:02.267538] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare 
App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.842 [2024-07-25 18:35:02.267954] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:08:01.842 [2024-07-25 18:35:02.268415] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:08:01.842 [2024-07-25 18:35:02.268834] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3837 00:08:01.842 [2024-07-25 18:35:02.269194] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=104d 00:08:01.842 passed 00:08:01.842 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-25 18:35:02.269749] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab713ed, Actual=1ab753ed 00:08:01.842 [2024-07-25 18:35:02.270178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38570660, Actual=38574660 00:08:01.842 [2024-07-25 18:35:02.270592] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.842 [2024-07-25 18:35:02.271030] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.842 [2024-07-25 18:35:02.271487] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:08:01.842 [2024-07-25 18:35:02.271908] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:08:01.842 [2024-07-25 18:35:02.272326] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8c0263d 00:08:01.842 [2024-07-25 18:35:02.272682] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=abc7821b 00:08:01.842 [2024-07-25 18:35:02.273058] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc60d3, Actual=a576a7728ecc20d3 00:08:01.842 [2024-07-25 18:35:02.273502] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837e266, Actual=88010a2d4837a266 00:08:01.842 [2024-07-25 18:35:02.273946] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.842 [2024-07-25 18:35:02.274360] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:08:01.842 [2024-07-25 18:35:02.274785] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:08:01.842 [2024-07-25 18:35:02.275197] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:08:01.843 [2024-07-25 18:35:02.275632] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, 
Actual=bc82b6b67a35daf1 00:08:01.843 [2024-07-25 18:35:02.276004] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=f504e00505f4ba64 00:08:01.843 passed 00:08:01.843 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:08:01.843 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:01.843 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:01.843 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:01.843 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:01.843 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:01.843 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:01.843 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:01.843 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:01.843 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-25 18:35:02.322248] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=bd4c, Actual=fd4c 00:08:01.843 [2024-07-25 18:35:02.323470] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a68a, Actual=e68a 00:08:01.843 [2024-07-25 18:35:02.324674] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=4088 00:08:01.843 [2024-07-25 18:35:02.325886] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=4088 00:08:01.843 [2024-07-25 18:35:02.327073] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=405b 00:08:01.843 [2024-07-25 18:35:02.328292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=405b 00:08:01.843 [2024-07-25 18:35:02.329523] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd4c, Actual=3837 00:08:01.843 [2024-07-25 18:35:02.330723] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=66db, Actual=88b7 00:08:01.843 [2024-07-25 18:35:02.331961] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab713ed, Actual=1ab753ed 00:08:01.843 [2024-07-25 18:35:02.333172] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=e471df8a, Actual=e4719f8a 00:08:01.843 [2024-07-25 18:35:02.334386] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=4088 00:08:01.843 [2024-07-25 18:35:02.335593] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=4088 00:08:01.843 [2024-07-25 18:35:02.336832] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=405b 00:08:01.843 [2024-07-25 18:35:02.338051] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=405b 00:08:01.843 [2024-07-25 18:35:02.339260] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab753ed, Actual=e8c0263d 00:08:01.843 [2024-07-25 18:35:02.340467] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=798b71ef, Actual=ea1bb594 00:08:01.843 [2024-07-25 18:35:02.341685] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728ecc60d3, Actual=a576a7728ecc20d3 00:08:01.843 [2024-07-25 18:35:02.342895] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=b79c8b01d7002031, Actual=b79c8b01d7006031 00:08:01.843 [2024-07-25 18:35:02.344145] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=4088 00:08:01.843 [2024-07-25 18:35:02.345347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=4088 00:08:01.843 [2024-07-25 18:35:02.346571] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4000005b 00:08:01.843 [2024-07-25 18:35:02.347784] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4000005b 00:08:01.843 [2024-07-25 18:35:02.348990] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728ecc20d3, Actual=bc82b6b67a35daf1 00:08:01.843 [2024-07-25 18:35:02.350192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a8b5748b76783ef5, Actual=d5b09ea33bbb26f7 00:08:01.843 passed 00:08:01.843 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-25 18:35:02.350767] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=bd4c, Actual=fd4c 00:08:01.843 [2024-07-25 18:35:02.351157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5291, Actual=1291 00:08:01.843 [2024-07-25 18:35:02.351555] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:08:01.843 [2024-07-25 18:35:02.351935] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:08:01.843 [2024-07-25 18:35:02.352319] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4059 00:08:01.843 [2024-07-25 18:35:02.352704] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4059 00:08:01.843 [2024-07-25 18:35:02.353086] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=3837 00:08:01.843 [2024-07-25 18:35:02.353453] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=7cac 00:08:01.843 [2024-07-25 18:35:02.353849] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab713ed, Actual=1ab753ed 00:08:01.843 [2024-07-25 18:35:02.354228] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=647fe08, Actual=647be08 00:08:01.843 [2024-07-25 18:35:02.354612] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:08:01.843 [2024-07-25 18:35:02.355011] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:08:01.843 [2024-07-25 18:35:02.355403] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4059 00:08:01.843 [2024-07-25 18:35:02.355789] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4059 00:08:01.843 [2024-07-25 18:35:02.356161] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=e8c0263d 00:08:01.843 [2024-07-25 18:35:02.356533] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=82d9416 00:08:01.843 [2024-07-25 18:35:02.356894] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc60d3, Actual=a576a7728ecc20d3 00:08:01.843 [2024-07-25 18:35:02.357297] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=57011fe1e83c2bab, Actual=57011fe1e83c6bab 00:08:01.843 [2024-07-25 18:35:02.357664] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:08:01.843 [2024-07-25 18:35:02.358055] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:08:01.843 [2024-07-25 18:35:02.358435] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:08:01.843 [2024-07-25 18:35:02.358820] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:08:01.843 [2024-07-25 18:35:02.359191] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=bc82b6b67a35daf1 00:08:01.843 [2024-07-25 18:35:02.359616] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=352d0a4304872d6d 00:08:01.843 passed 00:08:01.843 Test: dix_sec_0_md_8_error ...[2024-07-25 18:35:02.359930] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 555:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed 00:08:01.843 passed 00:08:01.843 Test: dix_sec_512_md_0_error ...[2024-07-25 18:35:02.360226] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:08:01.843 passed 00:08:01.843 Test: dix_sec_512_md_16_error ...[2024-07-25 18:35:02.360493] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:08:01.843 [2024-07-25 18:35:02.360626] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:08:01.843 passed 00:08:01.843 Test: dix_sec_4096_md_0_8_error ...[2024-07-25 18:35:02.360900] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:01.843 [2024-07-25 18:35:02.361025] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:01.843 [2024-07-25 18:35:02.361117] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:01.843 [2024-07-25 18:35:02.361201] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:01.843 passed 00:08:01.843 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:08:01.843 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:01.843 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:01.843 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:01.843 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:01.843 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:01.843 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:01.844 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:01.844 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:01.844 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-25 18:35:02.406507] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=bd4c, Actual=fd4c 00:08:01.844 [2024-07-25 18:35:02.407737] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a68a, Actual=e68a 00:08:01.844 [2024-07-25 18:35:02.408944] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=4088 00:08:01.844 [2024-07-25 18:35:02.410148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=4088 00:08:02.104 [2024-07-25 18:35:02.411383] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=405b 00:08:02.104 [2024-07-25 18:35:02.412581] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=405b 00:08:02.104 [2024-07-25 18:35:02.413793] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd4c, Actual=3837 00:08:02.104 [2024-07-25 18:35:02.414978] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=66db, Actual=88b7 00:08:02.104 [2024-07-25 18:35:02.416191] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab713ed, Actual=1ab753ed 00:08:02.104 [2024-07-25 18:35:02.417377] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: 
LBA=91, Expected=e471df8a, Actual=e4719f8a 00:08:02.104 [2024-07-25 18:35:02.418605] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=4088 00:08:02.104 [2024-07-25 18:35:02.419804] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=4088 00:08:02.104 [2024-07-25 18:35:02.421001] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=405b 00:08:02.104 [2024-07-25 18:35:02.422200] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=405b 00:08:02.104 [2024-07-25 18:35:02.423410] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab753ed, Actual=e8c0263d 00:08:02.104 [2024-07-25 18:35:02.424616] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=798b71ef, Actual=ea1bb594 00:08:02.104 [2024-07-25 18:35:02.425853] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728ecc60d3, Actual=a576a7728ecc20d3 00:08:02.104 [2024-07-25 18:35:02.427043] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=b79c8b01d7002031, Actual=b79c8b01d7006031 00:08:02.104 [2024-07-25 18:35:02.428255] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=4088 00:08:02.104 [2024-07-25 18:35:02.429448] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=4088 00:08:02.104 [2024-07-25 18:35:02.430650] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4000005b 00:08:02.104 [2024-07-25 18:35:02.431849] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=4000005b 00:08:02.104 [2024-07-25 18:35:02.433057] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728ecc20d3, Actual=bc82b6b67a35daf1 00:08:02.104 [2024-07-25 18:35:02.434262] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a8b5748b76783ef5, Actual=d5b09ea33bbb26f7 00:08:02.104 passed 00:08:02.104 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-25 18:35:02.434799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=bd4c, Actual=fd4c 00:08:02.104 [2024-07-25 18:35:02.435166] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5291, Actual=1291 00:08:02.104 [2024-07-25 18:35:02.435534] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:08:02.104 [2024-07-25 18:35:02.435927] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:08:02.104 [2024-07-25 18:35:02.436321] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, 
Actual=4059 00:08:02.104 [2024-07-25 18:35:02.436704] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4059 00:08:02.104 [2024-07-25 18:35:02.437070] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=3837 00:08:02.104 [2024-07-25 18:35:02.437441] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=7cac 00:08:02.104 [2024-07-25 18:35:02.437816] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab713ed, Actual=1ab753ed 00:08:02.104 [2024-07-25 18:35:02.438202] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=647fe08, Actual=647be08 00:08:02.104 [2024-07-25 18:35:02.438583] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:08:02.104 [2024-07-25 18:35:02.438948] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:08:02.104 [2024-07-25 18:35:02.439311] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4059 00:08:02.104 [2024-07-25 18:35:02.439701] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4059 00:08:02.104 [2024-07-25 18:35:02.440064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=e8c0263d 00:08:02.104 [2024-07-25 18:35:02.440435] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=82d9416 00:08:02.104 [2024-07-25 18:35:02.440828] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc60d3, Actual=a576a7728ecc20d3 00:08:02.104 [2024-07-25 18:35:02.441191] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=57011fe1e83c2bab, Actual=57011fe1e83c6bab 00:08:02.104 [2024-07-25 18:35:02.441578] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:08:02.104 [2024-07-25 18:35:02.441973] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:08:02.104 [2024-07-25 18:35:02.442352] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:08:02.104 [2024-07-25 18:35:02.442714] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:08:02.104 [2024-07-25 18:35:02.443108] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=bc82b6b67a35daf1 00:08:02.104 [2024-07-25 18:35:02.443491] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=352d0a4304872d6d 00:08:02.104 passed 00:08:02.104 Test: set_md_interleave_iovs_test ...passed 00:08:02.104 
Test: set_md_interleave_iovs_split_test ...passed 00:08:02.104 Test: dif_generate_stream_pi_16_test ...passed 00:08:02.104 Test: dif_generate_stream_test ...passed 00:08:02.104 Test: set_md_interleave_iovs_alignment_test ...[2024-07-25 18:35:02.452194] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1857:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 00:08:02.104 passed 00:08:02.104 Test: dif_generate_split_test ...passed 00:08:02.104 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:08:02.104 Test: dif_verify_split_test ...passed 00:08:02.104 Test: dif_verify_stream_multi_segments_test ...passed 00:08:02.104 Test: update_crc32c_pi_16_test ...passed 00:08:02.104 Test: update_crc32c_test ...passed 00:08:02.104 Test: dif_update_crc32c_split_test ...passed 00:08:02.104 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:08:02.104 Test: get_range_with_md_test ...passed 00:08:02.104 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:08:02.105 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:08:02.105 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:02.105 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:08:02.105 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:08:02.105 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:02.105 Test: dif_generate_and_verify_unmap_test ...passed 00:08:02.105 Test: dif_pi_format_check_test ...passed 00:08:02.105 Test: dif_type_check_test ...passed 00:08:02.105 00:08:02.105 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.105 suites 1 1 n/a 0 0 00:08:02.105 tests 86 86 86 0 0 00:08:02.105 asserts 3605 3605 3605 0 n/a 00:08:02.105 00:08:02.105 Elapsed time = 0.351 seconds 00:08:02.105 18:35:02 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:08:02.105 00:08:02.105 00:08:02.105 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.105 http://cunit.sourceforge.net/ 00:08:02.105 00:08:02.105 00:08:02.105 Suite: iov 00:08:02.105 Test: test_single_iov ...passed 00:08:02.105 Test: test_simple_iov ...passed 00:08:02.105 Test: test_complex_iov ...passed 00:08:02.105 Test: test_iovs_to_buf ...passed 00:08:02.105 Test: test_buf_to_iovs ...passed 00:08:02.105 Test: test_memset ...passed 00:08:02.105 Test: test_iov_one ...passed 00:08:02.105 Test: test_iov_xfer ...passed 00:08:02.105 00:08:02.105 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.105 suites 1 1 n/a 0 0 00:08:02.105 tests 8 8 8 0 0 00:08:02.105 asserts 156 156 156 0 n/a 00:08:02.105 00:08:02.105 Elapsed time = 0.000 seconds 00:08:02.105 18:35:02 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:08:02.105 00:08:02.105 00:08:02.105 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.105 http://cunit.sourceforge.net/ 00:08:02.105 00:08:02.105 00:08:02.105 Suite: math 00:08:02.105 Test: test_serial_number_arithmetic ...passed 00:08:02.105 Suite: erase 00:08:02.105 Test: test_memset_s ...passed 00:08:02.105 00:08:02.105 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.105 suites 2 2 n/a 0 0 00:08:02.105 tests 2 2 2 0 0 00:08:02.105 asserts 18 18 18 0 n/a 00:08:02.105 00:08:02.105 Elapsed time = 0.000 seconds 00:08:02.105 18:35:02 unittest.unittest_util -- unit/unittest.sh@145 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:08:02.105 00:08:02.105 00:08:02.105 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.105 http://cunit.sourceforge.net/ 00:08:02.105 00:08:02.105 00:08:02.105 Suite: pipe 00:08:02.105 Test: test_create_destroy ...passed 00:08:02.105 Test: test_write_get_buffer ...passed 00:08:02.105 Test: test_write_advance ...passed 00:08:02.105 Test: test_read_get_buffer ...passed 00:08:02.105 Test: test_read_advance ...passed 00:08:02.105 Test: test_data ...passed 00:08:02.105 00:08:02.105 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.105 suites 1 1 n/a 0 0 00:08:02.105 tests 6 6 6 0 0 00:08:02.105 asserts 251 251 251 0 n/a 00:08:02.105 00:08:02.105 Elapsed time = 0.000 seconds 00:08:02.105 18:35:02 unittest.unittest_util -- unit/unittest.sh@146 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:08:02.105 00:08:02.105 00:08:02.105 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.105 http://cunit.sourceforge.net/ 00:08:02.105 00:08:02.105 00:08:02.105 Suite: xor 00:08:02.105 Test: test_xor_gen ...passed 00:08:02.105 00:08:02.105 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.105 suites 1 1 n/a 0 0 00:08:02.105 tests 1 1 1 0 0 00:08:02.105 asserts 17 17 17 0 n/a 00:08:02.105 00:08:02.105 Elapsed time = 0.014 seconds 00:08:02.364 00:08:02.364 real 0m0.930s 00:08:02.364 user 0m0.594s 00:08:02.364 sys 0m0.292s 00:08:02.364 18:35:02 unittest.unittest_util -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.364 18:35:02 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:08:02.364 ************************************ 00:08:02.364 END TEST unittest_util 00:08:02.364 ************************************ 00:08:02.364 18:35:02 unittest -- unit/unittest.sh@284 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:02.364 18:35:02 unittest -- unit/unittest.sh@285 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:02.364 18:35:02 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:02.364 18:35:02 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.364 18:35:02 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:02.364 ************************************ 00:08:02.364 START TEST unittest_vhost 00:08:02.364 ************************************ 00:08:02.364 18:35:02 unittest.unittest_vhost -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:02.364 00:08:02.364 00:08:02.364 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.364 http://cunit.sourceforge.net/ 00:08:02.364 00:08:02.364 00:08:02.365 Suite: vhost_suite 00:08:02.365 Test: desc_to_iov_test ...[2024-07-25 18:35:02.797407] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:08:02.365 passed 00:08:02.365 Test: create_controller_test ...[2024-07-25 18:35:02.805982] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:02.365 [2024-07-25 18:35:02.806562] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:08:02.365 [2024-07-25 18:35:02.807122] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is 
outside of core mask(=f) 00:08:02.365 [2024-07-25 18:35:02.807551] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:08:02.365 [2024-07-25 18:35:02.807917] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:08:02.365 [2024-07-25 18:35:02.808933] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1781:vhost_user_dev_init: *ERROR*: Resulting socket path for controller is too long: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 00:08:02.365 [2024-07-25 18:35:02.811414] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 137:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:08:02.365 passed 00:08:02.365 Test: session_find_by_vid_test ...passed 00:08:02.365 Test: remove_controller_test ...[2024-07-25 18:35:02.816433] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1866:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:08:02.365 passed 00:08:02.365 Test: vq_avail_ring_get_test ...passed 00:08:02.365 Test: vq_packed_ring_test ...passed 00:08:02.365 Test: vhost_blk_construct_test ...passed 00:08:02.365 00:08:02.365 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.365 suites 1 1 n/a 0 0 00:08:02.365 tests 7 7 7 0 0 00:08:02.365 asserts 147 147 147 0 n/a 00:08:02.365 00:08:02.365 Elapsed time = 0.020 seconds 00:08:02.365 00:08:02.365 real 0m0.077s 00:08:02.365 user 0m0.040s 00:08:02.365 sys 0m0.032s 00:08:02.365 18:35:02 unittest.unittest_vhost -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.365 18:35:02 unittest.unittest_vhost -- common/autotest_common.sh@10 -- # set +x 00:08:02.365 ************************************ 00:08:02.365 END TEST unittest_vhost 00:08:02.365 ************************************ 00:08:02.365 18:35:02 unittest -- unit/unittest.sh@287 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:02.365 18:35:02 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:02.365 18:35:02 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.365 18:35:02 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:02.365 ************************************ 00:08:02.365 START TEST unittest_dma 00:08:02.365 ************************************ 00:08:02.365 18:35:02 unittest.unittest_dma -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:02.365 00:08:02.365 00:08:02.365 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.365 http://cunit.sourceforge.net/ 00:08:02.365 00:08:02.365 00:08:02.365 Suite: dma_suite 00:08:02.365 Test: test_dma ...[2024-07-25 18:35:02.933208] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:08:02.365 passed 00:08:02.365 00:08:02.365 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.365 suites 1 1 n/a 0 0 00:08:02.365 tests 1 1 1 0 0 
00:08:02.365 asserts 54 54 54 0 n/a 00:08:02.365 00:08:02.365 Elapsed time = 0.001 seconds 00:08:02.625 00:08:02.625 real 0m0.040s 00:08:02.625 user 0m0.016s 00:08:02.625 sys 0m0.024s 00:08:02.625 18:35:02 unittest.unittest_dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.625 18:35:02 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:08:02.625 ************************************ 00:08:02.625 END TEST unittest_dma 00:08:02.625 ************************************ 00:08:02.625 18:35:03 unittest -- unit/unittest.sh@289 -- # run_test unittest_init unittest_init 00:08:02.625 18:35:03 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:02.625 18:35:03 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.625 18:35:03 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:02.625 ************************************ 00:08:02.625 START TEST unittest_init 00:08:02.625 ************************************ 00:08:02.625 18:35:03 unittest.unittest_init -- common/autotest_common.sh@1125 -- # unittest_init 00:08:02.625 18:35:03 unittest.unittest_init -- unit/unittest.sh@150 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:08:02.625 00:08:02.625 00:08:02.625 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.625 http://cunit.sourceforge.net/ 00:08:02.625 00:08:02.625 00:08:02.625 Suite: subsystem_suite 00:08:02.625 Test: subsystem_sort_test_depends_on_single ...passed 00:08:02.625 Test: subsystem_sort_test_depends_on_multiple ...passed 00:08:02.625 Test: subsystem_sort_test_missing_dependency ...[2024-07-25 18:35:03.054728] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 196:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:08:02.625 [2024-07-25 18:35:03.055067] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:08:02.625 passed 00:08:02.625 00:08:02.625 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.625 suites 1 1 n/a 0 0 00:08:02.625 tests 3 3 3 0 0 00:08:02.625 asserts 20 20 20 0 n/a 00:08:02.625 00:08:02.625 Elapsed time = 0.001 seconds 00:08:02.625 00:08:02.625 real 0m0.048s 00:08:02.625 user 0m0.030s 00:08:02.625 sys 0m0.018s 00:08:02.625 18:35:03 unittest.unittest_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.625 18:35:03 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:08:02.625 ************************************ 00:08:02.625 END TEST unittest_init 00:08:02.625 ************************************ 00:08:02.625 18:35:03 unittest -- unit/unittest.sh@290 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:08:02.625 18:35:03 unittest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:02.625 18:35:03 unittest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.625 18:35:03 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:02.625 ************************************ 00:08:02.625 START TEST unittest_keyring 00:08:02.625 ************************************ 00:08:02.625 18:35:03 unittest.unittest_keyring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:08:02.625 00:08:02.625 00:08:02.625 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.625 http://cunit.sourceforge.net/ 00:08:02.625 00:08:02.625 00:08:02.625 Suite: keyring 00:08:02.625 Test: test_keyring_add_remove ...[2024-07-25 
18:35:03.167879] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:08:02.625 [2024-07-25 18:35:03.168241] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:08:02.625 passed 00:08:02.626 Test: test_keyring_get_put ...passed 00:08:02.626 00:08:02.626 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.626 suites 1 1 n/a 0 0 00:08:02.626 tests 2 2 2 0 0 00:08:02.626 asserts 44 44 44 0 n/a 00:08:02.626 00:08:02.626 Elapsed time = 0.001 seconds 00:08:02.626 [2024-07-25 18:35:03.168339] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:08:02.626 00:08:02.626 real 0m0.043s 00:08:02.626 user 0m0.029s 00:08:02.626 sys 0m0.014s 00:08:02.626 18:35:03 unittest.unittest_keyring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.626 18:35:03 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:08:02.626 ************************************ 00:08:02.626 END TEST unittest_keyring 00:08:02.626 ************************************ 00:08:02.887 18:35:03 unittest -- unit/unittest.sh@292 -- # '[' yes = yes ']' 00:08:02.887 18:35:03 unittest -- unit/unittest.sh@292 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:08:02.887 18:35:03 unittest -- unit/unittest.sh@293 -- # hostname 00:08:02.887 18:35:03 unittest -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:03.147 geninfo: WARNING: invalid characters removed from testname! 
00:08:29.732 18:35:28 unittest -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:08:32.269 18:35:32 unittest -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:34.802 18:35:34 unittest -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:37.336 18:35:37 unittest -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:39.870 18:35:39 unittest -- unit/unittest.sh@298 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:41.775 18:35:42 unittest -- unit/unittest.sh@299 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:44.310 18:35:44 unittest -- unit/unittest.sh@300 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:46.287 18:35:46 unittest -- unit/unittest.sh@301 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:46.287 18:35:46 unittest -- unit/unittest.sh@302 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:08:47.224 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 
00:08:47.224 Found 322 entries. 00:08:47.224 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:08:47.224 Writing .css and .png files. 00:08:47.224 Generating output. 00:08:47.224 Processing file include/linux/virtio_ring.h 00:08:47.224 Processing file include/spdk/bdev_module.h 00:08:47.224 Processing file include/spdk/nvmf_transport.h 00:08:47.224 Processing file include/spdk/util.h 00:08:47.224 Processing file include/spdk/nvme_spec.h 00:08:47.224 Processing file include/spdk/histogram_data.h 00:08:47.224 Processing file include/spdk/trace.h 00:08:47.224 Processing file include/spdk/mmio.h 00:08:47.224 Processing file include/spdk/thread.h 00:08:47.224 Processing file include/spdk/base64.h 00:08:47.224 Processing file include/spdk/endian.h 00:08:47.224 Processing file include/spdk/nvme.h 00:08:47.483 Processing file include/spdk_internal/nvme_tcp.h 00:08:47.483 Processing file include/spdk_internal/sgl.h 00:08:47.483 Processing file include/spdk_internal/utf.h 00:08:47.483 Processing file include/spdk_internal/sock.h 00:08:47.483 Processing file include/spdk_internal/virtio.h 00:08:47.483 Processing file include/spdk_internal/rdma_utils.h 00:08:47.483 Processing file lib/accel/accel_sw.c 00:08:47.483 Processing file lib/accel/accel_rpc.c 00:08:47.483 Processing file lib/accel/accel.c 00:08:47.743 Processing file lib/bdev/bdev_rpc.c 00:08:47.743 Processing file lib/bdev/bdev.c 00:08:47.743 Processing file lib/bdev/bdev_zone.c 00:08:47.743 Processing file lib/bdev/part.c 00:08:47.743 Processing file lib/bdev/scsi_nvme.c 00:08:48.002 Processing file lib/blob/blobstore.h 00:08:48.002 Processing file lib/blob/zeroes.c 00:08:48.002 Processing file lib/blob/blob_bs_dev.c 00:08:48.002 Processing file lib/blob/blobstore.c 00:08:48.002 Processing file lib/blob/request.c 00:08:48.002 Processing file lib/blobfs/tree.c 00:08:48.002 Processing file lib/blobfs/blobfs.c 00:08:48.002 Processing file lib/conf/conf.c 00:08:48.261 Processing file lib/dma/dma.c 00:08:48.520 Processing file lib/env_dpdk/threads.c 00:08:48.520 Processing file lib/env_dpdk/sigbus_handler.c 00:08:48.520 Processing file lib/env_dpdk/pci_vmd.c 00:08:48.520 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:08:48.520 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:08:48.520 Processing file lib/env_dpdk/pci_dpdk.c 00:08:48.520 Processing file lib/env_dpdk/pci.c 00:08:48.520 Processing file lib/env_dpdk/memory.c 00:08:48.520 Processing file lib/env_dpdk/env.c 00:08:48.520 Processing file lib/env_dpdk/pci_virtio.c 00:08:48.520 Processing file lib/env_dpdk/pci_idxd.c 00:08:48.520 Processing file lib/env_dpdk/pci_ioat.c 00:08:48.520 Processing file lib/env_dpdk/init.c 00:08:48.520 Processing file lib/env_dpdk/pci_event.c 00:08:48.520 Processing file lib/event/log_rpc.c 00:08:48.520 Processing file lib/event/app_rpc.c 00:08:48.520 Processing file lib/event/app.c 00:08:48.520 Processing file lib/event/reactor.c 00:08:48.520 Processing file lib/event/scheduler_static.c 00:08:49.089 Processing file lib/ftl/ftl_nv_cache_io.h 00:08:49.089 Processing file lib/ftl/ftl_l2p_cache.c 00:08:49.089 Processing file lib/ftl/ftl_init.c 00:08:49.089 Processing file lib/ftl/ftl_debug.c 00:08:49.089 Processing file lib/ftl/ftl_reloc.c 00:08:49.089 Processing file lib/ftl/ftl_core.h 00:08:49.089 Processing file lib/ftl/ftl_writer.h 00:08:49.089 Processing file lib/ftl/ftl_core.c 00:08:49.089 Processing file lib/ftl/ftl_debug.h 00:08:49.089 Processing file lib/ftl/ftl_rq.c 00:08:49.089 Processing file lib/ftl/ftl_layout.c 00:08:49.089 Processing 
file lib/ftl/ftl_sb.c 00:08:49.089 Processing file lib/ftl/ftl_l2p.c 00:08:49.089 Processing file lib/ftl/ftl_writer.c 00:08:49.089 Processing file lib/ftl/ftl_nv_cache.h 00:08:49.089 Processing file lib/ftl/ftl_p2l.c 00:08:49.089 Processing file lib/ftl/ftl_l2p_flat.c 00:08:49.089 Processing file lib/ftl/ftl_trace.c 00:08:49.089 Processing file lib/ftl/ftl_io.h 00:08:49.089 Processing file lib/ftl/ftl_band_ops.c 00:08:49.089 Processing file lib/ftl/ftl_band.h 00:08:49.089 Processing file lib/ftl/ftl_band.c 00:08:49.089 Processing file lib/ftl/ftl_nv_cache.c 00:08:49.089 Processing file lib/ftl/ftl_io.c 00:08:49.089 Processing file lib/ftl/base/ftl_base_dev.c 00:08:49.089 Processing file lib/ftl/base/ftl_base_bdev.c 00:08:49.348 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:08:49.348 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:08:49.348 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:08:49.348 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:08:49.348 Processing file lib/ftl/mngt/ftl_mngt.c 00:08:49.348 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:08:49.348 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:08:49.348 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:08:49.348 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:08:49.348 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:08:49.348 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:08:49.348 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:08:49.348 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:08:49.348 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:08:49.348 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:08:49.348 Processing file lib/ftl/upgrade/ftl_trim_upgrade.c 00:08:49.348 Processing file lib/ftl/upgrade/ftl_chunk_upgrade.c 00:08:49.348 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:08:49.348 Processing file lib/ftl/upgrade/ftl_p2l_upgrade.c 00:08:49.348 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:08:49.348 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:08:49.348 Processing file lib/ftl/upgrade/ftl_band_upgrade.c 00:08:49.348 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:08:49.607 Processing file lib/ftl/utils/ftl_property.c 00:08:49.607 Processing file lib/ftl/utils/ftl_property.h 00:08:49.607 Processing file lib/ftl/utils/ftl_md.c 00:08:49.607 Processing file lib/ftl/utils/ftl_df.h 00:08:49.607 Processing file lib/ftl/utils/ftl_addr_utils.h 00:08:49.607 Processing file lib/ftl/utils/ftl_conf.c 00:08:49.607 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:08:49.607 Processing file lib/ftl/utils/ftl_bitmap.c 00:08:49.607 Processing file lib/ftl/utils/ftl_mempool.c 00:08:49.607 Processing file lib/idxd/idxd_user.c 00:08:49.607 Processing file lib/idxd/idxd.c 00:08:49.607 Processing file lib/idxd/idxd_internal.h 00:08:49.866 Processing file lib/init/rpc.c 00:08:49.866 Processing file lib/init/subsystem_rpc.c 00:08:49.866 Processing file lib/init/subsystem.c 00:08:49.866 Processing file lib/init/json_config.c 00:08:49.866 Processing file lib/ioat/ioat_internal.h 00:08:49.866 Processing file lib/ioat/ioat.c 00:08:50.126 Processing file lib/iscsi/iscsi_subsystem.c 00:08:50.126 Processing file lib/iscsi/md5.c 00:08:50.126 Processing file lib/iscsi/task.h 00:08:50.126 Processing file lib/iscsi/conn.c 00:08:50.126 Processing file lib/iscsi/task.c 00:08:50.126 Processing file lib/iscsi/init_grp.c 00:08:50.126 Processing file lib/iscsi/iscsi.c 00:08:50.126 Processing file lib/iscsi/param.c 00:08:50.126 Processing file lib/iscsi/tgt_node.c 00:08:50.126 Processing file lib/iscsi/iscsi_rpc.c 
00:08:50.126 Processing file lib/iscsi/portal_grp.c 00:08:50.126 Processing file lib/iscsi/iscsi.h 00:08:50.385 Processing file lib/json/json_parse.c 00:08:50.385 Processing file lib/json/json_util.c 00:08:50.385 Processing file lib/json/json_write.c 00:08:50.385 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:08:50.385 Processing file lib/jsonrpc/jsonrpc_server.c 00:08:50.385 Processing file lib/jsonrpc/jsonrpc_client.c 00:08:50.385 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:08:50.385 Processing file lib/keyring/keyring_rpc.c 00:08:50.385 Processing file lib/keyring/keyring.c 00:08:50.644 Processing file lib/log/log.c 00:08:50.644 Processing file lib/log/log_deprecated.c 00:08:50.644 Processing file lib/log/log_flags.c 00:08:50.644 Processing file lib/lvol/lvol.c 00:08:50.644 Processing file lib/nbd/nbd_rpc.c 00:08:50.644 Processing file lib/nbd/nbd.c 00:08:50.903 Processing file lib/notify/notify_rpc.c 00:08:50.903 Processing file lib/notify/notify.c 00:08:51.470 Processing file lib/nvme/nvme_fabric.c 00:08:51.470 Processing file lib/nvme/nvme_ctrlr.c 00:08:51.470 Processing file lib/nvme/nvme_rdma.c 00:08:51.470 Processing file lib/nvme/nvme_poll_group.c 00:08:51.470 Processing file lib/nvme/nvme_opal.c 00:08:51.470 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:08:51.470 Processing file lib/nvme/nvme.c 00:08:51.470 Processing file lib/nvme/nvme_pcie.c 00:08:51.470 Processing file lib/nvme/nvme_transport.c 00:08:51.470 Processing file lib/nvme/nvme_tcp.c 00:08:51.470 Processing file lib/nvme/nvme_ns.c 00:08:51.470 Processing file lib/nvme/nvme_ns_cmd.c 00:08:51.470 Processing file lib/nvme/nvme_auth.c 00:08:51.470 Processing file lib/nvme/nvme_cuse.c 00:08:51.470 Processing file lib/nvme/nvme_discovery.c 00:08:51.470 Processing file lib/nvme/nvme_quirks.c 00:08:51.470 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:08:51.470 Processing file lib/nvme/nvme_pcie_common.c 00:08:51.470 Processing file lib/nvme/nvme_internal.h 00:08:51.470 Processing file lib/nvme/nvme_qpair.c 00:08:51.470 Processing file lib/nvme/nvme_pcie_internal.h 00:08:51.470 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:08:51.470 Processing file lib/nvme/nvme_zns.c 00:08:51.470 Processing file lib/nvme/nvme_io_msg.c 00:08:52.039 Processing file lib/nvmf/subsystem.c 00:08:52.039 Processing file lib/nvmf/transport.c 00:08:52.039 Processing file lib/nvmf/nvmf_internal.h 00:08:52.039 Processing file lib/nvmf/rdma.c 00:08:52.039 Processing file lib/nvmf/nvmf_rpc.c 00:08:52.039 Processing file lib/nvmf/auth.c 00:08:52.039 Processing file lib/nvmf/ctrlr_discovery.c 00:08:52.039 Processing file lib/nvmf/ctrlr.c 00:08:52.039 Processing file lib/nvmf/tcp.c 00:08:52.039 Processing file lib/nvmf/nvmf.c 00:08:52.039 Processing file lib/nvmf/ctrlr_bdev.c 00:08:52.039 Processing file lib/rdma_provider/rdma_provider_verbs.c 00:08:52.039 Processing file lib/rdma_provider/common.c 00:08:52.298 Processing file lib/rdma_utils/rdma_utils.c 00:08:52.298 Processing file lib/rpc/rpc.c 00:08:52.557 Processing file lib/scsi/scsi.c 00:08:52.557 Processing file lib/scsi/scsi_pr.c 00:08:52.557 Processing file lib/scsi/scsi_rpc.c 00:08:52.557 Processing file lib/scsi/scsi_bdev.c 00:08:52.557 Processing file lib/scsi/lun.c 00:08:52.557 Processing file lib/scsi/task.c 00:08:52.557 Processing file lib/scsi/dev.c 00:08:52.557 Processing file lib/scsi/port.c 00:08:52.557 Processing file lib/sock/sock_rpc.c 00:08:52.557 Processing file lib/sock/sock.c 00:08:52.557 Processing file lib/thread/iobuf.c 00:08:52.557 Processing file 
lib/thread/thread.c 00:08:52.816 Processing file lib/trace/trace_rpc.c 00:08:52.816 Processing file lib/trace/trace.c 00:08:52.816 Processing file lib/trace/trace_flags.c 00:08:52.816 Processing file lib/trace_parser/trace.cpp 00:08:53.073 Processing file lib/ut/ut.c 00:08:53.073 Processing file lib/ut_mock/mock.c 00:08:53.331 Processing file lib/util/bit_array.c 00:08:53.331 Processing file lib/util/net.c 00:08:53.331 Processing file lib/util/math.c 00:08:53.331 Processing file lib/util/crc16.c 00:08:53.331 Processing file lib/util/dif.c 00:08:53.331 Processing file lib/util/crc32c.c 00:08:53.331 Processing file lib/util/cpuset.c 00:08:53.331 Processing file lib/util/base64.c 00:08:53.331 Processing file lib/util/crc32_ieee.c 00:08:53.331 Processing file lib/util/fd.c 00:08:53.331 Processing file lib/util/crc64.c 00:08:53.331 Processing file lib/util/uuid.c 00:08:53.331 Processing file lib/util/crc32.c 00:08:53.331 Processing file lib/util/zipf.c 00:08:53.331 Processing file lib/util/string.c 00:08:53.331 Processing file lib/util/strerror_tls.c 00:08:53.331 Processing file lib/util/pipe.c 00:08:53.331 Processing file lib/util/file.c 00:08:53.331 Processing file lib/util/xor.c 00:08:53.331 Processing file lib/util/iov.c 00:08:53.331 Processing file lib/util/fd_group.c 00:08:53.331 Processing file lib/util/hexlify.c 00:08:53.589 Processing file lib/vfio_user/host/vfio_user.c 00:08:53.589 Processing file lib/vfio_user/host/vfio_user_pci.c 00:08:53.847 Processing file lib/vhost/vhost_rpc.c 00:08:53.847 Processing file lib/vhost/vhost_scsi.c 00:08:53.847 Processing file lib/vhost/rte_vhost_user.c 00:08:53.847 Processing file lib/vhost/vhost_blk.c 00:08:53.847 Processing file lib/vhost/vhost.c 00:08:53.847 Processing file lib/vhost/vhost_internal.h 00:08:53.847 Processing file lib/virtio/virtio_vhost_user.c 00:08:53.847 Processing file lib/virtio/virtio_pci.c 00:08:53.847 Processing file lib/virtio/virtio.c 00:08:53.847 Processing file lib/virtio/virtio_vfio_user.c 00:08:54.105 Processing file lib/vmd/led.c 00:08:54.105 Processing file lib/vmd/vmd.c 00:08:54.105 Processing file module/accel/dsa/accel_dsa.c 00:08:54.105 Processing file module/accel/dsa/accel_dsa_rpc.c 00:08:54.105 Processing file module/accel/error/accel_error.c 00:08:54.105 Processing file module/accel/error/accel_error_rpc.c 00:08:54.364 Processing file module/accel/iaa/accel_iaa_rpc.c 00:08:54.364 Processing file module/accel/iaa/accel_iaa.c 00:08:54.364 Processing file module/accel/ioat/accel_ioat_rpc.c 00:08:54.364 Processing file module/accel/ioat/accel_ioat.c 00:08:54.364 Processing file module/bdev/aio/bdev_aio_rpc.c 00:08:54.364 Processing file module/bdev/aio/bdev_aio.c 00:08:54.622 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:08:54.622 Processing file module/bdev/delay/vbdev_delay.c 00:08:54.622 Processing file module/bdev/error/vbdev_error_rpc.c 00:08:54.622 Processing file module/bdev/error/vbdev_error.c 00:08:54.622 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:08:54.622 Processing file module/bdev/ftl/bdev_ftl.c 00:08:54.881 Processing file module/bdev/gpt/gpt.h 00:08:54.881 Processing file module/bdev/gpt/vbdev_gpt.c 00:08:54.881 Processing file module/bdev/gpt/gpt.c 00:08:54.881 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:08:54.881 Processing file module/bdev/iscsi/bdev_iscsi.c 00:08:55.141 Processing file module/bdev/lvol/vbdev_lvol.c 00:08:55.141 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:08:55.141 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:08:55.141 Processing 
file module/bdev/malloc/bdev_malloc.c 00:08:55.141 Processing file module/bdev/null/bdev_null_rpc.c 00:08:55.141 Processing file module/bdev/null/bdev_null.c 00:08:55.400 Processing file module/bdev/nvme/vbdev_opal.c 00:08:55.400 Processing file module/bdev/nvme/bdev_nvme.c 00:08:55.400 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:08:55.400 Processing file module/bdev/nvme/nvme_rpc.c 00:08:55.400 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:08:55.400 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:08:55.400 Processing file module/bdev/nvme/bdev_mdns_client.c 00:08:55.659 Processing file module/bdev/passthru/vbdev_passthru.c 00:08:55.659 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:08:55.918 Processing file module/bdev/raid/bdev_raid.h 00:08:55.918 Processing file module/bdev/raid/raid1.c 00:08:55.918 Processing file module/bdev/raid/bdev_raid.c 00:08:55.918 Processing file module/bdev/raid/bdev_raid_sb.c 00:08:55.918 Processing file module/bdev/raid/raid5f.c 00:08:55.918 Processing file module/bdev/raid/raid0.c 00:08:55.918 Processing file module/bdev/raid/bdev_raid_rpc.c 00:08:55.918 Processing file module/bdev/raid/concat.c 00:08:55.918 Processing file module/bdev/split/vbdev_split_rpc.c 00:08:55.918 Processing file module/bdev/split/vbdev_split.c 00:08:56.177 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:08:56.177 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:08:56.177 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:08:56.177 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:08:56.177 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:08:56.177 Processing file module/blob/bdev/blob_bdev.c 00:08:56.436 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:08:56.436 Processing file module/blobfs/bdev/blobfs_bdev.c 00:08:56.436 Processing file module/env_dpdk/env_dpdk_rpc.c 00:08:56.436 Processing file module/event/subsystems/accel/accel.c 00:08:56.436 Processing file module/event/subsystems/bdev/bdev.c 00:08:56.695 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:08:56.695 Processing file module/event/subsystems/iobuf/iobuf.c 00:08:56.695 Processing file module/event/subsystems/iscsi/iscsi.c 00:08:56.695 Processing file module/event/subsystems/keyring/keyring.c 00:08:56.954 Processing file module/event/subsystems/nbd/nbd.c 00:08:56.954 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:08:56.954 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:08:56.954 Processing file module/event/subsystems/scheduler/scheduler.c 00:08:57.213 Processing file module/event/subsystems/scsi/scsi.c 00:08:57.213 Processing file module/event/subsystems/sock/sock.c 00:08:57.213 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:08:57.213 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:08:57.472 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:08:57.472 Processing file module/event/subsystems/vmd/vmd.c 00:08:57.472 Processing file module/keyring/file/keyring.c 00:08:57.472 Processing file module/keyring/file/keyring_rpc.c 00:08:57.472 Processing file module/keyring/linux/keyring.c 00:08:57.472 Processing file module/keyring/linux/keyring_rpc.c 00:08:57.731 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:08:57.731 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:08:57.731 Processing file module/scheduler/gscheduler/gscheduler.c 00:08:57.990 Processing file module/sock/posix/posix.c 00:08:57.990 Writing directory view 
page. 00:08:57.990 Overall coverage rate: 00:08:57.990 lines......: 38.7% (41100 of 106166 lines) 00:08:57.990 functions..: 42.4% (3741 of 8830 functions) 00:08:57.990 00:08:57.990 00:08:57.990 ===================== 00:08:57.990 All unit tests passed 00:08:57.990 ===================== 00:08:57.990 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:08:57.990 18:35:58 unittest -- unit/unittest.sh@305 -- # set +x 00:08:57.990 00:08:57.990 00:08:57.990 00:08:57.990 real 3m33.798s 00:08:57.990 user 2m58.998s 00:08:57.990 sys 0m24.469s 00:08:57.990 18:35:58 unittest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.991 ************************************ 00:08:57.991 END TEST unittest 00:08:57.991 18:35:58 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:57.991 ************************************ 00:08:57.991 18:35:58 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:08:57.991 18:35:58 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:08:57.991 18:35:58 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:08:57.991 18:35:58 -- spdk/autotest.sh@162 -- # timing_enter lib 00:08:57.991 18:35:58 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:57.991 18:35:58 -- common/autotest_common.sh@10 -- # set +x 00:08:57.991 18:35:58 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:08:57.991 18:35:58 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:57.991 18:35:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:57.991 18:35:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.991 18:35:58 -- common/autotest_common.sh@10 -- # set +x 00:08:57.991 ************************************ 00:08:57.991 START TEST env 00:08:57.991 ************************************ 00:08:57.991 18:35:58 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:57.991 * Looking for test storage... 
00:08:58.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:58.250 18:35:58 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:58.250 18:35:58 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:58.250 18:35:58 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.250 18:35:58 env -- common/autotest_common.sh@10 -- # set +x 00:08:58.250 ************************************ 00:08:58.250 START TEST env_memory 00:08:58.250 ************************************ 00:08:58.250 18:35:58 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:58.250 00:08:58.250 00:08:58.250 CUnit - A unit testing framework for C - Version 2.1-3 00:08:58.250 http://cunit.sourceforge.net/ 00:08:58.250 00:08:58.250 00:08:58.250 Suite: memory 00:08:58.250 Test: alloc and free memory map ...[2024-07-25 18:35:58.651004] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:58.250 passed 00:08:58.250 Test: mem map translation ...[2024-07-25 18:35:58.705536] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:58.250 [2024-07-25 18:35:58.705667] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:58.250 [2024-07-25 18:35:58.705804] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:58.250 [2024-07-25 18:35:58.705903] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:58.250 passed 00:08:58.250 Test: mem map registration ...[2024-07-25 18:35:58.796056] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:08:58.250 [2024-07-25 18:35:58.796174] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:08:58.511 passed 00:08:58.511 Test: mem map adjacent registrations ...passed 00:08:58.511 00:08:58.511 Run Summary: Type Total Ran Passed Failed Inactive 00:08:58.511 suites 1 1 n/a 0 0 00:08:58.511 tests 4 4 4 0 0 00:08:58.511 asserts 152 152 152 0 n/a 00:08:58.511 00:08:58.511 Elapsed time = 0.316 seconds 00:08:58.511 00:08:58.511 real 0m0.358s 00:08:58.511 user 0m0.342s 00:08:58.511 sys 0m0.016s 00:08:58.511 18:35:58 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.511 18:35:58 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:58.511 ************************************ 00:08:58.511 END TEST env_memory 00:08:58.511 ************************************ 00:08:58.511 18:35:58 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:58.511 18:35:58 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:58.511 18:35:58 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.511 18:35:58 env -- common/autotest_common.sh@10 -- # set +x 00:08:58.511 ************************************ 00:08:58.511 START TEST env_vtophys 00:08:58.511 ************************************ 00:08:58.511 18:35:59 
env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:58.511 EAL: lib.eal log level changed from notice to debug 00:08:58.511 EAL: Detected lcore 0 as core 0 on socket 0 00:08:58.511 EAL: Detected lcore 1 as core 0 on socket 0 00:08:58.511 EAL: Detected lcore 2 as core 0 on socket 0 00:08:58.511 EAL: Detected lcore 3 as core 0 on socket 0 00:08:58.511 EAL: Detected lcore 4 as core 0 on socket 0 00:08:58.511 EAL: Detected lcore 5 as core 0 on socket 0 00:08:58.511 EAL: Detected lcore 6 as core 0 on socket 0 00:08:58.511 EAL: Detected lcore 7 as core 0 on socket 0 00:08:58.511 EAL: Detected lcore 8 as core 0 on socket 0 00:08:58.511 EAL: Detected lcore 9 as core 0 on socket 0 00:08:58.511 EAL: Maximum logical cores by configuration: 128 00:08:58.511 EAL: Detected CPU lcores: 10 00:08:58.511 EAL: Detected NUMA nodes: 1 00:08:58.511 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:58.511 EAL: Checking presence of .so 'librte_eal.so.24' 00:08:58.511 EAL: Checking presence of .so 'librte_eal.so' 00:08:58.511 EAL: Detected static linkage of DPDK 00:08:58.778 EAL: No shared files mode enabled, IPC will be disabled 00:08:58.778 EAL: Selected IOVA mode 'PA' 00:08:58.778 EAL: Probing VFIO support... 00:08:58.778 EAL: IOMMU type 1 (Type 1) is supported 00:08:58.778 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:58.778 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:58.778 EAL: VFIO support initialized 00:08:58.778 EAL: Ask a virtual area of 0x2e000 bytes 00:08:58.778 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:58.778 EAL: Setting up physically contiguous memory... 00:08:58.778 EAL: Setting maximum number of open files to 1048576 00:08:58.778 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:58.778 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:58.778 EAL: Ask a virtual area of 0x61000 bytes 00:08:58.778 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:58.778 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:58.778 EAL: Ask a virtual area of 0x400000000 bytes 00:08:58.778 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:58.778 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:58.778 EAL: Ask a virtual area of 0x61000 bytes 00:08:58.778 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:58.778 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:58.778 EAL: Ask a virtual area of 0x400000000 bytes 00:08:58.778 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:58.778 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:58.778 EAL: Ask a virtual area of 0x61000 bytes 00:08:58.778 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:58.778 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:58.778 EAL: Ask a virtual area of 0x400000000 bytes 00:08:58.778 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:58.778 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:58.778 EAL: Ask a virtual area of 0x61000 bytes 00:08:58.778 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:58.778 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:58.778 EAL: Ask a virtual area of 0x400000000 bytes 00:08:58.778 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:58.778 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 
00:08:58.778 EAL: Hugepages will be freed exactly as allocated. 00:08:58.778 EAL: No shared files mode enabled, IPC is disabled 00:08:58.778 EAL: No shared files mode enabled, IPC is disabled 00:08:58.778 EAL: TSC frequency is ~2100000 KHz 00:08:58.778 EAL: Main lcore 0 is ready (tid=7faa9e0b3a80;cpuset=[0]) 00:08:58.778 EAL: Trying to obtain current memory policy. 00:08:58.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:58.778 EAL: Restoring previous memory policy: 0 00:08:58.778 EAL: request: mp_malloc_sync 00:08:58.778 EAL: No shared files mode enabled, IPC is disabled 00:08:58.778 EAL: Heap on socket 0 was expanded by 2MB 00:08:58.778 EAL: No shared files mode enabled, IPC is disabled 00:08:58.778 EAL: Mem event callback 'spdk:(nil)' registered 00:08:58.778 00:08:58.778 00:08:58.778 CUnit - A unit testing framework for C - Version 2.1-3 00:08:58.778 http://cunit.sourceforge.net/ 00:08:58.778 00:08:58.778 00:08:58.778 Suite: components_suite 00:08:59.347 Test: vtophys_malloc_test ...passed 00:08:59.347 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:59.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:59.347 EAL: Restoring previous memory policy: 0 00:08:59.347 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.347 EAL: request: mp_malloc_sync 00:08:59.347 EAL: No shared files mode enabled, IPC is disabled 00:08:59.347 EAL: Heap on socket 0 was expanded by 4MB 00:08:59.347 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.347 EAL: request: mp_malloc_sync 00:08:59.347 EAL: No shared files mode enabled, IPC is disabled 00:08:59.347 EAL: Heap on socket 0 was shrunk by 4MB 00:08:59.347 EAL: Trying to obtain current memory policy. 00:08:59.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:59.347 EAL: Restoring previous memory policy: 0 00:08:59.347 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.347 EAL: request: mp_malloc_sync 00:08:59.347 EAL: No shared files mode enabled, IPC is disabled 00:08:59.347 EAL: Heap on socket 0 was expanded by 6MB 00:08:59.347 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.347 EAL: request: mp_malloc_sync 00:08:59.347 EAL: No shared files mode enabled, IPC is disabled 00:08:59.347 EAL: Heap on socket 0 was shrunk by 6MB 00:08:59.347 EAL: Trying to obtain current memory policy. 00:08:59.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:59.347 EAL: Restoring previous memory policy: 0 00:08:59.347 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.347 EAL: request: mp_malloc_sync 00:08:59.347 EAL: No shared files mode enabled, IPC is disabled 00:08:59.347 EAL: Heap on socket 0 was expanded by 10MB 00:08:59.347 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.347 EAL: request: mp_malloc_sync 00:08:59.347 EAL: No shared files mode enabled, IPC is disabled 00:08:59.347 EAL: Heap on socket 0 was shrunk by 10MB 00:08:59.347 EAL: Trying to obtain current memory policy. 00:08:59.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:59.347 EAL: Restoring previous memory policy: 0 00:08:59.347 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.347 EAL: request: mp_malloc_sync 00:08:59.347 EAL: No shared files mode enabled, IPC is disabled 00:08:59.347 EAL: Heap on socket 0 was expanded by 18MB 00:08:59.347 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.347 EAL: request: mp_malloc_sync 00:08:59.347 EAL: No shared files mode enabled, IPC is disabled 00:08:59.347 EAL: Heap on socket 0 was shrunk by 18MB 00:08:59.606 EAL: Trying to obtain current memory policy. 
00:08:59.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:59.606 EAL: Restoring previous memory policy: 0 00:08:59.606 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.606 EAL: request: mp_malloc_sync 00:08:59.606 EAL: No shared files mode enabled, IPC is disabled 00:08:59.606 EAL: Heap on socket 0 was expanded by 34MB 00:08:59.606 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.606 EAL: request: mp_malloc_sync 00:08:59.606 EAL: No shared files mode enabled, IPC is disabled 00:08:59.606 EAL: Heap on socket 0 was shrunk by 34MB 00:08:59.606 EAL: Trying to obtain current memory policy. 00:08:59.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:59.606 EAL: Restoring previous memory policy: 0 00:08:59.606 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.606 EAL: request: mp_malloc_sync 00:08:59.606 EAL: No shared files mode enabled, IPC is disabled 00:08:59.606 EAL: Heap on socket 0 was expanded by 66MB 00:08:59.864 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.864 EAL: request: mp_malloc_sync 00:08:59.864 EAL: No shared files mode enabled, IPC is disabled 00:08:59.864 EAL: Heap on socket 0 was shrunk by 66MB 00:08:59.864 EAL: Trying to obtain current memory policy. 00:08:59.864 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:59.864 EAL: Restoring previous memory policy: 0 00:08:59.864 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.864 EAL: request: mp_malloc_sync 00:08:59.864 EAL: No shared files mode enabled, IPC is disabled 00:08:59.864 EAL: Heap on socket 0 was expanded by 130MB 00:09:00.123 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.123 EAL: request: mp_malloc_sync 00:09:00.123 EAL: No shared files mode enabled, IPC is disabled 00:09:00.123 EAL: Heap on socket 0 was shrunk by 130MB 00:09:00.382 EAL: Trying to obtain current memory policy. 00:09:00.382 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:00.641 EAL: Restoring previous memory policy: 0 00:09:00.641 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.641 EAL: request: mp_malloc_sync 00:09:00.641 EAL: No shared files mode enabled, IPC is disabled 00:09:00.641 EAL: Heap on socket 0 was expanded by 258MB 00:09:00.899 EAL: Calling mem event callback 'spdk:(nil)' 00:09:01.158 EAL: request: mp_malloc_sync 00:09:01.158 EAL: No shared files mode enabled, IPC is disabled 00:09:01.158 EAL: Heap on socket 0 was shrunk by 258MB 00:09:01.416 EAL: Trying to obtain current memory policy. 00:09:01.416 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:01.675 EAL: Restoring previous memory policy: 0 00:09:01.675 EAL: Calling mem event callback 'spdk:(nil)' 00:09:01.675 EAL: request: mp_malloc_sync 00:09:01.675 EAL: No shared files mode enabled, IPC is disabled 00:09:01.675 EAL: Heap on socket 0 was expanded by 514MB 00:09:02.611 EAL: Calling mem event callback 'spdk:(nil)' 00:09:02.869 EAL: request: mp_malloc_sync 00:09:02.869 EAL: No shared files mode enabled, IPC is disabled 00:09:02.869 EAL: Heap on socket 0 was shrunk by 514MB 00:09:03.804 EAL: Trying to obtain current memory policy. 
00:09:03.804 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:04.063 EAL: Restoring previous memory policy: 0 00:09:04.063 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.063 EAL: request: mp_malloc_sync 00:09:04.063 EAL: No shared files mode enabled, IPC is disabled 00:09:04.063 EAL: Heap on socket 0 was expanded by 1026MB 00:09:05.968 EAL: Calling mem event callback 'spdk:(nil)' 00:09:06.227 EAL: request: mp_malloc_sync 00:09:06.227 EAL: No shared files mode enabled, IPC is disabled 00:09:06.227 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:08.133 passed 00:09:08.133 00:09:08.133 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.133 suites 1 1 n/a 0 0 00:09:08.133 tests 2 2 2 0 0 00:09:08.133 asserts 6377 6377 6377 0 n/a 00:09:08.133 00:09:08.133 Elapsed time = 9.230 seconds 00:09:08.133 EAL: Calling mem event callback 'spdk:(nil)' 00:09:08.133 EAL: request: mp_malloc_sync 00:09:08.133 EAL: No shared files mode enabled, IPC is disabled 00:09:08.133 EAL: Heap on socket 0 was shrunk by 2MB 00:09:08.133 EAL: No shared files mode enabled, IPC is disabled 00:09:08.133 EAL: No shared files mode enabled, IPC is disabled 00:09:08.133 EAL: No shared files mode enabled, IPC is disabled 00:09:08.133 00:09:08.133 real 0m9.553s 00:09:08.133 user 0m8.080s 00:09:08.133 sys 0m1.345s 00:09:08.133 18:36:08 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.133 18:36:08 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:08.133 ************************************ 00:09:08.133 END TEST env_vtophys 00:09:08.133 ************************************ 00:09:08.133 18:36:08 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:08.133 18:36:08 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:08.133 18:36:08 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.133 18:36:08 env -- common/autotest_common.sh@10 -- # set +x 00:09:08.133 ************************************ 00:09:08.133 START TEST env_pci 00:09:08.133 ************************************ 00:09:08.133 18:36:08 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:08.133 00:09:08.133 00:09:08.133 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.133 http://cunit.sourceforge.net/ 00:09:08.133 00:09:08.133 00:09:08.133 Suite: pci 00:09:08.133 Test: pci_hook ...[2024-07-25 18:36:08.676242] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 111813 has claimed it 00:09:08.392 passed 00:09:08.392 00:09:08.392 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.392 suites 1 1 n/a 0 0 00:09:08.392 tests 1 1 1 0 0 00:09:08.392 asserts 25 25 25 0 n/a 00:09:08.392 00:09:08.392 Elapsed time = 0.006 seconds 00:09:08.392 EAL: Cannot find device (10000:00:01.0) 00:09:08.392 EAL: Failed to attach device on primary process 00:09:08.392 00:09:08.392 real 0m0.112s 00:09:08.392 user 0m0.052s 00:09:08.392 sys 0m0.061s 00:09:08.392 18:36:08 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.392 18:36:08 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:08.392 ************************************ 00:09:08.392 END TEST env_pci 00:09:08.392 ************************************ 00:09:08.392 18:36:08 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:08.392 18:36:08 env -- env/env.sh@15 -- # uname 00:09:08.392 18:36:08 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:08.392 18:36:08 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:08.392 18:36:08 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:08.392 18:36:08 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:08.392 18:36:08 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.392 18:36:08 env -- common/autotest_common.sh@10 -- # set +x 00:09:08.392 ************************************ 00:09:08.392 START TEST env_dpdk_post_init 00:09:08.392 ************************************ 00:09:08.392 18:36:08 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:08.392 EAL: Detected CPU lcores: 10 00:09:08.392 EAL: Detected NUMA nodes: 1 00:09:08.392 EAL: Detected static linkage of DPDK 00:09:08.392 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:08.392 EAL: Selected IOVA mode 'PA' 00:09:08.392 EAL: VFIO support initialized 00:09:08.651 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:08.651 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:08.651 Starting DPDK initialization... 00:09:08.651 Starting SPDK post initialization... 00:09:08.651 SPDK NVMe probe 00:09:08.651 Attaching to 0000:00:10.0 00:09:08.651 Attached to 0000:00:10.0 00:09:08.651 Cleaning up... 00:09:08.651 00:09:08.651 real 0m0.281s 00:09:08.651 user 0m0.087s 00:09:08.651 sys 0m0.096s 00:09:08.651 18:36:09 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.651 18:36:09 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:08.651 ************************************ 00:09:08.651 END TEST env_dpdk_post_init 00:09:08.651 ************************************ 00:09:08.651 18:36:09 env -- env/env.sh@26 -- # uname 00:09:08.651 18:36:09 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:08.651 18:36:09 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:08.651 18:36:09 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:08.651 18:36:09 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.651 18:36:09 env -- common/autotest_common.sh@10 -- # set +x 00:09:08.651 ************************************ 00:09:08.651 START TEST env_mem_callbacks 00:09:08.651 ************************************ 00:09:08.651 18:36:09 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:08.910 EAL: Detected CPU lcores: 10 00:09:08.910 EAL: Detected NUMA nodes: 1 00:09:08.910 EAL: Detected static linkage of DPDK 00:09:08.910 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:08.910 EAL: Selected IOVA mode 'PA' 00:09:08.910 EAL: VFIO support initialized 00:09:08.910 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:08.910 00:09:08.910 00:09:08.910 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.910 http://cunit.sourceforge.net/ 00:09:08.910 00:09:08.911 00:09:08.911 Suite: memory 00:09:08.911 Test: test ... 
00:09:08.911 register 0x200000200000 2097152 00:09:08.911 malloc 3145728 00:09:08.911 register 0x200000400000 4194304 00:09:08.911 buf 0x2000004fffc0 len 3145728 PASSED 00:09:08.911 malloc 64 00:09:08.911 buf 0x2000004ffec0 len 64 PASSED 00:09:08.911 malloc 4194304 00:09:08.911 register 0x200000800000 6291456 00:09:08.911 buf 0x2000009fffc0 len 4194304 PASSED 00:09:08.911 free 0x2000004fffc0 3145728 00:09:08.911 free 0x2000004ffec0 64 00:09:08.911 unregister 0x200000400000 4194304 PASSED 00:09:08.911 free 0x2000009fffc0 4194304 00:09:08.911 unregister 0x200000800000 6291456 PASSED 00:09:08.911 malloc 8388608 00:09:08.911 register 0x200000400000 10485760 00:09:08.911 buf 0x2000005fffc0 len 8388608 PASSED 00:09:08.911 free 0x2000005fffc0 8388608 00:09:08.911 unregister 0x200000400000 10485760 PASSED 00:09:08.911 passed 00:09:08.911 00:09:08.911 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.911 suites 1 1 n/a 0 0 00:09:08.911 tests 1 1 1 0 0 00:09:08.911 asserts 15 15 15 0 n/a 00:09:08.911 00:09:08.911 Elapsed time = 0.065 seconds 00:09:09.169 00:09:09.169 real 0m0.322s 00:09:09.169 user 0m0.145s 00:09:09.169 sys 0m0.079s 00:09:09.169 18:36:09 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.169 ************************************ 00:09:09.169 END TEST env_mem_callbacks 00:09:09.169 ************************************ 00:09:09.170 18:36:09 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:09.170 00:09:09.170 real 0m11.089s 00:09:09.170 user 0m8.932s 00:09:09.170 sys 0m1.842s 00:09:09.170 18:36:09 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.170 18:36:09 env -- common/autotest_common.sh@10 -- # set +x 00:09:09.170 ************************************ 00:09:09.170 END TEST env 00:09:09.170 ************************************ 00:09:09.170 18:36:09 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:09.170 18:36:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:09.170 18:36:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.170 18:36:09 -- common/autotest_common.sh@10 -- # set +x 00:09:09.170 ************************************ 00:09:09.170 START TEST rpc 00:09:09.170 ************************************ 00:09:09.170 18:36:09 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:09.170 * Looking for test storage... 00:09:09.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:09.170 18:36:09 rpc -- rpc/rpc.sh@65 -- # spdk_pid=111945 00:09:09.170 18:36:09 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:09.170 18:36:09 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:09.170 18:36:09 rpc -- rpc/rpc.sh@67 -- # waitforlisten 111945 00:09:09.170 18:36:09 rpc -- common/autotest_common.sh@831 -- # '[' -z 111945 ']' 00:09:09.170 18:36:09 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.170 18:36:09 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:09.170 18:36:09 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:09.170 18:36:09 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:09.170 18:36:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.428 [2024-07-25 18:36:09.822995] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:09.428 [2024-07-25 18:36:09.823171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111945 ] 00:09:09.428 [2024-07-25 18:36:09.984586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.687 [2024-07-25 18:36:10.205480] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:09.687 [2024-07-25 18:36:10.205561] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 111945' to capture a snapshot of events at runtime. 00:09:09.687 [2024-07-25 18:36:10.205613] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.687 [2024-07-25 18:36:10.205639] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.687 [2024-07-25 18:36:10.205657] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid111945 for offline analysis/debug. 00:09:09.687 [2024-07-25 18:36:10.205719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.623 18:36:11 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:10.623 18:36:11 rpc -- common/autotest_common.sh@864 -- # return 0 00:09:10.623 18:36:11 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:10.623 18:36:11 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:10.623 18:36:11 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:10.623 18:36:11 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:10.623 18:36:11 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:10.623 18:36:11 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.623 18:36:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.623 ************************************ 00:09:10.623 START TEST rpc_integrity 00:09:10.623 ************************************ 00:09:10.623 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:09:10.623 18:36:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:10.623 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.623 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:10.623 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.623 18:36:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:10.623 18:36:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:10.623 18:36:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:10.623 18:36:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:10.623 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.623 
18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:10.623 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.623 18:36:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:10.623 18:36:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:10.623 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.623 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:10.623 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.623 18:36:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:10.623 { 00:09:10.623 "name": "Malloc0", 00:09:10.623 "aliases": [ 00:09:10.623 "a57e0664-5fe6-4b52-9a33-955cc7dd026a" 00:09:10.623 ], 00:09:10.623 "product_name": "Malloc disk", 00:09:10.623 "block_size": 512, 00:09:10.623 "num_blocks": 16384, 00:09:10.623 "uuid": "a57e0664-5fe6-4b52-9a33-955cc7dd026a", 00:09:10.623 "assigned_rate_limits": { 00:09:10.623 "rw_ios_per_sec": 0, 00:09:10.623 "rw_mbytes_per_sec": 0, 00:09:10.623 "r_mbytes_per_sec": 0, 00:09:10.623 "w_mbytes_per_sec": 0 00:09:10.623 }, 00:09:10.623 "claimed": false, 00:09:10.623 "zoned": false, 00:09:10.623 "supported_io_types": { 00:09:10.623 "read": true, 00:09:10.623 "write": true, 00:09:10.623 "unmap": true, 00:09:10.623 "flush": true, 00:09:10.623 "reset": true, 00:09:10.623 "nvme_admin": false, 00:09:10.623 "nvme_io": false, 00:09:10.623 "nvme_io_md": false, 00:09:10.623 "write_zeroes": true, 00:09:10.623 "zcopy": true, 00:09:10.623 "get_zone_info": false, 00:09:10.623 "zone_management": false, 00:09:10.623 "zone_append": false, 00:09:10.623 "compare": false, 00:09:10.623 "compare_and_write": false, 00:09:10.623 "abort": true, 00:09:10.623 "seek_hole": false, 00:09:10.623 "seek_data": false, 00:09:10.623 "copy": true, 00:09:10.623 "nvme_iov_md": false 00:09:10.623 }, 00:09:10.623 "memory_domains": [ 00:09:10.623 { 00:09:10.623 "dma_device_id": "system", 00:09:10.623 "dma_device_type": 1 00:09:10.623 }, 00:09:10.623 { 00:09:10.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.623 "dma_device_type": 2 00:09:10.623 } 00:09:10.623 ], 00:09:10.623 "driver_specific": {} 00:09:10.623 } 00:09:10.623 ]' 00:09:10.623 18:36:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:10.882 18:36:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:10.882 18:36:11 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:10.882 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.882 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:10.882 [2024-07-25 18:36:11.246392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:10.882 [2024-07-25 18:36:11.246572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.882 [2024-07-25 18:36:11.246666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:10.882 [2024-07-25 18:36:11.246759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.882 [2024-07-25 18:36:11.249241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.882 [2024-07-25 18:36:11.249388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:10.883 Passthru0 00:09:10.883 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:10.883 18:36:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:10.883 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.883 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:10.883 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.883 18:36:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:10.883 { 00:09:10.883 "name": "Malloc0", 00:09:10.883 "aliases": [ 00:09:10.883 "a57e0664-5fe6-4b52-9a33-955cc7dd026a" 00:09:10.883 ], 00:09:10.883 "product_name": "Malloc disk", 00:09:10.883 "block_size": 512, 00:09:10.883 "num_blocks": 16384, 00:09:10.883 "uuid": "a57e0664-5fe6-4b52-9a33-955cc7dd026a", 00:09:10.883 "assigned_rate_limits": { 00:09:10.883 "rw_ios_per_sec": 0, 00:09:10.883 "rw_mbytes_per_sec": 0, 00:09:10.883 "r_mbytes_per_sec": 0, 00:09:10.883 "w_mbytes_per_sec": 0 00:09:10.883 }, 00:09:10.883 "claimed": true, 00:09:10.883 "claim_type": "exclusive_write", 00:09:10.883 "zoned": false, 00:09:10.883 "supported_io_types": { 00:09:10.883 "read": true, 00:09:10.883 "write": true, 00:09:10.883 "unmap": true, 00:09:10.883 "flush": true, 00:09:10.883 "reset": true, 00:09:10.883 "nvme_admin": false, 00:09:10.883 "nvme_io": false, 00:09:10.883 "nvme_io_md": false, 00:09:10.883 "write_zeroes": true, 00:09:10.883 "zcopy": true, 00:09:10.883 "get_zone_info": false, 00:09:10.883 "zone_management": false, 00:09:10.883 "zone_append": false, 00:09:10.883 "compare": false, 00:09:10.883 "compare_and_write": false, 00:09:10.883 "abort": true, 00:09:10.883 "seek_hole": false, 00:09:10.883 "seek_data": false, 00:09:10.883 "copy": true, 00:09:10.883 "nvme_iov_md": false 00:09:10.883 }, 00:09:10.883 "memory_domains": [ 00:09:10.883 { 00:09:10.883 "dma_device_id": "system", 00:09:10.883 "dma_device_type": 1 00:09:10.883 }, 00:09:10.883 { 00:09:10.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.883 "dma_device_type": 2 00:09:10.883 } 00:09:10.883 ], 00:09:10.883 "driver_specific": {} 00:09:10.883 }, 00:09:10.883 { 00:09:10.883 "name": "Passthru0", 00:09:10.883 "aliases": [ 00:09:10.883 "b79ac2eb-cbe4-5b6e-95ea-6028cd6a4347" 00:09:10.883 ], 00:09:10.883 "product_name": "passthru", 00:09:10.883 "block_size": 512, 00:09:10.883 "num_blocks": 16384, 00:09:10.883 "uuid": "b79ac2eb-cbe4-5b6e-95ea-6028cd6a4347", 00:09:10.883 "assigned_rate_limits": { 00:09:10.883 "rw_ios_per_sec": 0, 00:09:10.883 "rw_mbytes_per_sec": 0, 00:09:10.883 "r_mbytes_per_sec": 0, 00:09:10.883 "w_mbytes_per_sec": 0 00:09:10.883 }, 00:09:10.883 "claimed": false, 00:09:10.883 "zoned": false, 00:09:10.883 "supported_io_types": { 00:09:10.883 "read": true, 00:09:10.883 "write": true, 00:09:10.883 "unmap": true, 00:09:10.883 "flush": true, 00:09:10.883 "reset": true, 00:09:10.883 "nvme_admin": false, 00:09:10.883 "nvme_io": false, 00:09:10.883 "nvme_io_md": false, 00:09:10.883 "write_zeroes": true, 00:09:10.883 "zcopy": true, 00:09:10.883 "get_zone_info": false, 00:09:10.883 "zone_management": false, 00:09:10.883 "zone_append": false, 00:09:10.883 "compare": false, 00:09:10.883 "compare_and_write": false, 00:09:10.883 "abort": true, 00:09:10.883 "seek_hole": false, 00:09:10.883 "seek_data": false, 00:09:10.883 "copy": true, 00:09:10.883 "nvme_iov_md": false 00:09:10.883 }, 00:09:10.883 "memory_domains": [ 00:09:10.883 { 00:09:10.883 "dma_device_id": "system", 00:09:10.883 "dma_device_type": 1 00:09:10.883 }, 00:09:10.883 { 00:09:10.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.883 "dma_device_type": 
2 00:09:10.883 } 00:09:10.883 ], 00:09:10.883 "driver_specific": { 00:09:10.883 "passthru": { 00:09:10.883 "name": "Passthru0", 00:09:10.883 "base_bdev_name": "Malloc0" 00:09:10.883 } 00:09:10.883 } 00:09:10.883 } 00:09:10.883 ]' 00:09:10.883 18:36:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:10.883 18:36:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:10.883 18:36:11 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:10.883 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.883 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:10.883 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.883 18:36:11 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:10.883 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.883 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:10.883 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.883 18:36:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:10.883 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.883 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:10.883 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.883 18:36:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:10.883 18:36:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:10.883 18:36:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:10.883 00:09:10.883 real 0m0.329s 00:09:10.883 user 0m0.194s 00:09:10.883 sys 0m0.039s 00:09:10.883 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.883 18:36:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:10.883 ************************************ 00:09:10.883 END TEST rpc_integrity 00:09:10.883 ************************************ 00:09:11.143 18:36:11 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:11.143 18:36:11 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:11.143 18:36:11 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.143 18:36:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.143 ************************************ 00:09:11.143 START TEST rpc_plugins 00:09:11.143 ************************************ 00:09:11.143 18:36:11 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:09:11.143 18:36:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:11.143 18:36:11 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.143 18:36:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:11.143 18:36:11 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.143 18:36:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:11.143 18:36:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:11.143 18:36:11 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.143 18:36:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:11.143 18:36:11 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.143 18:36:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:11.143 { 00:09:11.143 "name": "Malloc1", 00:09:11.143 
"aliases": [ 00:09:11.143 "872aa48c-067e-4f6b-9eb7-79e95f8d0a89" 00:09:11.143 ], 00:09:11.143 "product_name": "Malloc disk", 00:09:11.143 "block_size": 4096, 00:09:11.143 "num_blocks": 256, 00:09:11.143 "uuid": "872aa48c-067e-4f6b-9eb7-79e95f8d0a89", 00:09:11.143 "assigned_rate_limits": { 00:09:11.143 "rw_ios_per_sec": 0, 00:09:11.143 "rw_mbytes_per_sec": 0, 00:09:11.143 "r_mbytes_per_sec": 0, 00:09:11.143 "w_mbytes_per_sec": 0 00:09:11.143 }, 00:09:11.143 "claimed": false, 00:09:11.143 "zoned": false, 00:09:11.143 "supported_io_types": { 00:09:11.143 "read": true, 00:09:11.143 "write": true, 00:09:11.143 "unmap": true, 00:09:11.143 "flush": true, 00:09:11.143 "reset": true, 00:09:11.143 "nvme_admin": false, 00:09:11.143 "nvme_io": false, 00:09:11.143 "nvme_io_md": false, 00:09:11.143 "write_zeroes": true, 00:09:11.143 "zcopy": true, 00:09:11.143 "get_zone_info": false, 00:09:11.143 "zone_management": false, 00:09:11.143 "zone_append": false, 00:09:11.143 "compare": false, 00:09:11.143 "compare_and_write": false, 00:09:11.143 "abort": true, 00:09:11.143 "seek_hole": false, 00:09:11.143 "seek_data": false, 00:09:11.143 "copy": true, 00:09:11.143 "nvme_iov_md": false 00:09:11.143 }, 00:09:11.143 "memory_domains": [ 00:09:11.143 { 00:09:11.143 "dma_device_id": "system", 00:09:11.143 "dma_device_type": 1 00:09:11.143 }, 00:09:11.143 { 00:09:11.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.143 "dma_device_type": 2 00:09:11.143 } 00:09:11.143 ], 00:09:11.143 "driver_specific": {} 00:09:11.143 } 00:09:11.143 ]' 00:09:11.143 18:36:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:11.143 18:36:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:11.143 18:36:11 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:11.143 18:36:11 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.143 18:36:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:11.143 18:36:11 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.143 18:36:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:11.143 18:36:11 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.143 18:36:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:11.143 18:36:11 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.143 18:36:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:11.143 18:36:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:11.143 18:36:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:11.143 00:09:11.143 real 0m0.149s 00:09:11.143 user 0m0.098s 00:09:11.143 sys 0m0.013s 00:09:11.143 18:36:11 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.143 18:36:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:11.143 ************************************ 00:09:11.143 END TEST rpc_plugins 00:09:11.143 ************************************ 00:09:11.143 18:36:11 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:11.143 18:36:11 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:11.143 18:36:11 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.143 18:36:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.143 ************************************ 00:09:11.143 START TEST rpc_trace_cmd_test 00:09:11.143 ************************************ 00:09:11.143 18:36:11 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:09:11.143 18:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:11.143 18:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:11.143 18:36:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.143 18:36:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.402 18:36:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.402 18:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:11.402 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid111945", 00:09:11.402 "tpoint_group_mask": "0x8", 00:09:11.403 "iscsi_conn": { 00:09:11.403 "mask": "0x2", 00:09:11.403 "tpoint_mask": "0x0" 00:09:11.403 }, 00:09:11.403 "scsi": { 00:09:11.403 "mask": "0x4", 00:09:11.403 "tpoint_mask": "0x0" 00:09:11.403 }, 00:09:11.403 "bdev": { 00:09:11.403 "mask": "0x8", 00:09:11.403 "tpoint_mask": "0xffffffffffffffff" 00:09:11.403 }, 00:09:11.403 "nvmf_rdma": { 00:09:11.403 "mask": "0x10", 00:09:11.403 "tpoint_mask": "0x0" 00:09:11.403 }, 00:09:11.403 "nvmf_tcp": { 00:09:11.403 "mask": "0x20", 00:09:11.403 "tpoint_mask": "0x0" 00:09:11.403 }, 00:09:11.403 "ftl": { 00:09:11.403 "mask": "0x40", 00:09:11.403 "tpoint_mask": "0x0" 00:09:11.403 }, 00:09:11.403 "blobfs": { 00:09:11.403 "mask": "0x80", 00:09:11.403 "tpoint_mask": "0x0" 00:09:11.403 }, 00:09:11.403 "dsa": { 00:09:11.403 "mask": "0x200", 00:09:11.403 "tpoint_mask": "0x0" 00:09:11.403 }, 00:09:11.403 "thread": { 00:09:11.403 "mask": "0x400", 00:09:11.403 "tpoint_mask": "0x0" 00:09:11.403 }, 00:09:11.403 "nvme_pcie": { 00:09:11.403 "mask": "0x800", 00:09:11.403 "tpoint_mask": "0x0" 00:09:11.403 }, 00:09:11.403 "iaa": { 00:09:11.403 "mask": "0x1000", 00:09:11.403 "tpoint_mask": "0x0" 00:09:11.403 }, 00:09:11.403 "nvme_tcp": { 00:09:11.403 "mask": "0x2000", 00:09:11.403 "tpoint_mask": "0x0" 00:09:11.403 }, 00:09:11.403 "bdev_nvme": { 00:09:11.403 "mask": "0x4000", 00:09:11.403 "tpoint_mask": "0x0" 00:09:11.403 }, 00:09:11.403 "sock": { 00:09:11.403 "mask": "0x8000", 00:09:11.403 "tpoint_mask": "0x0" 00:09:11.403 } 00:09:11.403 }' 00:09:11.403 18:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:11.403 18:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:09:11.403 18:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:11.403 18:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:11.403 18:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:11.403 18:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:11.403 18:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:11.403 18:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:11.403 18:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:11.403 18:36:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:11.403 00:09:11.403 real 0m0.248s 00:09:11.403 user 0m0.211s 00:09:11.403 sys 0m0.033s 00:09:11.403 18:36:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.403 18:36:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.403 ************************************ 00:09:11.403 END TEST rpc_trace_cmd_test 00:09:11.403 ************************************ 00:09:11.662 18:36:12 rpc -- rpc/rpc.sh@76 -- # [[ 0 
-eq 1 ]] 00:09:11.662 18:36:12 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:11.662 18:36:12 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:11.662 18:36:12 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:11.662 18:36:12 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.662 18:36:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.662 ************************************ 00:09:11.662 START TEST rpc_daemon_integrity 00:09:11.662 ************************************ 00:09:11.662 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:09:11.662 18:36:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:11.662 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.662 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:11.662 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.662 18:36:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:11.662 18:36:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:11.662 18:36:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:11.662 18:36:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:11.662 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.662 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:11.662 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.662 18:36:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:11.662 18:36:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:11.662 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.662 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:11.662 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.662 18:36:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:11.662 { 00:09:11.662 "name": "Malloc2", 00:09:11.662 "aliases": [ 00:09:11.663 "acbc6393-fc74-4c99-a262-b3ce7a7a7a2f" 00:09:11.663 ], 00:09:11.663 "product_name": "Malloc disk", 00:09:11.663 "block_size": 512, 00:09:11.663 "num_blocks": 16384, 00:09:11.663 "uuid": "acbc6393-fc74-4c99-a262-b3ce7a7a7a2f", 00:09:11.663 "assigned_rate_limits": { 00:09:11.663 "rw_ios_per_sec": 0, 00:09:11.663 "rw_mbytes_per_sec": 0, 00:09:11.663 "r_mbytes_per_sec": 0, 00:09:11.663 "w_mbytes_per_sec": 0 00:09:11.663 }, 00:09:11.663 "claimed": false, 00:09:11.663 "zoned": false, 00:09:11.663 "supported_io_types": { 00:09:11.663 "read": true, 00:09:11.663 "write": true, 00:09:11.663 "unmap": true, 00:09:11.663 "flush": true, 00:09:11.663 "reset": true, 00:09:11.663 "nvme_admin": false, 00:09:11.663 "nvme_io": false, 00:09:11.663 "nvme_io_md": false, 00:09:11.663 "write_zeroes": true, 00:09:11.663 "zcopy": true, 00:09:11.663 "get_zone_info": false, 00:09:11.663 "zone_management": false, 00:09:11.663 "zone_append": false, 00:09:11.663 "compare": false, 00:09:11.663 "compare_and_write": false, 00:09:11.663 "abort": true, 00:09:11.663 "seek_hole": false, 00:09:11.663 "seek_data": false, 00:09:11.663 "copy": true, 00:09:11.663 "nvme_iov_md": false 00:09:11.663 }, 00:09:11.663 "memory_domains": [ 00:09:11.663 { 00:09:11.663 "dma_device_id": "system", 
00:09:11.663 "dma_device_type": 1 00:09:11.663 }, 00:09:11.663 { 00:09:11.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.663 "dma_device_type": 2 00:09:11.663 } 00:09:11.663 ], 00:09:11.663 "driver_specific": {} 00:09:11.663 } 00:09:11.663 ]' 00:09:11.663 18:36:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:11.663 18:36:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:11.663 18:36:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:11.663 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.663 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:11.663 [2024-07-25 18:36:12.160715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:11.663 [2024-07-25 18:36:12.160888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.663 [2024-07-25 18:36:12.161008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:11.663 [2024-07-25 18:36:12.161091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.663 [2024-07-25 18:36:12.163687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.663 [2024-07-25 18:36:12.163840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:11.663 Passthru0 00:09:11.663 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.663 18:36:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:11.663 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.663 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:11.663 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.663 18:36:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:11.663 { 00:09:11.663 "name": "Malloc2", 00:09:11.663 "aliases": [ 00:09:11.663 "acbc6393-fc74-4c99-a262-b3ce7a7a7a2f" 00:09:11.663 ], 00:09:11.663 "product_name": "Malloc disk", 00:09:11.663 "block_size": 512, 00:09:11.663 "num_blocks": 16384, 00:09:11.663 "uuid": "acbc6393-fc74-4c99-a262-b3ce7a7a7a2f", 00:09:11.663 "assigned_rate_limits": { 00:09:11.663 "rw_ios_per_sec": 0, 00:09:11.663 "rw_mbytes_per_sec": 0, 00:09:11.663 "r_mbytes_per_sec": 0, 00:09:11.663 "w_mbytes_per_sec": 0 00:09:11.663 }, 00:09:11.663 "claimed": true, 00:09:11.663 "claim_type": "exclusive_write", 00:09:11.663 "zoned": false, 00:09:11.663 "supported_io_types": { 00:09:11.663 "read": true, 00:09:11.663 "write": true, 00:09:11.663 "unmap": true, 00:09:11.663 "flush": true, 00:09:11.663 "reset": true, 00:09:11.663 "nvme_admin": false, 00:09:11.663 "nvme_io": false, 00:09:11.663 "nvme_io_md": false, 00:09:11.663 "write_zeroes": true, 00:09:11.663 "zcopy": true, 00:09:11.663 "get_zone_info": false, 00:09:11.663 "zone_management": false, 00:09:11.663 "zone_append": false, 00:09:11.663 "compare": false, 00:09:11.663 "compare_and_write": false, 00:09:11.663 "abort": true, 00:09:11.663 "seek_hole": false, 00:09:11.663 "seek_data": false, 00:09:11.663 "copy": true, 00:09:11.663 "nvme_iov_md": false 00:09:11.663 }, 00:09:11.663 "memory_domains": [ 00:09:11.663 { 00:09:11.663 "dma_device_id": "system", 00:09:11.663 "dma_device_type": 1 00:09:11.663 }, 00:09:11.663 { 00:09:11.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:11.663 "dma_device_type": 2 00:09:11.663 } 00:09:11.663 ], 00:09:11.663 "driver_specific": {} 00:09:11.663 }, 00:09:11.663 { 00:09:11.663 "name": "Passthru0", 00:09:11.663 "aliases": [ 00:09:11.663 "f71c2261-f4a9-59bc-bdaf-87072017596b" 00:09:11.663 ], 00:09:11.663 "product_name": "passthru", 00:09:11.663 "block_size": 512, 00:09:11.663 "num_blocks": 16384, 00:09:11.663 "uuid": "f71c2261-f4a9-59bc-bdaf-87072017596b", 00:09:11.663 "assigned_rate_limits": { 00:09:11.663 "rw_ios_per_sec": 0, 00:09:11.663 "rw_mbytes_per_sec": 0, 00:09:11.663 "r_mbytes_per_sec": 0, 00:09:11.663 "w_mbytes_per_sec": 0 00:09:11.663 }, 00:09:11.663 "claimed": false, 00:09:11.663 "zoned": false, 00:09:11.663 "supported_io_types": { 00:09:11.663 "read": true, 00:09:11.663 "write": true, 00:09:11.663 "unmap": true, 00:09:11.663 "flush": true, 00:09:11.663 "reset": true, 00:09:11.663 "nvme_admin": false, 00:09:11.663 "nvme_io": false, 00:09:11.663 "nvme_io_md": false, 00:09:11.663 "write_zeroes": true, 00:09:11.663 "zcopy": true, 00:09:11.663 "get_zone_info": false, 00:09:11.663 "zone_management": false, 00:09:11.663 "zone_append": false, 00:09:11.663 "compare": false, 00:09:11.663 "compare_and_write": false, 00:09:11.663 "abort": true, 00:09:11.663 "seek_hole": false, 00:09:11.663 "seek_data": false, 00:09:11.663 "copy": true, 00:09:11.663 "nvme_iov_md": false 00:09:11.663 }, 00:09:11.663 "memory_domains": [ 00:09:11.663 { 00:09:11.663 "dma_device_id": "system", 00:09:11.663 "dma_device_type": 1 00:09:11.663 }, 00:09:11.663 { 00:09:11.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.663 "dma_device_type": 2 00:09:11.663 } 00:09:11.663 ], 00:09:11.663 "driver_specific": { 00:09:11.663 "passthru": { 00:09:11.663 "name": "Passthru0", 00:09:11.663 "base_bdev_name": "Malloc2" 00:09:11.663 } 00:09:11.663 } 00:09:11.663 } 00:09:11.663 ]' 00:09:11.663 18:36:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:11.922 18:36:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:11.922 18:36:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:11.922 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.922 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:11.922 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.922 18:36:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:11.922 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.922 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:11.922 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.922 18:36:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:11.922 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.922 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:11.922 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.922 18:36:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:11.922 18:36:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:11.922 18:36:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:11.922 00:09:11.922 real 0m0.331s 00:09:11.922 user 0m0.199s 00:09:11.922 sys 0m0.038s 00:09:11.922 18:36:12 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.922 18:36:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:11.922 ************************************ 00:09:11.922 END TEST rpc_daemon_integrity 00:09:11.922 ************************************ 00:09:11.922 18:36:12 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:11.922 18:36:12 rpc -- rpc/rpc.sh@84 -- # killprocess 111945 00:09:11.922 18:36:12 rpc -- common/autotest_common.sh@950 -- # '[' -z 111945 ']' 00:09:11.922 18:36:12 rpc -- common/autotest_common.sh@954 -- # kill -0 111945 00:09:11.922 18:36:12 rpc -- common/autotest_common.sh@955 -- # uname 00:09:11.922 18:36:12 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:11.922 18:36:12 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 111945 00:09:11.922 18:36:12 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:11.922 18:36:12 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:11.922 18:36:12 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 111945' 00:09:11.922 killing process with pid 111945 00:09:11.922 18:36:12 rpc -- common/autotest_common.sh@969 -- # kill 111945 00:09:11.922 18:36:12 rpc -- common/autotest_common.sh@974 -- # wait 111945 00:09:15.211 00:09:15.211 real 0m5.546s 00:09:15.211 user 0m6.057s 00:09:15.211 sys 0m1.038s 00:09:15.211 18:36:15 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:15.211 18:36:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.211 ************************************ 00:09:15.211 END TEST rpc 00:09:15.211 ************************************ 00:09:15.211 18:36:15 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:15.211 18:36:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:15.211 18:36:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.211 18:36:15 -- common/autotest_common.sh@10 -- # set +x 00:09:15.211 ************************************ 00:09:15.211 START TEST skip_rpc 00:09:15.211 ************************************ 00:09:15.211 18:36:15 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:15.211 * Looking for test storage... 
00:09:15.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:15.211 18:36:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:15.211 18:36:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:15.211 18:36:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:15.211 18:36:15 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:15.211 18:36:15 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.211 18:36:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.211 ************************************ 00:09:15.211 START TEST skip_rpc 00:09:15.211 ************************************ 00:09:15.211 18:36:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:09:15.211 18:36:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=112199 00:09:15.211 18:36:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:15.211 18:36:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:15.211 18:36:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:15.211 [2024-07-25 18:36:15.450757] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:15.212 [2024-07-25 18:36:15.451469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112199 ] 00:09:15.212 [2024-07-25 18:36:15.632564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.470 [2024-07-25 18:36:15.852829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 112199 
00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 112199 ']' 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 112199 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 112199 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 112199' 00:09:20.744 killing process with pid 112199 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 112199 00:09:20.744 18:36:20 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 112199 00:09:22.648 00:09:22.648 real 0m7.765s 00:09:22.648 user 0m7.126s 00:09:22.648 sys 0m0.559s 00:09:22.648 18:36:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.648 ************************************ 00:09:22.648 18:36:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.648 END TEST skip_rpc 00:09:22.648 ************************************ 00:09:22.648 18:36:23 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:22.648 18:36:23 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:22.648 18:36:23 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.648 18:36:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.648 ************************************ 00:09:22.648 START TEST skip_rpc_with_json 00:09:22.648 ************************************ 00:09:22.648 18:36:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:09:22.648 18:36:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:22.648 18:36:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=112323 00:09:22.648 18:36:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:22.648 18:36:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 112323 00:09:22.648 18:36:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 112323 ']' 00:09:22.648 18:36:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.648 18:36:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:22.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.648 18:36:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.648 18:36:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:22.648 18:36:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:22.648 18:36:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:22.907 [2024-07-25 18:36:23.277133] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:22.907 [2024-07-25 18:36:23.277551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112323 ] 00:09:22.907 [2024-07-25 18:36:23.436999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.165 [2024-07-25 18:36:23.669477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.100 18:36:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:24.100 18:36:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:09:24.100 18:36:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:24.100 18:36:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.100 18:36:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:24.100 [2024-07-25 18:36:24.564054] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:24.100 request: 00:09:24.100 { 00:09:24.100 "trtype": "tcp", 00:09:24.100 "method": "nvmf_get_transports", 00:09:24.100 "req_id": 1 00:09:24.100 } 00:09:24.100 Got JSON-RPC error response 00:09:24.100 response: 00:09:24.100 { 00:09:24.100 "code": -19, 00:09:24.100 "message": "No such device" 00:09:24.100 } 00:09:24.100 18:36:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:24.100 18:36:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:24.100 18:36:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.100 18:36:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:24.100 [2024-07-25 18:36:24.576171] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.100 18:36:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.100 18:36:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:24.100 18:36:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.100 18:36:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:24.359 18:36:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.359 18:36:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:24.359 { 00:09:24.359 "subsystems": [ 00:09:24.359 { 00:09:24.359 "subsystem": "scheduler", 00:09:24.359 "config": [ 00:09:24.359 { 00:09:24.359 "method": "framework_set_scheduler", 00:09:24.359 "params": { 00:09:24.359 "name": "static" 00:09:24.359 } 00:09:24.359 } 00:09:24.359 ] 00:09:24.359 }, 00:09:24.359 { 00:09:24.359 "subsystem": "vmd", 00:09:24.359 "config": [] 00:09:24.359 }, 00:09:24.359 { 00:09:24.359 "subsystem": "sock", 00:09:24.359 "config": [ 00:09:24.359 { 00:09:24.359 "method": "sock_set_default_impl", 00:09:24.359 "params": { 00:09:24.359 "impl_name": "posix" 00:09:24.359 } 00:09:24.359 }, 00:09:24.359 { 00:09:24.359 "method": "sock_impl_set_options", 00:09:24.359 "params": { 00:09:24.359 "impl_name": "ssl", 00:09:24.359 "recv_buf_size": 4096, 00:09:24.359 "send_buf_size": 4096, 00:09:24.359 "enable_recv_pipe": true, 00:09:24.359 "enable_quickack": false, 00:09:24.359 "enable_placement_id": 0, 
00:09:24.359 "enable_zerocopy_send_server": true, 00:09:24.359 "enable_zerocopy_send_client": false, 00:09:24.359 "zerocopy_threshold": 0, 00:09:24.359 "tls_version": 0, 00:09:24.359 "enable_ktls": false 00:09:24.359 } 00:09:24.359 }, 00:09:24.359 { 00:09:24.359 "method": "sock_impl_set_options", 00:09:24.359 "params": { 00:09:24.359 "impl_name": "posix", 00:09:24.359 "recv_buf_size": 2097152, 00:09:24.359 "send_buf_size": 2097152, 00:09:24.359 "enable_recv_pipe": true, 00:09:24.359 "enable_quickack": false, 00:09:24.359 "enable_placement_id": 0, 00:09:24.359 "enable_zerocopy_send_server": true, 00:09:24.359 "enable_zerocopy_send_client": false, 00:09:24.359 "zerocopy_threshold": 0, 00:09:24.359 "tls_version": 0, 00:09:24.359 "enable_ktls": false 00:09:24.359 } 00:09:24.359 } 00:09:24.359 ] 00:09:24.359 }, 00:09:24.359 { 00:09:24.359 "subsystem": "iobuf", 00:09:24.359 "config": [ 00:09:24.359 { 00:09:24.360 "method": "iobuf_set_options", 00:09:24.360 "params": { 00:09:24.360 "small_pool_count": 8192, 00:09:24.360 "large_pool_count": 1024, 00:09:24.360 "small_bufsize": 8192, 00:09:24.360 "large_bufsize": 135168 00:09:24.360 } 00:09:24.360 } 00:09:24.360 ] 00:09:24.360 }, 00:09:24.360 { 00:09:24.360 "subsystem": "keyring", 00:09:24.360 "config": [] 00:09:24.360 }, 00:09:24.360 { 00:09:24.360 "subsystem": "accel", 00:09:24.360 "config": [ 00:09:24.360 { 00:09:24.360 "method": "accel_set_options", 00:09:24.360 "params": { 00:09:24.360 "small_cache_size": 128, 00:09:24.360 "large_cache_size": 16, 00:09:24.360 "task_count": 2048, 00:09:24.360 "sequence_count": 2048, 00:09:24.360 "buf_count": 2048 00:09:24.360 } 00:09:24.360 } 00:09:24.360 ] 00:09:24.360 }, 00:09:24.360 { 00:09:24.360 "subsystem": "bdev", 00:09:24.360 "config": [ 00:09:24.360 { 00:09:24.360 "method": "bdev_set_options", 00:09:24.360 "params": { 00:09:24.360 "bdev_io_pool_size": 65535, 00:09:24.360 "bdev_io_cache_size": 256, 00:09:24.360 "bdev_auto_examine": true, 00:09:24.360 "iobuf_small_cache_size": 128, 00:09:24.360 "iobuf_large_cache_size": 16 00:09:24.360 } 00:09:24.360 }, 00:09:24.360 { 00:09:24.360 "method": "bdev_raid_set_options", 00:09:24.360 "params": { 00:09:24.360 "process_window_size_kb": 1024, 00:09:24.360 "process_max_bandwidth_mb_sec": 0 00:09:24.360 } 00:09:24.360 }, 00:09:24.360 { 00:09:24.360 "method": "bdev_nvme_set_options", 00:09:24.360 "params": { 00:09:24.360 "action_on_timeout": "none", 00:09:24.360 "timeout_us": 0, 00:09:24.360 "timeout_admin_us": 0, 00:09:24.360 "keep_alive_timeout_ms": 10000, 00:09:24.360 "arbitration_burst": 0, 00:09:24.360 "low_priority_weight": 0, 00:09:24.360 "medium_priority_weight": 0, 00:09:24.360 "high_priority_weight": 0, 00:09:24.360 "nvme_adminq_poll_period_us": 10000, 00:09:24.360 "nvme_ioq_poll_period_us": 0, 00:09:24.360 "io_queue_requests": 0, 00:09:24.360 "delay_cmd_submit": true, 00:09:24.360 "transport_retry_count": 4, 00:09:24.360 "bdev_retry_count": 3, 00:09:24.360 "transport_ack_timeout": 0, 00:09:24.360 "ctrlr_loss_timeout_sec": 0, 00:09:24.360 "reconnect_delay_sec": 0, 00:09:24.360 "fast_io_fail_timeout_sec": 0, 00:09:24.360 "disable_auto_failback": false, 00:09:24.360 "generate_uuids": false, 00:09:24.360 "transport_tos": 0, 00:09:24.360 "nvme_error_stat": false, 00:09:24.360 "rdma_srq_size": 0, 00:09:24.360 "io_path_stat": false, 00:09:24.360 "allow_accel_sequence": false, 00:09:24.360 "rdma_max_cq_size": 0, 00:09:24.360 "rdma_cm_event_timeout_ms": 0, 00:09:24.360 "dhchap_digests": [ 00:09:24.360 "sha256", 00:09:24.360 "sha384", 00:09:24.360 "sha512" 
00:09:24.360 ], 00:09:24.360 "dhchap_dhgroups": [ 00:09:24.360 "null", 00:09:24.360 "ffdhe2048", 00:09:24.360 "ffdhe3072", 00:09:24.360 "ffdhe4096", 00:09:24.360 "ffdhe6144", 00:09:24.360 "ffdhe8192" 00:09:24.360 ] 00:09:24.360 } 00:09:24.360 }, 00:09:24.360 { 00:09:24.360 "method": "bdev_nvme_set_hotplug", 00:09:24.360 "params": { 00:09:24.360 "period_us": 100000, 00:09:24.360 "enable": false 00:09:24.360 } 00:09:24.360 }, 00:09:24.360 { 00:09:24.360 "method": "bdev_iscsi_set_options", 00:09:24.360 "params": { 00:09:24.360 "timeout_sec": 30 00:09:24.360 } 00:09:24.360 }, 00:09:24.360 { 00:09:24.360 "method": "bdev_wait_for_examine" 00:09:24.360 } 00:09:24.360 ] 00:09:24.360 }, 00:09:24.360 { 00:09:24.360 "subsystem": "nvmf", 00:09:24.360 "config": [ 00:09:24.360 { 00:09:24.360 "method": "nvmf_set_config", 00:09:24.360 "params": { 00:09:24.360 "discovery_filter": "match_any", 00:09:24.360 "admin_cmd_passthru": { 00:09:24.360 "identify_ctrlr": false 00:09:24.360 } 00:09:24.360 } 00:09:24.360 }, 00:09:24.360 { 00:09:24.360 "method": "nvmf_set_max_subsystems", 00:09:24.360 "params": { 00:09:24.360 "max_subsystems": 1024 00:09:24.360 } 00:09:24.360 }, 00:09:24.360 { 00:09:24.360 "method": "nvmf_set_crdt", 00:09:24.360 "params": { 00:09:24.360 "crdt1": 0, 00:09:24.360 "crdt2": 0, 00:09:24.360 "crdt3": 0 00:09:24.360 } 00:09:24.360 }, 00:09:24.360 { 00:09:24.360 "method": "nvmf_create_transport", 00:09:24.360 "params": { 00:09:24.360 "trtype": "TCP", 00:09:24.360 "max_queue_depth": 128, 00:09:24.360 "max_io_qpairs_per_ctrlr": 127, 00:09:24.360 "in_capsule_data_size": 4096, 00:09:24.360 "max_io_size": 131072, 00:09:24.360 "io_unit_size": 131072, 00:09:24.360 "max_aq_depth": 128, 00:09:24.360 "num_shared_buffers": 511, 00:09:24.360 "buf_cache_size": 4294967295, 00:09:24.360 "dif_insert_or_strip": false, 00:09:24.360 "zcopy": false, 00:09:24.360 "c2h_success": true, 00:09:24.360 "sock_priority": 0, 00:09:24.360 "abort_timeout_sec": 1, 00:09:24.360 "ack_timeout": 0, 00:09:24.360 "data_wr_pool_size": 0 00:09:24.360 } 00:09:24.360 } 00:09:24.360 ] 00:09:24.360 }, 00:09:24.360 { 00:09:24.360 "subsystem": "nbd", 00:09:24.360 "config": [] 00:09:24.360 }, 00:09:24.360 { 00:09:24.360 "subsystem": "vhost_blk", 00:09:24.360 "config": [] 00:09:24.360 }, 00:09:24.360 { 00:09:24.360 "subsystem": "scsi", 00:09:24.360 "config": null 00:09:24.360 }, 00:09:24.360 { 00:09:24.360 "subsystem": "iscsi", 00:09:24.360 "config": [ 00:09:24.360 { 00:09:24.360 "method": "iscsi_set_options", 00:09:24.360 "params": { 00:09:24.360 "node_base": "iqn.2016-06.io.spdk", 00:09:24.360 "max_sessions": 128, 00:09:24.360 "max_connections_per_session": 2, 00:09:24.360 "max_queue_depth": 64, 00:09:24.360 "default_time2wait": 2, 00:09:24.360 "default_time2retain": 20, 00:09:24.360 "first_burst_length": 8192, 00:09:24.360 "immediate_data": true, 00:09:24.360 "allow_duplicated_isid": false, 00:09:24.360 "error_recovery_level": 0, 00:09:24.360 "nop_timeout": 60, 00:09:24.360 "nop_in_interval": 30, 00:09:24.360 "disable_chap": false, 00:09:24.360 "require_chap": false, 00:09:24.360 "mutual_chap": false, 00:09:24.360 "chap_group": 0, 00:09:24.360 "max_large_datain_per_connection": 64, 00:09:24.360 "max_r2t_per_connection": 4, 00:09:24.360 "pdu_pool_size": 36864, 00:09:24.360 "immediate_data_pool_size": 16384, 00:09:24.360 "data_out_pool_size": 2048 00:09:24.360 } 00:09:24.360 } 00:09:24.360 ] 00:09:24.360 }, 00:09:24.360 { 00:09:24.360 "subsystem": "vhost_scsi", 00:09:24.360 "config": [] 00:09:24.360 } 00:09:24.360 ] 00:09:24.360 } 
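[editorial note] The JSON block ending here is the save_config snapshot that skip_rpc_with_json replays on its second start. As a rough sketch of the same round trip done by hand, using only commands and paths already visible in this trace (rpc_cmd in the log is the autotest wrapper around scripts/rpc.py; the exact paths belong to this workspace and would differ elsewhere):

    # snapshot the running target's subsystem configuration over /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
    # restart the target without an RPC server and replay that JSON at startup
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json

The test then greps the second target's log for 'TCP Transport Init' (see skip_rpc.sh@51 below) to confirm the nvmf_create_transport call recorded in the JSON was actually re-executed.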
00:09:24.360 18:36:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:24.360 18:36:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 112323 00:09:24.360 18:36:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 112323 ']' 00:09:24.360 18:36:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 112323 00:09:24.360 18:36:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:09:24.360 18:36:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:24.360 18:36:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 112323 00:09:24.360 18:36:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:24.360 18:36:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:24.360 18:36:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 112323' 00:09:24.360 killing process with pid 112323 00:09:24.360 18:36:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 112323 00:09:24.360 18:36:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 112323 00:09:27.646 18:36:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=112389 00:09:27.646 18:36:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:27.646 18:36:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:32.971 18:36:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 112389 00:09:32.971 18:36:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 112389 ']' 00:09:32.971 18:36:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 112389 00:09:32.971 18:36:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:09:32.971 18:36:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.971 18:36:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 112389 00:09:32.971 18:36:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:32.971 18:36:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:32.971 18:36:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 112389' 00:09:32.971 killing process with pid 112389 00:09:32.971 18:36:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 112389 00:09:32.971 18:36:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 112389 00:09:34.873 18:36:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:34.873 18:36:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:34.873 00:09:34.873 real 0m12.043s 00:09:34.873 user 0m11.237s 00:09:34.873 sys 0m1.221s 00:09:34.873 18:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.873 18:36:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:34.873 
************************************ 00:09:34.873 END TEST skip_rpc_with_json 00:09:34.873 ************************************ 00:09:34.873 18:36:35 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:34.873 18:36:35 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:34.873 18:36:35 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.873 18:36:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.873 ************************************ 00:09:34.873 START TEST skip_rpc_with_delay 00:09:34.873 ************************************ 00:09:34.873 18:36:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:09:34.873 18:36:35 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:34.873 18:36:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:09:34.874 18:36:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:34.874 18:36:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:34.874 18:36:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:34.874 18:36:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:34.874 18:36:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:34.874 18:36:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:34.874 18:36:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:34.874 18:36:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:34.874 18:36:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:34.874 18:36:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:34.874 [2024-07-25 18:36:35.415582] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
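[editorial note] The app.c error at the end of the trace above is the expected outcome of skip_rpc_with_delay, not a failure of the suite: --wait-for-rpc only makes sense when an RPC server will be started, so combining it with --no-rpc-server must abort startup. A minimal manual reproduction, assuming the same build tree as this run, is simply:

    # expected to exit non-zero and print "Cannot use '--wait-for-rpc' if no RPC server is going to be started."
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc

The NOT/es=1 bookkeeping in the surrounding xtrace is the harness verifying that this command really returned a non-zero exit status.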
00:09:34.874 [2024-07-25 18:36:35.415987] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:09:35.132 18:36:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:09:35.132 18:36:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:35.132 18:36:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:35.132 18:36:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:35.132 00:09:35.133 real 0m0.177s 00:09:35.133 user 0m0.076s 00:09:35.133 sys 0m0.100s 00:09:35.133 18:36:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:35.133 18:36:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:35.133 ************************************ 00:09:35.133 END TEST skip_rpc_with_delay 00:09:35.133 ************************************ 00:09:35.133 18:36:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:35.133 18:36:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:35.133 18:36:35 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:35.133 18:36:35 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:35.133 18:36:35 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:35.133 18:36:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.133 ************************************ 00:09:35.133 START TEST exit_on_failed_rpc_init 00:09:35.133 ************************************ 00:09:35.133 18:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:09:35.133 18:36:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=112543 00:09:35.133 18:36:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 112543 00:09:35.133 18:36:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:35.133 18:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 112543 ']' 00:09:35.133 18:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.133 18:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.133 18:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.133 18:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.133 18:36:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:35.133 [2024-07-25 18:36:35.659114] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:35.133 [2024-07-25 18:36:35.659579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112543 ] 00:09:35.390 [2024-07-25 18:36:35.846096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.648 [2024-07-25 18:36:36.096705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.582 18:36:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:36.582 18:36:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:09:36.582 18:36:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:36.582 18:36:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:36.582 18:36:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:09:36.582 18:36:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:36.582 18:36:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:36.582 18:36:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.582 18:36:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:36.582 18:36:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.582 18:36:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:36.582 18:36:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.582 18:36:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:36.582 18:36:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:36.582 18:36:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:36.582 [2024-07-25 18:36:37.084422] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:36.582 [2024-07-25 18:36:37.084651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112567 ] 00:09:36.841 [2024-07-25 18:36:37.266906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.099 [2024-07-25 18:36:37.483222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.099 [2024-07-25 18:36:37.483337] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
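[editorial note] The rpc.c errors just above are the point of exit_on_failed_rpc_init: the first spdk_tgt (pid 112543) already owns /var/tmp/spdk.sock, so the second target (file-prefix spdk_pid112567) fails RPC initialization and spdk_app_stop's with a non-zero code. A hedged sketch of the same scenario; the -r/--rpc-socket option in the last line is a standard SPDK application flag for picking a different RPC socket, but it is not exercised anywhere in this log:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &                          # first target, owns /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2                            # fails: RPC socket already in use
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock     # would avoid the clash (assumed flag)

The test only drives the failing second start and then kills pid 112543; the kill -0 check in the killprocess trace below confirms the first instance survived the second one's failed init.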
00:09:37.099 [2024-07-25 18:36:37.483398] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:37.099 [2024-07-25 18:36:37.483441] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:37.667 18:36:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:09:37.667 18:36:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:37.667 18:36:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:09:37.667 18:36:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:09:37.667 18:36:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:09:37.667 18:36:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:37.667 18:36:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:37.667 18:36:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 112543 00:09:37.667 18:36:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 112543 ']' 00:09:37.667 18:36:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 112543 00:09:37.667 18:36:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:09:37.667 18:36:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:37.667 18:36:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 112543 00:09:37.667 18:36:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:37.667 18:36:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:37.667 18:36:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 112543' 00:09:37.667 killing process with pid 112543 00:09:37.667 18:36:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 112543 00:09:37.667 18:36:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 112543 00:09:40.198 00:09:40.198 real 0m5.196s 00:09:40.198 user 0m5.576s 00:09:40.198 sys 0m0.890s 00:09:40.198 18:36:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:40.198 18:36:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:40.198 ************************************ 00:09:40.198 END TEST exit_on_failed_rpc_init 00:09:40.198 ************************************ 00:09:40.457 18:36:40 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:40.457 00:09:40.457 real 0m25.583s 00:09:40.457 user 0m24.215s 00:09:40.457 sys 0m2.980s 00:09:40.457 18:36:40 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:40.457 18:36:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.457 ************************************ 00:09:40.457 END TEST skip_rpc 00:09:40.457 ************************************ 00:09:40.457 18:36:40 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:40.457 18:36:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:40.457 18:36:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.457 18:36:40 -- common/autotest_common.sh@10 -- # set +x 
00:09:40.457 ************************************ 00:09:40.457 START TEST rpc_client 00:09:40.457 ************************************ 00:09:40.457 18:36:40 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:40.457 * Looking for test storage... 00:09:40.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:40.457 18:36:40 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:40.717 OK 00:09:40.717 18:36:41 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:40.717 00:09:40.717 real 0m0.193s 00:09:40.717 user 0m0.097s 00:09:40.717 sys 0m0.108s 00:09:40.717 18:36:41 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:40.717 18:36:41 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:40.717 ************************************ 00:09:40.717 END TEST rpc_client 00:09:40.717 ************************************ 00:09:40.717 18:36:41 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:40.717 18:36:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:40.717 18:36:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.717 18:36:41 -- common/autotest_common.sh@10 -- # set +x 00:09:40.717 ************************************ 00:09:40.717 START TEST json_config 00:09:40.717 ************************************ 00:09:40.717 18:36:41 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:40.717 18:36:41 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7e1a34d8-d6dd-424f-a046-25cbcf4cc8a7 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=7e1a34d8-d6dd-424f-a046-25cbcf4cc8a7 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:40.717 18:36:41 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.717 18:36:41 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.717 18:36:41 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.717 18:36:41 json_config -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:40.717 18:36:41 json_config -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:40.717 18:36:41 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:40.717 18:36:41 json_config -- paths/export.sh@5 -- # export PATH 00:09:40.717 18:36:41 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@47 -- # : 0 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:40.717 18:36:41 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:40.717 18:36:41 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:40.717 18:36:41 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:40.717 18:36:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:40.717 18:36:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:40.717 18:36:41 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:40.717 18:36:41 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:40.717 18:36:41 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:40.717 18:36:41 
json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:40.717 18:36:41 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:40.717 18:36:41 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:40.717 18:36:41 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:40.717 18:36:41 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:09:40.717 18:36:41 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:40.717 18:36:41 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:40.717 18:36:41 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:40.717 INFO: JSON configuration test init 00:09:40.717 18:36:41 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:09:40.717 18:36:41 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:09:40.717 18:36:41 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:09:40.717 18:36:41 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:40.717 18:36:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:40.717 18:36:41 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:09:40.717 18:36:41 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:40.717 18:36:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:40.717 18:36:41 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:09:40.717 18:36:41 json_config -- json_config/common.sh@9 -- # local app=target 00:09:40.717 18:36:41 json_config -- json_config/common.sh@10 -- # shift 00:09:40.717 18:36:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:40.717 18:36:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:40.717 18:36:41 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:40.717 18:36:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:40.717 18:36:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:40.717 18:36:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=112735 00:09:40.717 18:36:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:40.717 Waiting for target to run... 00:09:40.717 18:36:41 json_config -- json_config/common.sh@25 -- # waitforlisten 112735 /var/tmp/spdk_tgt.sock 00:09:40.717 18:36:41 json_config -- common/autotest_common.sh@831 -- # '[' -z 112735 ']' 00:09:40.717 18:36:41 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:40.718 18:36:41 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:40.718 18:36:41 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:40.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
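What follows is the target being brought up for the JSON-config round trip: spdk_tgt is launched with --wait-for-rpc so it idles until configured, and every later tgt_rpc call in this log is rpc.py talking to it over /var/tmp/spdk_tgt.sock. Roughly, by hand (commands lifted from the trace, paths shortened to the repo root; piping gen_nvme.sh output into load_config is an assumption about how the script wires the two together):
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  scripts/gen_nvme.sh --json-with-subsystems | scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types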
00:09:40.718 18:36:41 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:40.718 18:36:41 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:40.718 18:36:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:40.976 [2024-07-25 18:36:41.360568] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:40.976 [2024-07-25 18:36:41.360838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112735 ] 00:09:41.543 [2024-07-25 18:36:41.952525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.803 [2024-07-25 18:36:42.145586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.803 18:36:42 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:41.803 18:36:42 json_config -- common/autotest_common.sh@864 -- # return 0 00:09:41.803 00:09:41.803 18:36:42 json_config -- json_config/common.sh@26 -- # echo '' 00:09:41.803 18:36:42 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:09:41.803 18:36:42 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:09:41.803 18:36:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:41.803 18:36:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:41.803 18:36:42 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:09:41.803 18:36:42 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:09:41.803 18:36:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:41.803 18:36:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:41.803 18:36:42 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:41.803 18:36:42 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:09:41.803 18:36:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:42.740 18:36:43 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:09:42.740 18:36:43 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:42.740 18:36:43 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:42.740 18:36:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:42.740 18:36:43 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:42.740 18:36:43 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:42.740 18:36:43 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:09:42.740 18:36:43 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:09:42.740 18:36:43 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:09:42.740 18:36:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:42.996 18:36:43 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:09:42.996 18:36:43 json_config -- json_config/json_config.sh@48 -- # local 
get_types 00:09:42.996 18:36:43 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:09:42.996 18:36:43 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:09:42.996 18:36:43 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:09:42.996 18:36:43 json_config -- json_config/json_config.sh@51 -- # sort 00:09:42.996 18:36:43 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:09:42.996 18:36:43 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:09:42.996 18:36:43 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:09:42.996 18:36:43 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:09:42.996 18:36:43 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:42.996 18:36:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:42.996 18:36:43 json_config -- json_config/json_config.sh@59 -- # return 0 00:09:42.996 18:36:43 json_config -- json_config/json_config.sh@282 -- # [[ 1 -eq 1 ]] 00:09:42.996 18:36:43 json_config -- json_config/json_config.sh@283 -- # create_bdev_subsystem_config 00:09:42.996 18:36:43 json_config -- json_config/json_config.sh@109 -- # timing_enter create_bdev_subsystem_config 00:09:42.996 18:36:43 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:42.996 18:36:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:42.996 18:36:43 json_config -- json_config/json_config.sh@111 -- # expected_notifications=() 00:09:42.996 18:36:43 json_config -- json_config/json_config.sh@111 -- # local expected_notifications 00:09:42.996 18:36:43 json_config -- json_config/json_config.sh@115 -- # expected_notifications+=($(get_notifications)) 00:09:42.996 18:36:43 json_config -- json_config/json_config.sh@115 -- # get_notifications 00:09:42.996 18:36:43 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:09:42.996 18:36:43 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:42.996 18:36:43 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:42.996 18:36:43 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:09:42.996 18:36:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:42.996 18:36:43 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:43.253 18:36:43 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:09:43.253 18:36:43 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:43.253 18:36:43 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:43.253 18:36:43 json_config -- json_config/json_config.sh@117 -- # [[ 1 -eq 1 ]] 00:09:43.253 18:36:43 json_config -- json_config/json_config.sh@118 -- # local lvol_store_base_bdev=Nvme0n1 00:09:43.253 18:36:43 json_config -- json_config/json_config.sh@120 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:09:43.253 18:36:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:09:43.521 Nvme0n1p0 Nvme0n1p1 00:09:43.521 18:36:43 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_split_create Malloc0 3 00:09:43.521 18:36:43 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:09:43.793 [2024-07-25 18:36:44.100997] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:43.793 [2024-07-25 18:36:44.101285] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:43.793 00:09:43.793 18:36:44 json_config -- json_config/json_config.sh@122 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:09:43.793 18:36:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:09:43.793 Malloc3 00:09:43.793 18:36:44 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:43.793 18:36:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:44.059 [2024-07-25 18:36:44.550397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:44.059 [2024-07-25 18:36:44.550653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.059 [2024-07-25 18:36:44.550802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:44.059 [2024-07-25 18:36:44.550913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.059 [2024-07-25 18:36:44.553575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.059 [2024-07-25 18:36:44.553745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:44.059 PTBdevFromMalloc3 00:09:44.059 18:36:44 json_config -- json_config/json_config.sh@125 -- # tgt_rpc bdev_null_create Null0 32 512 00:09:44.059 18:36:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:09:44.318 Null0 00:09:44.318 18:36:44 json_config -- json_config/json_config.sh@127 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:09:44.318 18:36:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:09:44.576 Malloc0 00:09:44.576 18:36:44 json_config -- json_config/json_config.sh@128 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:09:44.576 18:36:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:09:44.576 Malloc1 00:09:44.835 18:36:45 json_config -- json_config/json_config.sh@141 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:09:44.835 18:36:45 json_config -- json_config/json_config.sh@144 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:09:45.094 102400+0 records in 00:09:45.094 102400+0 records out 00:09:45.094 104857600 bytes (105 MB, 100 MiB) copied, 0.340392 s, 308 MB/s 00:09:45.094 18:36:45 json_config -- json_config/json_config.sh@145 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:09:45.094 18:36:45 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:09:45.353 aio_disk 00:09:45.353 18:36:45 json_config -- json_config/json_config.sh@146 -- # expected_notifications+=(bdev_register:aio_disk) 00:09:45.353 18:36:45 json_config -- json_config/json_config.sh@151 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:45.353 18:36:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:45.612 da1565df-b235-47e0-872f-9b3bf7166b4a 00:09:45.612 18:36:45 json_config -- json_config/json_config.sh@158 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:09:45.612 18:36:45 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:09:45.612 18:36:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:09:45.612 18:36:46 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:09:45.612 18:36:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:09:45.871 18:36:46 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:45.871 18:36:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:46.131 18:36:46 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:46.131 18:36:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:46.391 18:36:46 json_config -- json_config/json_config.sh@161 -- # [[ 0 -eq 1 ]] 00:09:46.391 18:36:46 json_config -- json_config/json_config.sh@176 -- # [[ 0 -eq 1 ]] 00:09:46.391 18:36:46 json_config -- json_config/json_config.sh@182 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:254a9da6-6b4a-44a9-a9f5-47da6dc7b380 bdev_register:068fd60c-b050-4cd8-88b6-75cb99259792 bdev_register:53e4e729-5df8-4eb3-bcc0-b32e2fd5023d bdev_register:0a6d8fad-8fbd-42a0-bc8c-3ec0f489a79f 00:09:46.391 18:36:46 json_config -- json_config/json_config.sh@71 -- # local events_to_check 00:09:46.391 18:36:46 json_config -- json_config/json_config.sh@72 -- # local recorded_events 00:09:46.391 18:36:46 json_config -- json_config/json_config.sh@75 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:09:46.391 18:36:46 json_config -- json_config/json_config.sh@75 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 
bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:254a9da6-6b4a-44a9-a9f5-47da6dc7b380 bdev_register:068fd60c-b050-4cd8-88b6-75cb99259792 bdev_register:53e4e729-5df8-4eb3-bcc0-b32e2fd5023d bdev_register:0a6d8fad-8fbd-42a0-bc8c-3ec0f489a79f 00:09:46.391 18:36:46 json_config -- json_config/json_config.sh@75 -- # sort 00:09:46.391 18:36:46 json_config -- json_config/json_config.sh@76 -- # recorded_events=($(get_notifications | sort)) 00:09:46.391 18:36:46 json_config -- json_config/json_config.sh@76 -- # get_notifications 00:09:46.391 18:36:46 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:09:46.391 18:36:46 json_config -- json_config/json_config.sh@76 -- # sort 00:09:46.391 18:36:46 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:46.391 18:36:46 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:46.391 18:36:46 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:09:46.391 18:36:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:46.391 18:36:46 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p1 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p0 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc3 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:PTBdevFromMalloc3 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Null0 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p2 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:46.651 18:36:47 json_config -- 
json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p1 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p0 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc1 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:aio_disk 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:254a9da6-6b4a-44a9-a9f5-47da6dc7b380 00:09:46.651 18:36:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:46.652 18:36:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:46.652 18:36:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:068fd60c-b050-4cd8-88b6-75cb99259792 00:09:46.652 18:36:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:46.652 18:36:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:46.652 18:36:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:53e4e729-5df8-4eb3-bcc0-b32e2fd5023d 00:09:46.652 18:36:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:46.652 18:36:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:46.652 18:36:47 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:0a6d8fad-8fbd-42a0-bc8c-3ec0f489a79f 00:09:46.652 18:36:47 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:09:46.652 18:36:47 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:09:46.652 18:36:47 json_config -- json_config/json_config.sh@78 -- # [[ bdev_register:068fd60c-b050-4cd8-88b6-75cb99259792 bdev_register:0a6d8fad-8fbd-42a0-bc8c-3ec0f489a79f bdev_register:254a9da6-6b4a-44a9-a9f5-47da6dc7b380 bdev_register:53e4e729-5df8-4eb3-bcc0-b32e2fd5023d bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\0\6\8\f\d\6\0\c\-\b\0\5\0\-\4\c\d\8\-\8\8\b\6\-\7\5\c\b\9\9\2\5\9\7\9\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\0\a\6\d\8\f\a\d\-\8\f\b\d\-\4\2\a\0\-\b\c\8\c\-\3\e\c\0\f\4\8\9\a\7\9\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\2\5\4\a\9\d\a\6\-\6\b\4\a\-\4\4\a\9\-\a\9\f\5\-\4\7\d\a\6\d\c\7\b\3\8\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\5\3\e\4\e\7\2\9\-\5\d\f\8\-\4\e\b\3\-\b\c\c\0\-\b\3\2\e\2\f\d\5\0\2\3\d\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ 
\b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:09:46.652 18:36:47 json_config -- json_config/json_config.sh@90 -- # cat 00:09:46.652 18:36:47 json_config -- json_config/json_config.sh@90 -- # printf ' %s\n' bdev_register:068fd60c-b050-4cd8-88b6-75cb99259792 bdev_register:0a6d8fad-8fbd-42a0-bc8c-3ec0f489a79f bdev_register:254a9da6-6b4a-44a9-a9f5-47da6dc7b380 bdev_register:53e4e729-5df8-4eb3-bcc0-b32e2fd5023d bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk 00:09:46.652 Expected events matched: 00:09:46.652 bdev_register:068fd60c-b050-4cd8-88b6-75cb99259792 00:09:46.652 bdev_register:0a6d8fad-8fbd-42a0-bc8c-3ec0f489a79f 00:09:46.652 bdev_register:254a9da6-6b4a-44a9-a9f5-47da6dc7b380 00:09:46.652 bdev_register:53e4e729-5df8-4eb3-bcc0-b32e2fd5023d 00:09:46.652 bdev_register:Malloc0 00:09:46.652 bdev_register:Malloc0p0 00:09:46.652 bdev_register:Malloc0p1 00:09:46.652 bdev_register:Malloc0p2 00:09:46.652 bdev_register:Malloc1 00:09:46.652 bdev_register:Malloc3 00:09:46.652 bdev_register:Null0 00:09:46.652 bdev_register:Nvme0n1 00:09:46.652 bdev_register:Nvme0n1p0 00:09:46.652 bdev_register:Nvme0n1p1 00:09:46.652 bdev_register:PTBdevFromMalloc3 00:09:46.652 bdev_register:aio_disk 00:09:46.652 18:36:47 json_config -- json_config/json_config.sh@184 -- # timing_exit create_bdev_subsystem_config 00:09:46.652 18:36:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:46.652 18:36:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:46.652 18:36:47 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:09:46.652 18:36:47 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:09:46.652 18:36:47 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:09:46.652 18:36:47 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:09:46.652 18:36:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:46.652 18:36:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:46.652 18:36:47 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:09:46.652 18:36:47 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:46.652 18:36:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:46.912 MallocBdevForConfigChangeCheck 00:09:46.912 18:36:47 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:09:46.912 18:36:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:46.912 18:36:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:46.912 18:36:47 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:09:46.912 18:36:47 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:47.480 18:36:47 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:09:47.480 INFO: shutting down applications... 00:09:47.480 18:36:47 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:09:47.480 18:36:47 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:09:47.480 18:36:47 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:09:47.481 18:36:47 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:47.481 [2024-07-25 18:36:47.930850] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:09:47.740 Calling clear_vhost_scsi_subsystem 00:09:47.740 Calling clear_iscsi_subsystem 00:09:47.740 Calling clear_vhost_blk_subsystem 00:09:47.740 Calling clear_nbd_subsystem 00:09:47.740 Calling clear_nvmf_subsystem 00:09:47.740 Calling clear_bdev_subsystem 00:09:47.740 18:36:48 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:09:47.740 18:36:48 json_config -- json_config/json_config.sh@347 -- # count=100 00:09:47.740 18:36:48 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:09:47.740 18:36:48 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:47.740 18:36:48 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:47.740 18:36:48 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:09:47.999 18:36:48 json_config -- json_config/json_config.sh@349 -- # break 00:09:47.999 18:36:48 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:09:47.999 18:36:48 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:09:47.999 18:36:48 json_config -- json_config/common.sh@31 -- # local app=target 00:09:47.999 18:36:48 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:47.999 18:36:48 json_config -- json_config/common.sh@35 -- # [[ -n 112735 ]] 00:09:47.999 18:36:48 json_config -- json_config/common.sh@38 -- # kill -SIGINT 112735 00:09:47.999 18:36:48 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:47.999 18:36:48 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:47.999 18:36:48 json_config -- json_config/common.sh@41 -- # kill -0 112735 00:09:47.999 18:36:48 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:09:48.568 18:36:49 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:48.568 18:36:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:48.568 18:36:49 json_config -- json_config/common.sh@41 -- # kill -0 112735 00:09:48.569 18:36:49 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:09:49.137 18:36:49 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:49.137 18:36:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:49.137 18:36:49 json_config -- json_config/common.sh@41 -- # kill -0 112735 00:09:49.137 18:36:49 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:09:49.706 18:36:50 json_config -- json_config/common.sh@40 -- # (( i++ )) 
00:09:49.706 18:36:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:49.706 18:36:50 json_config -- json_config/common.sh@41 -- # kill -0 112735 00:09:49.706 18:36:50 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:49.706 18:36:50 json_config -- json_config/common.sh@43 -- # break 00:09:49.706 18:36:50 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:49.706 SPDK target shutdown done 00:09:49.706 18:36:50 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:49.706 INFO: relaunching applications... 00:09:49.706 18:36:50 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:09:49.706 18:36:50 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:49.706 18:36:50 json_config -- json_config/common.sh@9 -- # local app=target 00:09:49.706 18:36:50 json_config -- json_config/common.sh@10 -- # shift 00:09:49.706 18:36:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:49.706 18:36:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:49.706 18:36:50 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:49.706 18:36:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:49.706 18:36:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:49.706 18:36:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=113007 00:09:49.706 Waiting for target to run... 00:09:49.706 18:36:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:49.706 18:36:50 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:49.706 18:36:50 json_config -- json_config/common.sh@25 -- # waitforlisten 113007 /var/tmp/spdk_tgt.sock 00:09:49.706 18:36:50 json_config -- common/autotest_common.sh@831 -- # '[' -z 113007 ']' 00:09:49.706 18:36:50 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:49.706 18:36:50 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:49.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:49.706 18:36:50 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:49.706 18:36:50 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:49.706 18:36:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:49.706 [2024-07-25 18:36:50.137553] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:49.706 [2024-07-25 18:36:50.138474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113007 ] 00:09:50.275 [2024-07-25 18:36:50.732562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.534 [2024-07-25 18:36:50.933883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.103 [2024-07-25 18:36:51.618318] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:09:51.103 [2024-07-25 18:36:51.618703] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:09:51.103 [2024-07-25 18:36:51.626270] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:51.103 [2024-07-25 18:36:51.626429] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:51.103 [2024-07-25 18:36:51.634295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:51.103 [2024-07-25 18:36:51.634453] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:51.103 [2024-07-25 18:36:51.634560] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:51.363 [2024-07-25 18:36:51.730252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:51.363 [2024-07-25 18:36:51.730449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.363 [2024-07-25 18:36:51.730528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:51.363 [2024-07-25 18:36:51.730627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.363 [2024-07-25 18:36:51.731176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.363 [2024-07-25 18:36:51.731327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:51.363 18:36:51 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:51.363 18:36:51 json_config -- common/autotest_common.sh@864 -- # return 0 00:09:51.363 18:36:51 json_config -- json_config/common.sh@26 -- # echo '' 00:09:51.363 00:09:51.363 18:36:51 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:09:51.363 INFO: Checking if target configuration is the same... 00:09:51.363 18:36:51 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:51.363 18:36:51 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:51.363 18:36:51 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:09:51.363 18:36:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:51.363 + '[' 2 -ne 2 ']' 00:09:51.363 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:51.363 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:09:51.363 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:51.363 +++ basename /dev/fd/62 00:09:51.363 ++ mktemp /tmp/62.XXX 00:09:51.363 + tmp_file_1=/tmp/62.0YO 00:09:51.363 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:51.363 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:51.363 + tmp_file_2=/tmp/spdk_tgt_config.json.pfw 00:09:51.363 + ret=0 00:09:51.363 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:51.622 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:51.882 + diff -u /tmp/62.0YO /tmp/spdk_tgt_config.json.pfw 00:09:51.882 + echo 'INFO: JSON config files are the same' 00:09:51.882 INFO: JSON config files are the same 00:09:51.882 + rm /tmp/62.0YO /tmp/spdk_tgt_config.json.pfw 00:09:51.882 + exit 0 00:09:51.882 18:36:52 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:09:51.882 18:36:52 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:09:51.882 INFO: changing configuration and checking if this can be detected... 00:09:51.882 18:36:52 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:51.882 18:36:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:52.141 18:36:52 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:52.141 18:36:52 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:09:52.141 18:36:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:52.141 + '[' 2 -ne 2 ']' 00:09:52.141 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:52.141 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:52.141 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:52.141 +++ basename /dev/fd/62 00:09:52.141 ++ mktemp /tmp/62.XXX 00:09:52.141 + tmp_file_1=/tmp/62.2Jd 00:09:52.141 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:52.141 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:52.141 + tmp_file_2=/tmp/spdk_tgt_config.json.OSd 00:09:52.141 + ret=0 00:09:52.141 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:52.401 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:52.401 + diff -u /tmp/62.2Jd /tmp/spdk_tgt_config.json.OSd 00:09:52.401 + ret=1 00:09:52.401 + echo '=== Start of file: /tmp/62.2Jd ===' 00:09:52.401 + cat /tmp/62.2Jd 00:09:52.401 + echo '=== End of file: /tmp/62.2Jd ===' 00:09:52.401 + echo '' 00:09:52.401 + echo '=== Start of file: /tmp/spdk_tgt_config.json.OSd ===' 00:09:52.401 + cat /tmp/spdk_tgt_config.json.OSd 00:09:52.401 + echo '=== End of file: /tmp/spdk_tgt_config.json.OSd ===' 00:09:52.401 + echo '' 00:09:52.401 + rm /tmp/62.2Jd /tmp/spdk_tgt_config.json.OSd 00:09:52.401 + exit 1 00:09:52.401 18:36:52 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:09:52.401 INFO: configuration change detected. 
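Both comparisons above follow the same recipe from json_diff.sh: dump the live configuration with save_config, normalize it and the reference file with config_filter.py -method sort, then diff the two. The first pass is identical (exit 0); after MallocBdevForConfigChangeCheck is deleted the diff is non-empty, hence the 'configuration change detected' message. A condensed sketch, with placeholder /tmp names where the real script uses mktemp:
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
  test/json_config/config_filter.py -method sort < /tmp/live.json > /tmp/a
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/b
  diff -u /tmp/a /tmp/b && echo 'INFO: JSON config files are the same'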
00:09:52.401 18:36:52 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:09:52.401 18:36:52 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:09:52.401 18:36:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:52.401 18:36:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:52.401 18:36:52 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:09:52.401 18:36:52 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:09:52.401 18:36:52 json_config -- json_config/json_config.sh@321 -- # [[ -n 113007 ]] 00:09:52.401 18:36:52 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:09:52.401 18:36:52 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:09:52.401 18:36:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:52.401 18:36:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:52.401 18:36:52 json_config -- json_config/json_config.sh@190 -- # [[ 1 -eq 1 ]] 00:09:52.401 18:36:52 json_config -- json_config/json_config.sh@191 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:09:52.401 18:36:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:09:52.660 18:36:53 json_config -- json_config/json_config.sh@192 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:09:52.660 18:36:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:09:52.924 18:36:53 json_config -- json_config/json_config.sh@193 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:09:52.924 18:36:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:09:52.924 18:36:53 json_config -- json_config/json_config.sh@194 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:09:52.925 18:36:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:09:53.186 18:36:53 json_config -- json_config/json_config.sh@197 -- # uname -s 00:09:53.186 18:36:53 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:09:53.186 18:36:53 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:09:53.186 18:36:53 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:09:53.186 18:36:53 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:09:53.186 18:36:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:53.186 18:36:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:53.186 18:36:53 json_config -- json_config/json_config.sh@327 -- # killprocess 113007 00:09:53.186 18:36:53 json_config -- common/autotest_common.sh@950 -- # '[' -z 113007 ']' 00:09:53.186 18:36:53 json_config -- common/autotest_common.sh@954 -- # kill -0 113007 00:09:53.186 18:36:53 json_config -- common/autotest_common.sh@955 -- # uname 00:09:53.186 18:36:53 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:53.186 18:36:53 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 113007 00:09:53.186 18:36:53 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:53.187 18:36:53 json_config 
-- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:53.187 18:36:53 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 113007' 00:09:53.187 killing process with pid 113007 00:09:53.187 18:36:53 json_config -- common/autotest_common.sh@969 -- # kill 113007 00:09:53.187 18:36:53 json_config -- common/autotest_common.sh@974 -- # wait 113007 00:09:54.565 18:36:55 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:54.565 18:36:55 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:09:54.565 18:36:55 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:54.565 18:36:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:54.565 18:36:55 json_config -- json_config/json_config.sh@332 -- # return 0 00:09:54.565 INFO: Success 00:09:54.565 18:36:55 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:09:54.565 00:09:54.565 real 0m13.963s 00:09:54.565 user 0m18.261s 00:09:54.565 sys 0m3.324s 00:09:54.565 18:36:55 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:54.565 18:36:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:54.565 ************************************ 00:09:54.565 END TEST json_config 00:09:54.565 ************************************ 00:09:54.824 18:36:55 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:54.824 18:36:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:54.824 18:36:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:54.824 18:36:55 -- common/autotest_common.sh@10 -- # set +x 00:09:54.824 ************************************ 00:09:54.824 START TEST json_config_extra_key 00:09:54.824 ************************************ 00:09:54.824 18:36:55 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:54.824 18:36:55 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:54.824 18:36:55 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:54.824 18:36:55 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:54.824 18:36:55 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.824 18:36:55 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.824 18:36:55 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.824 18:36:55 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.824 18:36:55 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.824 18:36:55 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.824 18:36:55 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.824 18:36:55 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.824 18:36:55 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.824 18:36:55 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f220b442-3255-4403-8627-f51b020a0fd2 00:09:54.824 18:36:55 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f220b442-3255-4403-8627-f51b020a0fd2 00:09:54.824 
18:36:55 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.824 18:36:55 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.824 18:36:55 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:54.824 18:36:55 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.824 18:36:55 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:54.824 18:36:55 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.824 18:36:55 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.825 18:36:55 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.825 18:36:55 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:54.825 18:36:55 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:54.825 18:36:55 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:54.825 18:36:55 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:54.825 18:36:55 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:54.825 18:36:55 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:09:54.825 18:36:55 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:54.825 18:36:55 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:54.825 18:36:55 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.825 18:36:55 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.825 18:36:55 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.825 18:36:55 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:54.825 18:36:55 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:54.825 18:36:55 json_config_extra_key -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:09:54.825 18:36:55 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:54.825 18:36:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:54.825 18:36:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:54.825 18:36:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:54.825 18:36:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:54.825 18:36:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:54.825 18:36:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:54.825 18:36:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:54.825 18:36:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:54.825 18:36:55 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:54.825 INFO: launching applications... 00:09:54.825 18:36:55 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:09:54.825 18:36:55 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:54.825 18:36:55 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:54.825 18:36:55 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:54.825 18:36:55 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:54.825 18:36:55 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:54.825 18:36:55 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:54.825 18:36:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:54.825 18:36:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:54.825 18:36:55 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=113193 00:09:54.825 Waiting for target to run... 00:09:54.825 18:36:55 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:54.825 18:36:55 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 113193 /var/tmp/spdk_tgt.sock 00:09:54.825 18:36:55 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 113193 ']' 00:09:54.825 18:36:55 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:54.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:54.825 18:36:55 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:54.825 18:36:55 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
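Note: the associative arrays declared above (app_pid, app_socket, app_params, configs_path) feed the start helper in test/json_config/common.sh. A minimal bash approximation of that helper, reconstructed from this trace rather than copied from the script (here $rootdir stands for /home/vagrant/spdk_repo/spdk), is:

    json_config_test_start_app() {
        local app=$1; shift
        # launch the target in the background with the per-app core mask, RPC socket and JSON config
        "$rootdir/build/bin/spdk_tgt" ${app_params[$app]} -r "${app_socket[$app]}" "$@" &
        app_pid[$app]=$!
        # waitforlisten (autotest_common.sh) polls until the pid is alive and the RPC socket accepts connections
        waitforlisten "${app_pid[$app]}" "${app_socket[$app]}"
    }
    json_config_test_start_app target --json "$rootdir/test/json_config/extra_key.json"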
00:09:54.825 18:36:55 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:54.825 18:36:55 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:54.825 18:36:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:55.084 [2024-07-25 18:36:55.398671] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:55.084 [2024-07-25 18:36:55.398963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113193 ] 00:09:55.652 [2024-07-25 18:36:56.025426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.652 [2024-07-25 18:36:56.222220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.627 18:36:56 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:56.627 18:36:56 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:09:56.627 18:36:56 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:56.627 00:09:56.627 18:36:56 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:09:56.627 INFO: shutting down applications... 00:09:56.627 18:36:56 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:56.627 18:36:56 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:56.627 18:36:56 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:56.627 18:36:56 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 113193 ]] 00:09:56.627 18:36:56 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 113193 00:09:56.627 18:36:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:56.627 18:36:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:56.627 18:36:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113193 00:09:56.627 18:36:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:56.886 18:36:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:56.886 18:36:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:56.886 18:36:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113193 00:09:56.886 18:36:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:57.454 18:36:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:57.454 18:36:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:57.454 18:36:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113193 00:09:57.454 18:36:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:58.022 18:36:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:58.022 18:36:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:58.022 18:36:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113193 00:09:58.022 18:36:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:58.591 18:36:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 
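Note: the kill -SIGINT above and the repeated "(( i++ )) / kill -0 113193 / sleep 0.5" entries around this point are the shutdown poll loop in test/json_config/common.sh stepping half a second at a time. A simplified reconstruction of that loop (not the verbatim helper) is:

    json_config_test_shutdown_app() {
        local app=$1
        kill -SIGINT "${app_pid[$app]}"                       # ask spdk_tgt to exit cleanly
        for (( i = 0; i < 30; i++ )); do                      # give it up to ~15 seconds
            kill -0 "${app_pid[$app]}" 2>/dev/null || break   # kill -0 fails once the process is gone
            sleep 0.5
        done
        (( i < 30 )) && echo 'SPDK target shutdown done'
    }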
00:09:58.591 18:36:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:58.591 18:36:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113193 00:09:58.591 18:36:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:58.851 18:36:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:58.851 18:36:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:58.851 18:36:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113193 00:09:58.851 18:36:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:59.419 18:36:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:59.419 18:36:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:59.419 18:36:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113193 00:09:59.419 18:36:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:59.986 18:37:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:59.986 18:37:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:59.986 18:37:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 113193 00:09:59.986 18:37:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:59.986 18:37:00 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:59.986 18:37:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:59.986 SPDK target shutdown done 00:09:59.986 18:37:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:59.986 Success 00:09:59.986 18:37:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:59.986 00:09:59.986 real 0m5.213s 00:09:59.986 user 0m4.481s 00:09:59.986 sys 0m0.850s 00:09:59.986 18:37:00 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:59.986 18:37:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:59.986 ************************************ 00:09:59.986 END TEST json_config_extra_key 00:09:59.986 ************************************ 00:09:59.986 18:37:00 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:59.986 18:37:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:59.986 18:37:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.986 18:37:00 -- common/autotest_common.sh@10 -- # set +x 00:09:59.986 ************************************ 00:09:59.986 START TEST alias_rpc 00:09:59.986 ************************************ 00:09:59.986 18:37:00 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:00.245 * Looking for test storage... 
00:10:00.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:00.245 18:37:00 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:00.245 18:37:00 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=113314 00:10:00.245 18:37:00 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 113314 00:10:00.245 18:37:00 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 113314 ']' 00:10:00.245 18:37:00 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.245 18:37:00 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:00.245 18:37:00 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.245 18:37:00 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:00.245 18:37:00 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:00.245 18:37:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.245 [2024-07-25 18:37:00.653205] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:00.245 [2024-07-25 18:37:00.653523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113314 ] 00:10:00.245 [2024-07-25 18:37:00.817272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.502 [2024-07-25 18:37:01.036339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.436 18:37:01 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:01.436 18:37:01 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:01.436 18:37:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:01.695 18:37:02 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 113314 00:10:01.695 18:37:02 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 113314 ']' 00:10:01.695 18:37:02 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 113314 00:10:01.695 18:37:02 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:10:01.695 18:37:02 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:01.695 18:37:02 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 113314 00:10:01.695 18:37:02 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:01.695 18:37:02 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:01.695 18:37:02 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 113314' 00:10:01.695 killing process with pid 113314 00:10:01.695 18:37:02 alias_rpc -- common/autotest_common.sh@969 -- # kill 113314 00:10:01.695 18:37:02 alias_rpc -- common/autotest_common.sh@974 -- # wait 113314 00:10:04.982 00:10:04.982 real 0m4.363s 00:10:04.982 user 0m4.225s 00:10:04.982 sys 0m0.721s 00:10:04.982 18:37:04 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:04.982 18:37:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:04.982 ************************************ 00:10:04.982 END TEST alias_rpc 00:10:04.982 ************************************ 
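Note: the alias_rpc run recorded above follows a very small pattern; a condensed reconstruction of test/json_config/alias_rpc/alias_rpc.sh, based on this trace rather than the verbatim script and with the config input left as a placeholder, is:

    "$rootdir/build/bin/spdk_tgt" &                           # plain target, default RPC socket /var/tmp/spdk.sock
    spdk_tgt_pid=$!
    trap 'killprocess $spdk_tgt_pid; exit 1' ERR
    waitforlisten "$spdk_tgt_pid"
    # -i lets the config use deprecated alias method names, which is what this test exercises;
    # config.json is a placeholder for the JSON the script feeds on stdin
    "$rootdir/scripts/rpc.py" load_config -i < config.json
    killprocess "$spdk_tgt_pid"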
00:10:04.982 18:37:04 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:10:04.982 18:37:04 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:04.982 18:37:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:04.982 18:37:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:04.982 18:37:04 -- common/autotest_common.sh@10 -- # set +x 00:10:04.982 ************************************ 00:10:04.982 START TEST spdkcli_tcp 00:10:04.982 ************************************ 00:10:04.982 18:37:04 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:04.982 * Looking for test storage... 00:10:04.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:04.982 18:37:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:04.982 18:37:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:04.982 18:37:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:04.982 18:37:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:04.982 18:37:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:04.982 18:37:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:04.982 18:37:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:04.982 18:37:05 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:04.982 18:37:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:04.982 18:37:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=113428 00:10:04.982 18:37:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 113428 00:10:04.982 18:37:05 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 113428 ']' 00:10:04.982 18:37:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:04.982 18:37:05 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.982 18:37:05 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:04.982 18:37:05 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.982 18:37:05 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:04.982 18:37:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:04.982 [2024-07-25 18:37:05.127961] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:04.982 [2024-07-25 18:37:05.128205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113428 ] 00:10:04.982 [2024-07-25 18:37:05.313156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:04.983 [2024-07-25 18:37:05.530564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.983 [2024-07-25 18:37:05.530566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.920 18:37:06 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:05.920 18:37:06 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:10:05.920 18:37:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=113457 00:10:05.920 18:37:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:05.920 18:37:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:06.179 [ 00:10:06.179 "spdk_get_version", 00:10:06.179 "rpc_get_methods", 00:10:06.179 "keyring_get_keys", 00:10:06.179 "trace_get_info", 00:10:06.179 "trace_get_tpoint_group_mask", 00:10:06.179 "trace_disable_tpoint_group", 00:10:06.179 "trace_enable_tpoint_group", 00:10:06.179 "trace_clear_tpoint_mask", 00:10:06.179 "trace_set_tpoint_mask", 00:10:06.179 "framework_get_pci_devices", 00:10:06.179 "framework_get_config", 00:10:06.179 "framework_get_subsystems", 00:10:06.179 "iobuf_get_stats", 00:10:06.179 "iobuf_set_options", 00:10:06.179 "sock_get_default_impl", 00:10:06.179 "sock_set_default_impl", 00:10:06.179 "sock_impl_set_options", 00:10:06.179 "sock_impl_get_options", 00:10:06.179 "vmd_rescan", 00:10:06.179 "vmd_remove_device", 00:10:06.179 "vmd_enable", 00:10:06.179 "accel_get_stats", 00:10:06.179 "accel_set_options", 00:10:06.179 "accel_set_driver", 00:10:06.179 "accel_crypto_key_destroy", 00:10:06.179 "accel_crypto_keys_get", 00:10:06.179 "accel_crypto_key_create", 00:10:06.179 "accel_assign_opc", 00:10:06.179 "accel_get_module_info", 00:10:06.179 "accel_get_opc_assignments", 00:10:06.179 "notify_get_notifications", 00:10:06.179 "notify_get_types", 00:10:06.179 "bdev_get_histogram", 00:10:06.179 "bdev_enable_histogram", 00:10:06.179 "bdev_set_qos_limit", 00:10:06.179 "bdev_set_qd_sampling_period", 00:10:06.179 "bdev_get_bdevs", 00:10:06.179 "bdev_reset_iostat", 00:10:06.179 "bdev_get_iostat", 00:10:06.179 "bdev_examine", 00:10:06.179 "bdev_wait_for_examine", 00:10:06.179 "bdev_set_options", 00:10:06.179 "scsi_get_devices", 00:10:06.179 "thread_set_cpumask", 00:10:06.179 "framework_get_governor", 00:10:06.179 "framework_get_scheduler", 00:10:06.179 "framework_set_scheduler", 00:10:06.179 "framework_get_reactors", 00:10:06.179 "thread_get_io_channels", 00:10:06.180 "thread_get_pollers", 00:10:06.180 "thread_get_stats", 00:10:06.180 "framework_monitor_context_switch", 00:10:06.180 "spdk_kill_instance", 00:10:06.180 "log_enable_timestamps", 00:10:06.180 "log_get_flags", 00:10:06.180 "log_clear_flag", 00:10:06.180 "log_set_flag", 00:10:06.180 "log_get_level", 00:10:06.180 "log_set_level", 00:10:06.180 "log_get_print_level", 00:10:06.180 "log_set_print_level", 00:10:06.180 "framework_enable_cpumask_locks", 00:10:06.180 "framework_disable_cpumask_locks", 00:10:06.180 "framework_wait_init", 00:10:06.180 "framework_start_init", 00:10:06.180 
"virtio_blk_create_transport", 00:10:06.180 "virtio_blk_get_transports", 00:10:06.180 "vhost_controller_set_coalescing", 00:10:06.180 "vhost_get_controllers", 00:10:06.180 "vhost_delete_controller", 00:10:06.180 "vhost_create_blk_controller", 00:10:06.180 "vhost_scsi_controller_remove_target", 00:10:06.180 "vhost_scsi_controller_add_target", 00:10:06.180 "vhost_start_scsi_controller", 00:10:06.180 "vhost_create_scsi_controller", 00:10:06.180 "nbd_get_disks", 00:10:06.180 "nbd_stop_disk", 00:10:06.180 "nbd_start_disk", 00:10:06.180 "env_dpdk_get_mem_stats", 00:10:06.180 "nvmf_stop_mdns_prr", 00:10:06.180 "nvmf_publish_mdns_prr", 00:10:06.180 "nvmf_subsystem_get_listeners", 00:10:06.180 "nvmf_subsystem_get_qpairs", 00:10:06.180 "nvmf_subsystem_get_controllers", 00:10:06.180 "nvmf_get_stats", 00:10:06.180 "nvmf_get_transports", 00:10:06.180 "nvmf_create_transport", 00:10:06.180 "nvmf_get_targets", 00:10:06.180 "nvmf_delete_target", 00:10:06.180 "nvmf_create_target", 00:10:06.180 "nvmf_subsystem_allow_any_host", 00:10:06.180 "nvmf_subsystem_remove_host", 00:10:06.180 "nvmf_subsystem_add_host", 00:10:06.180 "nvmf_ns_remove_host", 00:10:06.180 "nvmf_ns_add_host", 00:10:06.180 "nvmf_subsystem_remove_ns", 00:10:06.180 "nvmf_subsystem_add_ns", 00:10:06.180 "nvmf_subsystem_listener_set_ana_state", 00:10:06.180 "nvmf_discovery_get_referrals", 00:10:06.180 "nvmf_discovery_remove_referral", 00:10:06.180 "nvmf_discovery_add_referral", 00:10:06.180 "nvmf_subsystem_remove_listener", 00:10:06.180 "nvmf_subsystem_add_listener", 00:10:06.180 "nvmf_delete_subsystem", 00:10:06.180 "nvmf_create_subsystem", 00:10:06.180 "nvmf_get_subsystems", 00:10:06.180 "nvmf_set_crdt", 00:10:06.180 "nvmf_set_config", 00:10:06.180 "nvmf_set_max_subsystems", 00:10:06.180 "iscsi_get_histogram", 00:10:06.180 "iscsi_enable_histogram", 00:10:06.180 "iscsi_set_options", 00:10:06.180 "iscsi_get_auth_groups", 00:10:06.180 "iscsi_auth_group_remove_secret", 00:10:06.180 "iscsi_auth_group_add_secret", 00:10:06.180 "iscsi_delete_auth_group", 00:10:06.180 "iscsi_create_auth_group", 00:10:06.180 "iscsi_set_discovery_auth", 00:10:06.180 "iscsi_get_options", 00:10:06.180 "iscsi_target_node_request_logout", 00:10:06.180 "iscsi_target_node_set_redirect", 00:10:06.180 "iscsi_target_node_set_auth", 00:10:06.180 "iscsi_target_node_add_lun", 00:10:06.180 "iscsi_get_stats", 00:10:06.180 "iscsi_get_connections", 00:10:06.180 "iscsi_portal_group_set_auth", 00:10:06.180 "iscsi_start_portal_group", 00:10:06.180 "iscsi_delete_portal_group", 00:10:06.180 "iscsi_create_portal_group", 00:10:06.180 "iscsi_get_portal_groups", 00:10:06.180 "iscsi_delete_target_node", 00:10:06.180 "iscsi_target_node_remove_pg_ig_maps", 00:10:06.180 "iscsi_target_node_add_pg_ig_maps", 00:10:06.180 "iscsi_create_target_node", 00:10:06.180 "iscsi_get_target_nodes", 00:10:06.180 "iscsi_delete_initiator_group", 00:10:06.180 "iscsi_initiator_group_remove_initiators", 00:10:06.180 "iscsi_initiator_group_add_initiators", 00:10:06.180 "iscsi_create_initiator_group", 00:10:06.180 "iscsi_get_initiator_groups", 00:10:06.180 "keyring_linux_set_options", 00:10:06.180 "keyring_file_remove_key", 00:10:06.180 "keyring_file_add_key", 00:10:06.180 "iaa_scan_accel_module", 00:10:06.180 "dsa_scan_accel_module", 00:10:06.180 "ioat_scan_accel_module", 00:10:06.180 "accel_error_inject_error", 00:10:06.180 "bdev_iscsi_delete", 00:10:06.180 "bdev_iscsi_create", 00:10:06.180 "bdev_iscsi_set_options", 00:10:06.180 "bdev_virtio_attach_controller", 00:10:06.180 "bdev_virtio_scsi_get_devices", 00:10:06.180 
"bdev_virtio_detach_controller", 00:10:06.180 "bdev_virtio_blk_set_hotplug", 00:10:06.180 "bdev_ftl_set_property", 00:10:06.180 "bdev_ftl_get_properties", 00:10:06.180 "bdev_ftl_get_stats", 00:10:06.180 "bdev_ftl_unmap", 00:10:06.180 "bdev_ftl_unload", 00:10:06.180 "bdev_ftl_delete", 00:10:06.180 "bdev_ftl_load", 00:10:06.180 "bdev_ftl_create", 00:10:06.180 "bdev_aio_delete", 00:10:06.180 "bdev_aio_rescan", 00:10:06.180 "bdev_aio_create", 00:10:06.180 "blobfs_create", 00:10:06.180 "blobfs_detect", 00:10:06.180 "blobfs_set_cache_size", 00:10:06.180 "bdev_zone_block_delete", 00:10:06.180 "bdev_zone_block_create", 00:10:06.180 "bdev_delay_delete", 00:10:06.180 "bdev_delay_create", 00:10:06.180 "bdev_delay_update_latency", 00:10:06.180 "bdev_split_delete", 00:10:06.180 "bdev_split_create", 00:10:06.180 "bdev_error_inject_error", 00:10:06.180 "bdev_error_delete", 00:10:06.180 "bdev_error_create", 00:10:06.180 "bdev_raid_set_options", 00:10:06.180 "bdev_raid_remove_base_bdev", 00:10:06.180 "bdev_raid_add_base_bdev", 00:10:06.180 "bdev_raid_delete", 00:10:06.180 "bdev_raid_create", 00:10:06.180 "bdev_raid_get_bdevs", 00:10:06.180 "bdev_lvol_set_parent_bdev", 00:10:06.180 "bdev_lvol_set_parent", 00:10:06.180 "bdev_lvol_check_shallow_copy", 00:10:06.180 "bdev_lvol_start_shallow_copy", 00:10:06.180 "bdev_lvol_grow_lvstore", 00:10:06.180 "bdev_lvol_get_lvols", 00:10:06.180 "bdev_lvol_get_lvstores", 00:10:06.180 "bdev_lvol_delete", 00:10:06.180 "bdev_lvol_set_read_only", 00:10:06.180 "bdev_lvol_resize", 00:10:06.180 "bdev_lvol_decouple_parent", 00:10:06.180 "bdev_lvol_inflate", 00:10:06.180 "bdev_lvol_rename", 00:10:06.180 "bdev_lvol_clone_bdev", 00:10:06.180 "bdev_lvol_clone", 00:10:06.180 "bdev_lvol_snapshot", 00:10:06.180 "bdev_lvol_create", 00:10:06.180 "bdev_lvol_delete_lvstore", 00:10:06.180 "bdev_lvol_rename_lvstore", 00:10:06.180 "bdev_lvol_create_lvstore", 00:10:06.180 "bdev_passthru_delete", 00:10:06.180 "bdev_passthru_create", 00:10:06.180 "bdev_nvme_cuse_unregister", 00:10:06.180 "bdev_nvme_cuse_register", 00:10:06.180 "bdev_opal_new_user", 00:10:06.180 "bdev_opal_set_lock_state", 00:10:06.180 "bdev_opal_delete", 00:10:06.180 "bdev_opal_get_info", 00:10:06.180 "bdev_opal_create", 00:10:06.180 "bdev_nvme_opal_revert", 00:10:06.180 "bdev_nvme_opal_init", 00:10:06.180 "bdev_nvme_send_cmd", 00:10:06.180 "bdev_nvme_get_path_iostat", 00:10:06.180 "bdev_nvme_get_mdns_discovery_info", 00:10:06.180 "bdev_nvme_stop_mdns_discovery", 00:10:06.180 "bdev_nvme_start_mdns_discovery", 00:10:06.180 "bdev_nvme_set_multipath_policy", 00:10:06.180 "bdev_nvme_set_preferred_path", 00:10:06.180 "bdev_nvme_get_io_paths", 00:10:06.180 "bdev_nvme_remove_error_injection", 00:10:06.180 "bdev_nvme_add_error_injection", 00:10:06.180 "bdev_nvme_get_discovery_info", 00:10:06.180 "bdev_nvme_stop_discovery", 00:10:06.180 "bdev_nvme_start_discovery", 00:10:06.180 "bdev_nvme_get_controller_health_info", 00:10:06.180 "bdev_nvme_disable_controller", 00:10:06.180 "bdev_nvme_enable_controller", 00:10:06.180 "bdev_nvme_reset_controller", 00:10:06.180 "bdev_nvme_get_transport_statistics", 00:10:06.180 "bdev_nvme_apply_firmware", 00:10:06.180 "bdev_nvme_detach_controller", 00:10:06.180 "bdev_nvme_get_controllers", 00:10:06.180 "bdev_nvme_attach_controller", 00:10:06.180 "bdev_nvme_set_hotplug", 00:10:06.180 "bdev_nvme_set_options", 00:10:06.180 "bdev_null_resize", 00:10:06.180 "bdev_null_delete", 00:10:06.180 "bdev_null_create", 00:10:06.180 "bdev_malloc_delete", 00:10:06.180 "bdev_malloc_create" 00:10:06.180 ] 00:10:06.180 18:37:06 
spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:06.180 18:37:06 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:06.180 18:37:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:06.180 18:37:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:06.180 18:37:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 113428 00:10:06.180 18:37:06 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 113428 ']' 00:10:06.180 18:37:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 113428 00:10:06.180 18:37:06 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:10:06.180 18:37:06 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:06.180 18:37:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 113428 00:10:06.180 18:37:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:06.180 18:37:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:06.180 18:37:06 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 113428' 00:10:06.180 killing process with pid 113428 00:10:06.181 18:37:06 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 113428 00:10:06.181 18:37:06 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 113428 00:10:09.469 ************************************ 00:10:09.469 END TEST spdkcli_tcp 00:10:09.469 ************************************ 00:10:09.469 00:10:09.469 real 0m4.589s 00:10:09.469 user 0m7.864s 00:10:09.469 sys 0m0.875s 00:10:09.469 18:37:09 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:09.469 18:37:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:09.469 18:37:09 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:09.469 18:37:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:09.469 18:37:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:09.469 18:37:09 -- common/autotest_common.sh@10 -- # set +x 00:10:09.469 ************************************ 00:10:09.469 START TEST dpdk_mem_utility 00:10:09.469 ************************************ 00:10:09.469 18:37:09 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:09.469 * Looking for test storage... 00:10:09.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:09.469 18:37:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:09.469 18:37:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=113563 00:10:09.469 18:37:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 113563 00:10:09.469 18:37:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:09.469 18:37:09 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 113563 ']' 00:10:09.469 18:37:09 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.469 18:37:09 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:09.469 18:37:09 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
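Note: the large memory report printed below is not emitted by spdk_tgt on its own; test/dpdk_memory_utility/test_dpdk_mem_info.sh asks the running target to dump its DPDK memory layout over RPC and then post-processes the dump file with scripts/dpdk_mem_info.py. A condensed reconstruction of that flow (not the verbatim script) is:

    "$rootdir/build/bin/spdk_tgt" &
    spdkpid=$!
    waitforlisten "$spdkpid"
    "$rootdir/scripts/rpc.py" env_dpdk_get_mem_stats          # target writes /tmp/spdk_mem_dump.txt
    "$rootdir/scripts/dpdk_mem_info.py"                       # summary: heaps, mempools, memzones
    "$rootdir/scripts/dpdk_mem_info.py" -m 0                  # per-element detail for heap 0, as seen below
    killprocess "$spdkpid"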
00:10:09.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.469 18:37:09 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:09.469 18:37:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:09.469 [2024-07-25 18:37:09.761540] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:09.470 [2024-07-25 18:37:09.761823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113563 ] 00:10:09.470 [2024-07-25 18:37:09.944121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.765 [2024-07-25 18:37:10.159272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.703 18:37:11 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:10.703 18:37:11 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:10:10.703 18:37:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:10.703 18:37:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:10.703 18:37:11 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.703 18:37:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:10.703 { 00:10:10.703 "filename": "/tmp/spdk_mem_dump.txt" 00:10:10.703 } 00:10:10.703 18:37:11 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.703 18:37:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:10.703 DPDK memory size 820.000000 MiB in 1 heap(s) 00:10:10.703 1 heaps totaling size 820.000000 MiB 00:10:10.703 size: 820.000000 MiB heap id: 0 00:10:10.703 end heaps---------- 00:10:10.703 8 mempools totaling size 598.116089 MiB 00:10:10.703 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:10.703 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:10.703 size: 84.521057 MiB name: bdev_io_113563 00:10:10.703 size: 51.011292 MiB name: evtpool_113563 00:10:10.703 size: 50.003479 MiB name: msgpool_113563 00:10:10.703 size: 21.763794 MiB name: PDU_Pool 00:10:10.703 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:10.703 size: 0.026123 MiB name: Session_Pool 00:10:10.703 end mempools------- 00:10:10.703 6 memzones totaling size 4.142822 MiB 00:10:10.703 size: 1.000366 MiB name: RG_ring_0_113563 00:10:10.703 size: 1.000366 MiB name: RG_ring_1_113563 00:10:10.703 size: 1.000366 MiB name: RG_ring_4_113563 00:10:10.703 size: 1.000366 MiB name: RG_ring_5_113563 00:10:10.703 size: 0.125366 MiB name: RG_ring_2_113563 00:10:10.703 size: 0.015991 MiB name: RG_ring_3_113563 00:10:10.703 end memzones------- 00:10:10.703 18:37:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:10:10.703 heap id: 0 total size: 820.000000 MiB number of busy elements: 225 number of free elements: 18 00:10:10.703 list of free elements. 
size: 18.469971 MiB 00:10:10.703 element at address: 0x200000400000 with size: 1.999451 MiB 00:10:10.703 element at address: 0x200000800000 with size: 1.996887 MiB 00:10:10.703 element at address: 0x200007000000 with size: 1.995972 MiB 00:10:10.703 element at address: 0x20000b200000 with size: 1.995972 MiB 00:10:10.703 element at address: 0x200019100040 with size: 0.999939 MiB 00:10:10.703 element at address: 0x200019500040 with size: 0.999939 MiB 00:10:10.703 element at address: 0x200019600000 with size: 0.999329 MiB 00:10:10.703 element at address: 0x200003e00000 with size: 0.996094 MiB 00:10:10.703 element at address: 0x200032200000 with size: 0.994324 MiB 00:10:10.703 element at address: 0x200018e00000 with size: 0.959656 MiB 00:10:10.703 element at address: 0x200019900040 with size: 0.937256 MiB 00:10:10.703 element at address: 0x200000200000 with size: 0.834106 MiB 00:10:10.703 element at address: 0x20001b000000 with size: 0.561462 MiB 00:10:10.703 element at address: 0x200019200000 with size: 0.489197 MiB 00:10:10.703 element at address: 0x200019a00000 with size: 0.485413 MiB 00:10:10.703 element at address: 0x200013800000 with size: 0.469116 MiB 00:10:10.703 element at address: 0x200028400000 with size: 0.399719 MiB 00:10:10.703 element at address: 0x200003a00000 with size: 0.356140 MiB 00:10:10.703 list of standard malloc elements. size: 199.265625 MiB 00:10:10.703 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:10:10.703 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:10:10.703 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:10:10.703 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:10:10.703 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:10:10.703 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:10:10.703 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:10:10.703 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:10:10.703 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:10:10.703 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:10:10.703 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:10:10.703 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d6a80 with size: 0.000244 MiB 
00:10:10.703 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:10:10.703 element at address: 0x200003aff980 with size: 0.000244 MiB 00:10:10.703 element at address: 0x200003affa80 with size: 0.000244 MiB 00:10:10.703 element at address: 0x200003eff000 with size: 0.000244 MiB 00:10:10.703 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:10:10.703 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:10:10.703 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:10:10.703 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:10:10.703 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:10:10.703 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:10:10.703 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:10:10.703 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:10:10.703 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:10:10.703 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:10:10.703 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:10:10.703 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:10:10.703 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:10:10.703 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:10:10.703 element at address: 0x200013878180 with size: 0.000244 MiB 00:10:10.703 element at address: 0x200013878280 with size: 0.000244 MiB 00:10:10.703 element at address: 0x200013878380 with size: 0.000244 MiB 00:10:10.703 element at address: 0x200013878480 with size: 0.000244 MiB 00:10:10.704 element at 
address: 0x200013878580 with size: 0.000244 MiB 00:10:10.704 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:10:10.704 element at address: 0x200019abc680 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b08fbc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b08fcc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b08fdc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b08fec0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0920c0 
with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0951c0 with size: 0.000244 MiB 
00:10:10.704 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:10:10.704 element at address: 0x200028466540 with size: 0.000244 MiB 00:10:10.704 element at address: 0x200028466640 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846d300 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846d580 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846d680 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846d780 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846d880 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846d980 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846da80 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846db80 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846de80 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846df80 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846e080 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846e180 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846e280 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846e380 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846e480 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846e580 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846e680 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846e780 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846e880 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846e980 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846f080 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846f180 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846f280 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846f380 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846f480 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846f580 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846f680 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846f780 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846f880 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846f980 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:10:10.704 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:10:10.704 list of memzone associated elements. 
size: 602.264404 MiB 00:10:10.704 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:10:10.704 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:10.704 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:10:10.705 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:10.705 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:10:10.705 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_113563_0 00:10:10.705 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:10:10.705 associated memzone info: size: 48.002930 MiB name: MP_evtpool_113563_0 00:10:10.705 element at address: 0x200003fff340 with size: 48.003113 MiB 00:10:10.705 associated memzone info: size: 48.002930 MiB name: MP_msgpool_113563_0 00:10:10.705 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:10:10.705 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:10.705 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:10:10.705 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:10.705 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:10:10.705 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_113563 00:10:10.705 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:10:10.705 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_113563 00:10:10.705 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:10:10.705 associated memzone info: size: 1.007996 MiB name: MP_evtpool_113563 00:10:10.705 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:10:10.705 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:10.705 element at address: 0x200019abc780 with size: 1.008179 MiB 00:10:10.705 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:10.705 element at address: 0x200018efde00 with size: 1.008179 MiB 00:10:10.705 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:10.705 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:10:10.705 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:10.705 element at address: 0x200003eff100 with size: 1.000549 MiB 00:10:10.705 associated memzone info: size: 1.000366 MiB name: RG_ring_0_113563 00:10:10.705 element at address: 0x200003affb80 with size: 1.000549 MiB 00:10:10.705 associated memzone info: size: 1.000366 MiB name: RG_ring_1_113563 00:10:10.705 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:10:10.705 associated memzone info: size: 1.000366 MiB name: RG_ring_4_113563 00:10:10.705 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:10:10.705 associated memzone info: size: 1.000366 MiB name: RG_ring_5_113563 00:10:10.705 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:10:10.705 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_113563 00:10:10.705 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:10:10.705 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:10.705 element at address: 0x200013878680 with size: 0.500549 MiB 00:10:10.705 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:10.705 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:10:10.705 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:10.705 element at address: 0x200003adf740 with size: 0.125549 MiB 00:10:10.705 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_113563 00:10:10.705 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:10:10.705 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:10.705 element at address: 0x200028466740 with size: 0.023804 MiB 00:10:10.705 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:10.705 element at address: 0x200003adb500 with size: 0.016174 MiB 00:10:10.705 associated memzone info: size: 0.015991 MiB name: RG_ring_3_113563 00:10:10.705 element at address: 0x20002846c8c0 with size: 0.002502 MiB 00:10:10.705 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:10.705 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:10:10.705 associated memzone info: size: 0.000183 MiB name: MP_msgpool_113563 00:10:10.705 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:10:10.705 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_113563 00:10:10.705 element at address: 0x20002846d400 with size: 0.000366 MiB 00:10:10.705 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:10.705 18:37:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:10.705 18:37:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 113563 00:10:10.705 18:37:11 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 113563 ']' 00:10:10.705 18:37:11 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 113563 00:10:10.705 18:37:11 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:10:10.705 18:37:11 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:10.705 18:37:11 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 113563 00:10:10.705 killing process with pid 113563 00:10:10.705 18:37:11 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:10.705 18:37:11 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:10.705 18:37:11 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 113563' 00:10:10.705 18:37:11 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 113563 00:10:10.705 18:37:11 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 113563 00:10:13.994 ************************************ 00:10:13.994 END TEST dpdk_mem_utility 00:10:13.994 ************************************ 00:10:13.994 00:10:13.994 real 0m4.408s 00:10:13.994 user 0m4.166s 00:10:13.994 sys 0m0.726s 00:10:13.994 18:37:13 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:13.994 18:37:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:13.994 18:37:14 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:13.994 18:37:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:13.994 18:37:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.994 18:37:14 -- common/autotest_common.sh@10 -- # set +x 00:10:13.994 ************************************ 00:10:13.994 START TEST event 00:10:13.994 ************************************ 00:10:13.994 18:37:14 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:13.994 * Looking for test storage... 
00:10:13.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:13.994 18:37:14 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:13.994 18:37:14 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:13.994 18:37:14 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:13.994 18:37:14 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:10:13.994 18:37:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.994 18:37:14 event -- common/autotest_common.sh@10 -- # set +x 00:10:13.994 ************************************ 00:10:13.994 START TEST event_perf 00:10:13.994 ************************************ 00:10:13.994 18:37:14 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:13.994 Running I/O for 1 seconds...[2024-07-25 18:37:14.214680] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:13.994 [2024-07-25 18:37:14.215645] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113677 ] 00:10:13.994 [2024-07-25 18:37:14.417877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.253 [2024-07-25 18:37:14.642035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.253 [2024-07-25 18:37:14.642207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.253 [2024-07-25 18:37:14.642368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.253 Running I/O for 1 seconds...[2024-07-25 18:37:14.642376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.631 00:10:15.631 lcore 0: 202156 00:10:15.631 lcore 1: 202156 00:10:15.631 lcore 2: 202155 00:10:15.631 lcore 3: 202156 00:10:15.631 done. 00:10:15.631 00:10:15.631 real 0m1.960s 00:10:15.631 user 0m4.681s 00:10:15.631 sys 0m0.176s 00:10:15.631 18:37:16 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.631 ************************************ 00:10:15.631 END TEST event_perf 00:10:15.631 ************************************ 00:10:15.631 18:37:16 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:15.631 18:37:16 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:15.631 18:37:16 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:15.631 18:37:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:15.631 18:37:16 event -- common/autotest_common.sh@10 -- # set +x 00:10:15.631 ************************************ 00:10:15.631 START TEST event_reactor 00:10:15.631 ************************************ 00:10:15.631 18:37:16 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:15.891 [2024-07-25 18:37:16.226747] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:15.891 [2024-07-25 18:37:16.226921] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113730 ] 00:10:15.891 [2024-07-25 18:37:16.392754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.150 [2024-07-25 18:37:16.645619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.053 test_start 00:10:18.053 oneshot 00:10:18.053 tick 100 00:10:18.053 tick 100 00:10:18.053 tick 250 00:10:18.053 tick 100 00:10:18.053 tick 100 00:10:18.053 tick 100 00:10:18.053 tick 250 00:10:18.053 tick 500 00:10:18.053 tick 100 00:10:18.053 tick 100 00:10:18.053 tick 250 00:10:18.053 tick 100 00:10:18.053 tick 100 00:10:18.053 test_end 00:10:18.053 00:10:18.053 real 0m1.947s 00:10:18.053 user 0m1.710s 00:10:18.053 sys 0m0.137s 00:10:18.053 18:37:18 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:18.053 18:37:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:18.053 ************************************ 00:10:18.053 END TEST event_reactor 00:10:18.053 ************************************ 00:10:18.053 18:37:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:18.053 18:37:18 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:18.053 18:37:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.053 18:37:18 event -- common/autotest_common.sh@10 -- # set +x 00:10:18.053 ************************************ 00:10:18.053 START TEST event_reactor_perf 00:10:18.053 ************************************ 00:10:18.053 18:37:18 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:18.053 [2024-07-25 18:37:18.251639] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:18.053 [2024-07-25 18:37:18.251884] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113784 ] 00:10:18.053 [2024-07-25 18:37:18.441483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.311 [2024-07-25 18:37:18.724266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.687 test_start 00:10:19.687 test_end 00:10:19.687 Performance: 436231 events per second 00:10:19.687 00:10:19.687 real 0m2.001s 00:10:19.687 user 0m1.725s 00:10:19.687 sys 0m0.176s 00:10:19.687 18:37:20 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.687 ************************************ 00:10:19.687 END TEST event_reactor_perf 00:10:19.687 ************************************ 00:10:19.687 18:37:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:19.687 18:37:20 event -- event/event.sh@49 -- # uname -s 00:10:19.946 18:37:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:19.946 18:37:20 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:19.946 18:37:20 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:19.946 18:37:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.946 18:37:20 event -- common/autotest_common.sh@10 -- # set +x 00:10:19.946 ************************************ 00:10:19.946 START TEST event_scheduler 00:10:19.946 ************************************ 00:10:19.946 18:37:20 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:19.946 * Looking for test storage... 00:10:19.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:19.946 18:37:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:19.946 18:37:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=113862 00:10:19.946 18:37:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:19.946 18:37:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 113862 00:10:19.946 18:37:20 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 113862 ']' 00:10:19.946 18:37:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:19.946 18:37:20 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.946 18:37:20 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:19.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.946 18:37:20 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.946 18:37:20 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:19.946 18:37:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:19.946 [2024-07-25 18:37:20.512694] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:19.946 [2024-07-25 18:37:20.512932] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113862 ] 00:10:20.205 [2024-07-25 18:37:20.726861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:20.464 [2024-07-25 18:37:21.037229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.464 [2024-07-25 18:37:21.037449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.464 [2024-07-25 18:37:21.037600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.464 [2024-07-25 18:37:21.037607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.030 18:37:21 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:21.030 18:37:21 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:10:21.030 18:37:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:21.030 18:37:21 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.030 18:37:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:21.030 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:21.030 POWER: Cannot set governor of lcore 0 to userspace 00:10:21.030 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:21.030 POWER: Cannot set governor of lcore 0 to performance 00:10:21.030 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:21.030 POWER: Cannot set governor of lcore 0 to userspace 00:10:21.030 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:21.030 POWER: Cannot set governor of lcore 0 to userspace 00:10:21.030 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:10:21.031 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:21.031 POWER: Unable to set Power Management Environment for lcore 0 00:10:21.031 [2024-07-25 18:37:21.443921] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:10:21.031 [2024-07-25 18:37:21.443968] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:10:21.031 [2024-07-25 18:37:21.444000] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:10:21.031 [2024-07-25 18:37:21.444028] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:21.031 [2024-07-25 18:37:21.444058] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:21.031 [2024-07-25 18:37:21.444081] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:21.031 18:37:21 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.031 18:37:21 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:21.031 18:37:21 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.031 18:37:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:21.289 [2024-07-25 18:37:21.801908] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
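[editor's note] The block above is the scheduler app coming up under --wait-for-rpc: the test switches to the dynamic scheduler first and only then completes subsystem init. The POWER/GUEST_CHANNEL errors just mean no cpufreq governor is reachable inside the VM, so the dpdk governor is skipped and the dynamic scheduler keeps its defaults (load limit 20, core limit 80, core busy 95), exactly as the notices say. Driven by hand, the same sequence amounts to the following sketch (assuming the default /var/tmp/spdk.sock socket that waitforlisten polls above; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py):

  # start the target paused so a scheduler can be picked before init
  /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_set_scheduler dynamic
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init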
00:10:21.289 18:37:21 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.289 18:37:21 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:21.289 18:37:21 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:21.289 18:37:21 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.289 18:37:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:21.289 ************************************ 00:10:21.289 START TEST scheduler_create_thread 00:10:21.289 ************************************ 00:10:21.289 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:10:21.289 18:37:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:21.289 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.289 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:21.289 2 00:10:21.289 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.289 18:37:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:21.289 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.289 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:21.289 3 00:10:21.289 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.289 18:37:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:21.289 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.289 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:21.289 4 00:10:21.289 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.289 18:37:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:21.289 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.289 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:21.548 5 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:21.548 6 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:21.548 7 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:21.548 8 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:21.548 9 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:21.548 10 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.548 18:37:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:22.952 18:37:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.952 18:37:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:22.952 18:37:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:22.952 18:37:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.952 18:37:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:23.519 18:37:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.519 18:37:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:23.519 18:37:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.519 18:37:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:24.455 18:37:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.455 18:37:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:24.455 18:37:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:24.455 18:37:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.455 18:37:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:25.393 ************************************ 00:10:25.393 END TEST scheduler_create_thread 00:10:25.393 ************************************ 00:10:25.393 18:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.393 00:10:25.393 real 0m3.902s 00:10:25.393 user 0m0.018s 00:10:25.393 sys 0m0.010s 00:10:25.393 18:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:25.393 18:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:25.393 18:37:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:25.393 18:37:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 113862 00:10:25.393 18:37:25 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 113862 ']' 00:10:25.393 18:37:25 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 113862 00:10:25.393 18:37:25 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:10:25.393 18:37:25 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:25.393 18:37:25 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 113862 00:10:25.393 killing process with pid 113862 00:10:25.394 18:37:25 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:10:25.394 18:37:25 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:10:25.394 18:37:25 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 113862' 00:10:25.394 18:37:25 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 113862 00:10:25.394 18:37:25 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 113862 00:10:25.652 [2024-07-25 18:37:26.104053] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
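[editor's note] The scheduler_create_thread subtest above exercises the scheduler plugin RPCs end to end: busy and idle threads pinned to each core, an unpinned thread whose activity is raised at runtime, and a thread that is created only to be deleted again. A condensed sketch of the same calls, with the flags taken verbatim from the trace (assuming rpc_cmd resolves to scripts/rpc.py against the scheduler app's socket, as it does in the script):

  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned  -m 0x1 -a 0      # idle thread pinned to core 0
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30        # unpinned, ~30% active
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0) # unpinned, starts idle
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50              # raise its activity to 50
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"                     # and remove it again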
00:10:27.031 00:10:27.031 real 0m7.306s 00:10:27.031 user 0m14.069s 00:10:27.031 sys 0m0.631s 00:10:27.031 18:37:27 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:27.031 18:37:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:27.031 ************************************ 00:10:27.031 END TEST event_scheduler 00:10:27.031 ************************************ 00:10:27.289 18:37:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:27.289 18:37:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:27.289 18:37:27 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:27.289 18:37:27 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:27.289 18:37:27 event -- common/autotest_common.sh@10 -- # set +x 00:10:27.289 ************************************ 00:10:27.289 START TEST app_repeat 00:10:27.289 ************************************ 00:10:27.289 18:37:27 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:10:27.289 18:37:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:27.289 18:37:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:27.289 18:37:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:27.289 18:37:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:27.289 18:37:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:27.289 18:37:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:27.289 18:37:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:27.289 18:37:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=114005 00:10:27.289 18:37:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:27.289 Process app_repeat pid: 114005 00:10:27.289 18:37:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 114005' 00:10:27.289 18:37:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:27.289 spdk_app_start Round 0 00:10:27.289 18:37:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:27.289 18:37:27 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:27.289 18:37:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 114005 /var/tmp/spdk-nbd.sock 00:10:27.289 18:37:27 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 114005 ']' 00:10:27.289 18:37:27 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:27.289 18:37:27 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:27.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:27.289 18:37:27 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:27.289 18:37:27 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:27.289 18:37:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:27.289 [2024-07-25 18:37:27.741715] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:27.289 [2024-07-25 18:37:27.742000] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114005 ] 00:10:27.548 [2024-07-25 18:37:27.935050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:27.807 [2024-07-25 18:37:28.179320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.807 [2024-07-25 18:37:28.179335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.375 18:37:28 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:28.375 18:37:28 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:10:28.375 18:37:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:28.633 Malloc0 00:10:28.633 18:37:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:28.892 Malloc1 00:10:28.892 18:37:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:28.892 18:37:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:28.892 18:37:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:28.892 18:37:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:28.892 18:37:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:28.892 18:37:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:28.892 18:37:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:28.892 18:37:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:28.892 18:37:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:28.892 18:37:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:28.892 18:37:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:28.892 18:37:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:28.892 18:37:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:28.892 18:37:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:28.892 18:37:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:28.892 18:37:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:29.150 /dev/nbd0 00:10:29.150 18:37:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:29.150 18:37:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:29.150 18:37:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:29.150 18:37:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:10:29.150 18:37:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:29.150 18:37:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:29.150 18:37:29 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:29.150 18:37:29 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:10:29.150 18:37:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:29.150 18:37:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:29.150 18:37:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:29.150 1+0 records in 00:10:29.150 1+0 records out 00:10:29.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347304 s, 11.8 MB/s 00:10:29.150 18:37:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:29.150 18:37:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:10:29.150 18:37:29 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:29.150 18:37:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:29.150 18:37:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:10:29.150 18:37:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:29.150 18:37:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:29.150 18:37:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:29.409 /dev/nbd1 00:10:29.409 18:37:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:29.409 18:37:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:29.409 18:37:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:29.409 18:37:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:10:29.409 18:37:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:29.409 18:37:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:29.409 18:37:29 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:29.409 18:37:29 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:10:29.409 18:37:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:29.409 18:37:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:29.409 18:37:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:29.409 1+0 records in 00:10:29.409 1+0 records out 00:10:29.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348005 s, 11.8 MB/s 00:10:29.409 18:37:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:29.409 18:37:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:10:29.409 18:37:29 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:29.409 18:37:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:29.409 18:37:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:10:29.409 18:37:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:29.409 18:37:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:29.409 18:37:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:29.409 18:37:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:10:29.409 18:37:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:29.668 18:37:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:29.668 { 00:10:29.668 "nbd_device": "/dev/nbd0", 00:10:29.668 "bdev_name": "Malloc0" 00:10:29.668 }, 00:10:29.668 { 00:10:29.668 "nbd_device": "/dev/nbd1", 00:10:29.668 "bdev_name": "Malloc1" 00:10:29.668 } 00:10:29.668 ]' 00:10:29.668 18:37:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:29.668 { 00:10:29.668 "nbd_device": "/dev/nbd0", 00:10:29.668 "bdev_name": "Malloc0" 00:10:29.668 }, 00:10:29.668 { 00:10:29.668 "nbd_device": "/dev/nbd1", 00:10:29.668 "bdev_name": "Malloc1" 00:10:29.668 } 00:10:29.668 ]' 00:10:29.668 18:37:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:29.668 18:37:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:29.668 /dev/nbd1' 00:10:29.668 18:37:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:29.668 18:37:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:29.668 /dev/nbd1' 00:10:29.668 18:37:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:29.668 18:37:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:29.668 18:37:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:29.668 18:37:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:29.668 18:37:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:29.668 18:37:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:29.668 18:37:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:29.668 18:37:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:29.668 18:37:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:29.668 18:37:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:29.668 18:37:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:29.668 256+0 records in 00:10:29.668 256+0 records out 00:10:29.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00935244 s, 112 MB/s 00:10:29.668 18:37:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:29.668 18:37:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:29.927 256+0 records in 00:10:29.927 256+0 records out 00:10:29.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264236 s, 39.7 MB/s 00:10:29.927 18:37:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:29.927 18:37:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:29.927 256+0 records in 00:10:29.927 256+0 records out 00:10:29.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262242 s, 40.0 MB/s 00:10:29.927 18:37:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:29.927 18:37:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:29.927 18:37:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:29.927 18:37:30 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:29.927 18:37:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:29.927 18:37:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:29.927 18:37:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:29.927 18:37:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:29.927 18:37:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:29.927 18:37:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:29.927 18:37:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:29.927 18:37:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:29.927 18:37:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:29.927 18:37:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:29.927 18:37:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:29.927 18:37:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:29.927 18:37:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:29.927 18:37:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:29.927 18:37:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:30.185 18:37:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:30.185 18:37:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:30.185 18:37:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:30.185 18:37:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:30.185 18:37:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:30.186 18:37:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:30.186 18:37:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:30.186 18:37:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:30.186 18:37:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:30.186 18:37:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:30.444 18:37:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:30.444 18:37:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:30.444 18:37:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:30.444 18:37:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:30.444 18:37:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:30.444 18:37:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:30.444 18:37:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:30.444 18:37:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:30.444 18:37:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:30.444 18:37:30 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:30.444 18:37:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:30.702 18:37:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:30.702 18:37:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:30.702 18:37:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:30.702 18:37:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:30.702 18:37:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:30.702 18:37:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:30.702 18:37:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:30.702 18:37:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:30.702 18:37:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:30.702 18:37:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:30.702 18:37:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:30.702 18:37:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:30.702 18:37:31 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:31.269 18:37:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:33.172 [2024-07-25 18:37:33.260834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:33.172 [2024-07-25 18:37:33.483595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.172 [2024-07-25 18:37:33.483603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.172 [2024-07-25 18:37:33.723633] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:33.172 [2024-07-25 18:37:33.723737] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:34.145 18:37:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:34.145 spdk_app_start Round 1 00:10:34.145 18:37:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:34.145 18:37:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 114005 /var/tmp/spdk-nbd.sock 00:10:34.145 18:37:34 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 114005 ']' 00:10:34.145 18:37:34 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:34.145 18:37:34 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:34.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:34.145 18:37:34 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:10:34.145 18:37:34 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:34.145 18:37:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:34.420 18:37:34 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:34.420 18:37:34 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:10:34.420 18:37:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:34.678 Malloc0 00:10:34.678 18:37:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:35.246 Malloc1 00:10:35.246 18:37:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:35.246 18:37:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:35.246 18:37:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:35.246 18:37:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:35.246 18:37:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:35.246 18:37:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:35.246 18:37:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:35.246 18:37:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:35.246 18:37:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:35.246 18:37:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:35.246 18:37:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:35.246 18:37:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:35.246 18:37:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:35.246 18:37:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:35.246 18:37:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:35.246 18:37:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:35.246 /dev/nbd0 00:10:35.246 18:37:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:35.246 18:37:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:35.246 18:37:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:35.246 18:37:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:10:35.247 18:37:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:35.247 18:37:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:35.247 18:37:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:35.247 18:37:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:10:35.247 18:37:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:35.247 18:37:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:35.247 18:37:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:35.247 1+0 records in 00:10:35.247 1+0 records out 
00:10:35.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213919 s, 19.1 MB/s 00:10:35.247 18:37:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:35.247 18:37:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:10:35.247 18:37:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:35.247 18:37:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:35.247 18:37:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:10:35.247 18:37:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:35.247 18:37:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:35.247 18:37:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:35.506 /dev/nbd1 00:10:35.764 18:37:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:35.764 18:37:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:35.764 18:37:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:35.764 18:37:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:10:35.764 18:37:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:35.765 18:37:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:35.765 18:37:36 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:35.765 18:37:36 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:10:35.765 18:37:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:35.765 18:37:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:35.765 18:37:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:35.765 1+0 records in 00:10:35.765 1+0 records out 00:10:35.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488082 s, 8.4 MB/s 00:10:35.765 18:37:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:35.765 18:37:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:10:35.765 18:37:36 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:35.765 18:37:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:35.765 18:37:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:10:35.765 18:37:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:35.765 18:37:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:35.765 18:37:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:35.765 18:37:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:35.765 18:37:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:36.022 { 00:10:36.022 "nbd_device": "/dev/nbd0", 00:10:36.022 "bdev_name": "Malloc0" 00:10:36.022 }, 00:10:36.022 { 00:10:36.022 "nbd_device": "/dev/nbd1", 00:10:36.022 "bdev_name": "Malloc1" 00:10:36.022 } 
00:10:36.022 ]' 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:36.022 { 00:10:36.022 "nbd_device": "/dev/nbd0", 00:10:36.022 "bdev_name": "Malloc0" 00:10:36.022 }, 00:10:36.022 { 00:10:36.022 "nbd_device": "/dev/nbd1", 00:10:36.022 "bdev_name": "Malloc1" 00:10:36.022 } 00:10:36.022 ]' 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:36.022 /dev/nbd1' 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:36.022 /dev/nbd1' 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:36.022 256+0 records in 00:10:36.022 256+0 records out 00:10:36.022 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0095405 s, 110 MB/s 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:36.022 256+0 records in 00:10:36.022 256+0 records out 00:10:36.022 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257497 s, 40.7 MB/s 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:36.022 256+0 records in 00:10:36.022 256+0 records out 00:10:36.022 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288356 s, 36.4 MB/s 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:36.022 18:37:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:36.023 18:37:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:36.023 18:37:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:36.023 18:37:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:36.023 18:37:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:36.023 18:37:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:36.023 18:37:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:36.023 18:37:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:36.023 18:37:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:36.023 18:37:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:36.589 18:37:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:36.589 18:37:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:36.589 18:37:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:36.589 18:37:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:36.589 18:37:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:36.589 18:37:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:36.589 18:37:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:36.589 18:37:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:36.589 18:37:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:36.589 18:37:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:36.589 18:37:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:36.589 18:37:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:36.589 18:37:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:36.589 18:37:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:36.589 18:37:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:36.589 18:37:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:36.847 18:37:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:36.847 18:37:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:36.847 18:37:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:36.847 18:37:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:36.847 18:37:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:37.106 18:37:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:37.106 18:37:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:37.106 18:37:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:10:37.106 18:37:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:37.106 18:37:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:37.106 18:37:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:37.106 18:37:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:37.106 18:37:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:37.106 18:37:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:37.106 18:37:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:37.106 18:37:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:37.106 18:37:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:37.106 18:37:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:37.674 18:37:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:39.049 [2024-07-25 18:37:39.572096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:39.308 [2024-07-25 18:37:39.786141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.308 [2024-07-25 18:37:39.786145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.567 [2024-07-25 18:37:40.023199] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:39.567 [2024-07-25 18:37:40.023346] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:40.504 18:37:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:40.504 spdk_app_start Round 2 00:10:40.504 18:37:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:40.504 18:37:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 114005 /var/tmp/spdk-nbd.sock 00:10:40.504 18:37:40 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 114005 ']' 00:10:40.504 18:37:40 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:40.504 18:37:40 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:40.504 18:37:40 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:10:40.504 18:37:40 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.504 18:37:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:40.763 18:37:41 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:40.763 18:37:41 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:10:40.763 18:37:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:41.021 Malloc0 00:10:41.021 18:37:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:41.280 Malloc1 00:10:41.280 18:37:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:41.280 18:37:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:41.280 18:37:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:41.280 18:37:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:41.280 18:37:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:41.280 18:37:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:41.280 18:37:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:41.280 18:37:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:41.280 18:37:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:41.280 18:37:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:41.280 18:37:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:41.280 18:37:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:41.280 18:37:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:41.280 18:37:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:41.280 18:37:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:41.280 18:37:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:41.539 /dev/nbd0 00:10:41.539 18:37:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:41.539 18:37:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:41.539 18:37:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:41.539 18:37:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:10:41.539 18:37:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:41.539 18:37:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:41.539 18:37:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:41.539 18:37:42 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:10:41.539 18:37:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:41.539 18:37:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:41.539 18:37:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:41.539 1+0 records in 00:10:41.539 1+0 records out 
00:10:41.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310105 s, 13.2 MB/s 00:10:41.540 18:37:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:41.540 18:37:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:10:41.540 18:37:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:41.540 18:37:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:41.540 18:37:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:10:41.540 18:37:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:41.540 18:37:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:41.540 18:37:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:41.798 /dev/nbd1 00:10:42.058 18:37:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:42.058 18:37:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:42.058 18:37:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:42.058 18:37:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:10:42.058 18:37:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:42.058 18:37:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:42.058 18:37:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:42.058 18:37:42 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:10:42.058 18:37:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:42.058 18:37:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:42.058 18:37:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:42.058 1+0 records in 00:10:42.058 1+0 records out 00:10:42.058 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286792 s, 14.3 MB/s 00:10:42.058 18:37:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:42.058 18:37:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:10:42.058 18:37:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:42.058 18:37:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:42.058 18:37:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:10:42.058 18:37:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:42.058 18:37:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:42.058 18:37:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:42.058 18:37:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:42.058 18:37:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:42.317 18:37:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:42.317 { 00:10:42.317 "nbd_device": "/dev/nbd0", 00:10:42.317 "bdev_name": "Malloc0" 00:10:42.317 }, 00:10:42.318 { 00:10:42.318 "nbd_device": "/dev/nbd1", 00:10:42.318 "bdev_name": "Malloc1" 00:10:42.318 } 
00:10:42.318 ]' 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:42.318 { 00:10:42.318 "nbd_device": "/dev/nbd0", 00:10:42.318 "bdev_name": "Malloc0" 00:10:42.318 }, 00:10:42.318 { 00:10:42.318 "nbd_device": "/dev/nbd1", 00:10:42.318 "bdev_name": "Malloc1" 00:10:42.318 } 00:10:42.318 ]' 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:42.318 /dev/nbd1' 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:42.318 /dev/nbd1' 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:42.318 256+0 records in 00:10:42.318 256+0 records out 00:10:42.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119707 s, 87.6 MB/s 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:42.318 256+0 records in 00:10:42.318 256+0 records out 00:10:42.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270877 s, 38.7 MB/s 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:42.318 256+0 records in 00:10:42.318 256+0 records out 00:10:42.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271862 s, 38.6 MB/s 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:42.318 18:37:42 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:42.318 18:37:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:42.577 18:37:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:42.577 18:37:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:42.577 18:37:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:42.577 18:37:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:42.577 18:37:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:42.577 18:37:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:42.577 18:37:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:42.577 18:37:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:42.577 18:37:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:42.577 18:37:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:42.836 18:37:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:43.094 18:37:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:43.094 18:37:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:43.094 18:37:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:43.094 18:37:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:43.094 18:37:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:43.094 18:37:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:43.094 18:37:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:43.094 18:37:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:43.094 18:37:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:43.094 18:37:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:43.353 18:37:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:43.353 18:37:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:43.353 18:37:43 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:10:43.353 18:37:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:43.353 18:37:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:43.353 18:37:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:43.353 18:37:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:43.353 18:37:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:43.353 18:37:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:43.353 18:37:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:43.353 18:37:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:43.353 18:37:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:43.353 18:37:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:43.920 18:37:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:45.298 [2024-07-25 18:37:45.807063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:45.619 [2024-07-25 18:37:46.026161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.619 [2024-07-25 18:37:46.026165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.878 [2024-07-25 18:37:46.264677] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:45.878 [2024-07-25 18:37:46.264795] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:46.814 18:37:47 event.app_repeat -- event/event.sh@38 -- # waitforlisten 114005 /var/tmp/spdk-nbd.sock 00:10:46.814 18:37:47 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 114005 ']' 00:10:46.814 18:37:47 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:46.814 18:37:47 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:46.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:46.814 18:37:47 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:10:46.814 18:37:47 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:46.814 18:37:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:47.073 18:37:47 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:47.073 18:37:47 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:10:47.073 18:37:47 event.app_repeat -- event/event.sh@39 -- # killprocess 114005 00:10:47.073 18:37:47 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 114005 ']' 00:10:47.073 18:37:47 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 114005 00:10:47.073 18:37:47 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:10:47.073 18:37:47 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:47.073 18:37:47 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114005 00:10:47.073 18:37:47 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:47.073 18:37:47 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:47.073 killing process with pid 114005 00:10:47.073 18:37:47 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114005' 00:10:47.073 18:37:47 event.app_repeat -- common/autotest_common.sh@969 -- # kill 114005 00:10:47.073 18:37:47 event.app_repeat -- common/autotest_common.sh@974 -- # wait 114005 00:10:48.450 spdk_app_start is called in Round 0. 00:10:48.450 Shutdown signal received, stop current app iteration 00:10:48.450 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:10:48.450 spdk_app_start is called in Round 1. 00:10:48.450 Shutdown signal received, stop current app iteration 00:10:48.450 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:10:48.450 spdk_app_start is called in Round 2. 00:10:48.450 Shutdown signal received, stop current app iteration 00:10:48.450 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:10:48.450 spdk_app_start is called in Round 3. 00:10:48.450 Shutdown signal received, stop current app iteration 00:10:48.450 18:37:48 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:48.450 18:37:48 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:48.450 00:10:48.450 real 0m21.264s 00:10:48.450 user 0m44.018s 00:10:48.450 sys 0m3.831s 00:10:48.450 18:37:48 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.450 18:37:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:48.450 ************************************ 00:10:48.450 END TEST app_repeat 00:10:48.450 ************************************ 00:10:48.450 18:37:48 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:48.450 18:37:48 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:48.450 18:37:48 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:48.450 18:37:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.450 18:37:48 event -- common/autotest_common.sh@10 -- # set +x 00:10:48.450 ************************************ 00:10:48.450 START TEST cpu_locks 00:10:48.450 ************************************ 00:10:48.450 18:37:49 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:48.709 * Looking for test storage... 
00:10:48.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:48.709 18:37:49 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:48.709 18:37:49 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:48.709 18:37:49 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:48.709 18:37:49 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:48.709 18:37:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:48.709 18:37:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.709 18:37:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:48.709 ************************************ 00:10:48.709 START TEST default_locks 00:10:48.709 ************************************ 00:10:48.709 18:37:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:10:48.709 18:37:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=114544 00:10:48.709 18:37:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:48.709 18:37:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 114544 00:10:48.709 18:37:49 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 114544 ']' 00:10:48.709 18:37:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.709 18:37:49 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:48.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.709 18:37:49 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.709 18:37:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:48.709 18:37:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:48.709 [2024-07-25 18:37:49.225145] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:48.709 [2024-07-25 18:37:49.226069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114544 ] 00:10:48.969 [2024-07-25 18:37:49.419602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.227 [2024-07-25 18:37:49.638845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.164 18:37:50 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:50.164 18:37:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:10:50.164 18:37:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 114544 00:10:50.164 18:37:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 114544 00:10:50.164 18:37:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:50.423 18:37:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 114544 00:10:50.423 18:37:50 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 114544 ']' 00:10:50.423 18:37:50 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 114544 00:10:50.423 18:37:50 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:10:50.423 18:37:50 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:50.423 18:37:50 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114544 00:10:50.423 18:37:50 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:50.423 18:37:50 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:50.423 killing process with pid 114544 00:10:50.423 18:37:50 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114544' 00:10:50.423 18:37:50 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 114544 00:10:50.423 18:37:50 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 114544 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 114544 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 114544 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 114544 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 114544 ']' 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.723 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:53.723 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (114544) - No such process 00:10:53.723 ERROR: process (pid: 114544) is no longer running 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:53.723 00:10:53.723 real 0m4.568s 00:10:53.723 user 0m4.493s 00:10:53.723 sys 0m0.837s 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.723 18:37:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:53.723 ************************************ 00:10:53.723 END TEST default_locks 00:10:53.723 ************************************ 00:10:53.723 18:37:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:53.723 18:37:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:53.723 18:37:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.723 18:37:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:53.723 ************************************ 00:10:53.723 START TEST default_locks_via_rpc 00:10:53.723 ************************************ 00:10:53.723 18:37:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:10:53.723 18:37:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=114636 00:10:53.723 18:37:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:53.723 18:37:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 114636 00:10:53.723 18:37:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 114636 ']' 00:10:53.723 18:37:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.723 18:37:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.723 18:37:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.723 18:37:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.723 18:37:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.723 [2024-07-25 18:37:53.859999] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:53.723 [2024-07-25 18:37:53.860227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114636 ] 00:10:53.723 [2024-07-25 18:37:54.040173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.723 [2024-07-25 18:37:54.256156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.659 18:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.659 18:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:54.659 18:37:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:54.659 18:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.659 18:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.659 18:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.659 18:37:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:54.659 18:37:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:54.659 18:37:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:54.659 18:37:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:54.659 18:37:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:54.659 18:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.659 18:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.659 18:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.659 18:37:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 114636 00:10:54.659 18:37:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 114636 00:10:54.659 18:37:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:55.226 18:37:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 114636 00:10:55.226 18:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 114636 ']' 00:10:55.226 18:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 114636 00:10:55.226 18:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:10:55.226 18:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:55.226 18:37:55 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114636 00:10:55.226 18:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:55.226 18:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:55.226 killing process with pid 114636 00:10:55.226 18:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114636' 00:10:55.226 18:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 114636 00:10:55.226 18:37:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 114636 00:10:57.759 00:10:57.759 real 0m4.483s 00:10:57.759 user 0m4.382s 00:10:57.759 sys 0m0.854s 00:10:57.759 18:37:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.759 18:37:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.759 ************************************ 00:10:57.759 END TEST default_locks_via_rpc 00:10:57.759 ************************************ 00:10:57.759 18:37:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:57.759 18:37:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:57.759 18:37:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.759 18:37:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:57.759 ************************************ 00:10:57.759 START TEST non_locking_app_on_locked_coremask 00:10:57.759 ************************************ 00:10:57.759 18:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:10:57.759 18:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=114723 00:10:57.759 18:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 114723 /var/tmp/spdk.sock 00:10:57.759 18:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 114723 ']' 00:10:57.759 18:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.759 18:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:57.759 18:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.759 18:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:57.759 18:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:57.759 18:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:58.018 [2024-07-25 18:37:58.405517] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:58.018 [2024-07-25 18:37:58.405700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114723 ] 00:10:58.018 [2024-07-25 18:37:58.567946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.277 [2024-07-25 18:37:58.787291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:59.214 18:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:59.214 18:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:59.214 18:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=114751 00:10:59.214 18:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 114751 /var/tmp/spdk2.sock 00:10:59.214 18:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:59.214 18:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 114751 ']' 00:10:59.214 18:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:59.214 18:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:59.214 18:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:59.214 18:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:59.214 18:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:59.214 [2024-07-25 18:37:59.772568] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:59.214 [2024-07-25 18:37:59.773105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114751 ] 00:10:59.489 [2024-07-25 18:37:59.948277] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:59.489 [2024-07-25 18:37:59.948350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.055 [2024-07-25 18:38:00.379703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.957 18:38:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:01.957 18:38:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:11:01.957 18:38:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 114723 00:11:01.957 18:38:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 114723 00:11:01.957 18:38:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:02.525 18:38:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 114723 00:11:02.525 18:38:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 114723 ']' 00:11:02.525 18:38:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 114723 00:11:02.525 18:38:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:11:02.525 18:38:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:02.525 18:38:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114723 00:11:02.525 killing process with pid 114723 00:11:02.525 18:38:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:02.525 18:38:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:02.525 18:38:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114723' 00:11:02.525 18:38:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 114723 00:11:02.525 18:38:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 114723 00:11:09.092 18:38:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 114751 00:11:09.092 18:38:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 114751 ']' 00:11:09.092 18:38:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 114751 00:11:09.092 18:38:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:11:09.092 18:38:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:09.092 18:38:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114751 00:11:09.092 killing process with pid 114751 00:11:09.092 18:38:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:09.092 18:38:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:09.092 18:38:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114751' 00:11:09.092 
18:38:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 114751 00:11:09.092 18:38:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 114751 00:11:10.996 ************************************ 00:11:10.996 END TEST non_locking_app_on_locked_coremask 00:11:10.996 ************************************ 00:11:10.996 00:11:10.996 real 0m12.810s 00:11:10.996 user 0m12.955s 00:11:10.996 sys 0m1.682s 00:11:10.996 18:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:10.996 18:38:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:10.996 18:38:11 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:10.996 18:38:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:10.996 18:38:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:10.996 18:38:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:10.996 ************************************ 00:11:10.996 START TEST locking_app_on_unlocked_coremask 00:11:10.996 ************************************ 00:11:10.996 18:38:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:11:10.996 18:38:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=114925 00:11:10.996 18:38:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 114925 /var/tmp/spdk.sock 00:11:10.996 18:38:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:10.996 18:38:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 114925 ']' 00:11:10.996 18:38:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.996 18:38:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:10.996 18:38:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.996 18:38:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:10.996 18:38:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:10.996 [2024-07-25 18:38:11.304497] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:10.996 [2024-07-25 18:38:11.304938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114925 ] 00:11:10.996 [2024-07-25 18:38:11.484059] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:10.996 [2024-07-25 18:38:11.484247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.255 [2024-07-25 18:38:11.710499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:12.191 18:38:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:12.191 18:38:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:11:12.191 18:38:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=114946 00:11:12.191 18:38:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 114946 /var/tmp/spdk2.sock 00:11:12.191 18:38:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:12.191 18:38:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 114946 ']' 00:11:12.191 18:38:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:12.191 18:38:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:12.191 18:38:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:12.191 18:38:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:12.191 18:38:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:12.191 [2024-07-25 18:38:12.696039] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:12.191 [2024-07-25 18:38:12.697167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114946 ] 00:11:12.449 [2024-07-25 18:38:12.869592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.015 [2024-07-25 18:38:13.350757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.916 18:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:14.916 18:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:11:14.916 18:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 114946 00:11:14.916 18:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 114946 00:11:14.916 18:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:15.482 18:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 114925 00:11:15.482 18:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 114925 ']' 00:11:15.482 18:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 114925 00:11:15.482 18:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:11:15.482 18:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:15.482 18:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114925 00:11:15.482 18:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:15.482 18:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:15.482 18:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114925' 00:11:15.482 killing process with pid 114925 00:11:15.482 18:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 114925 00:11:15.482 18:38:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 114925 00:11:22.061 18:38:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 114946 00:11:22.061 18:38:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 114946 ']' 00:11:22.061 18:38:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 114946 00:11:22.061 18:38:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:11:22.061 18:38:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:22.061 18:38:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 114946 00:11:22.061 18:38:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:22.061 18:38:21 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:22.061 18:38:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 114946' 00:11:22.061 killing process with pid 114946 00:11:22.061 18:38:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 114946 00:11:22.061 18:38:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 114946 00:11:24.601 ************************************ 00:11:24.601 END TEST locking_app_on_unlocked_coremask 00:11:24.601 ************************************ 00:11:24.601 00:11:24.601 real 0m13.682s 00:11:24.601 user 0m13.790s 00:11:24.601 sys 0m1.782s 00:11:24.601 18:38:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:24.602 18:38:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:24.602 18:38:24 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:24.602 18:38:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:24.602 18:38:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:24.602 18:38:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:24.602 ************************************ 00:11:24.602 START TEST locking_app_on_locked_coremask 00:11:24.602 ************************************ 00:11:24.602 18:38:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:11:24.602 18:38:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=115128 00:11:24.602 18:38:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 115128 /var/tmp/spdk.sock 00:11:24.602 18:38:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 115128 ']' 00:11:24.602 18:38:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:24.602 18:38:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.602 18:38:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:24.602 18:38:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.602 18:38:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:24.602 18:38:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:24.602 [2024-07-25 18:38:25.049329] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:24.602 [2024-07-25 18:38:25.049789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115128 ] 00:11:24.860 [2024-07-25 18:38:25.217273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.119 [2024-07-25 18:38:25.484732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.056 18:38:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:26.056 18:38:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:11:26.056 18:38:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:26.056 18:38:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=115160 00:11:26.056 18:38:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 115160 /var/tmp/spdk2.sock 00:11:26.056 18:38:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:11:26.057 18:38:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 115160 /var/tmp/spdk2.sock 00:11:26.057 18:38:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:11:26.057 18:38:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:26.057 18:38:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:11:26.057 18:38:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:26.057 18:38:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 115160 /var/tmp/spdk2.sock 00:11:26.057 18:38:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 115160 ']' 00:11:26.057 18:38:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:26.057 18:38:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:26.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:26.057 18:38:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:26.057 18:38:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:26.057 18:38:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:26.057 [2024-07-25 18:38:26.550306] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:26.057 [2024-07-25 18:38:26.550712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115160 ] 00:11:26.316 [2024-07-25 18:38:26.715109] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 115128 has claimed it. 00:11:26.316 [2024-07-25 18:38:26.715193] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:26.884 ERROR: process (pid: 115160) is no longer running 00:11:26.884 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (115160) - No such process 00:11:26.884 18:38:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:26.884 18:38:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:11:26.884 18:38:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:11:26.884 18:38:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:26.884 18:38:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:26.884 18:38:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:26.884 18:38:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 115128 00:11:26.884 18:38:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 115128 00:11:26.884 18:38:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:27.143 18:38:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 115128 00:11:27.143 18:38:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 115128 ']' 00:11:27.143 18:38:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 115128 00:11:27.143 18:38:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:11:27.143 18:38:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:27.143 18:38:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115128 00:11:27.143 18:38:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:27.143 18:38:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:27.143 18:38:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115128' 00:11:27.143 killing process with pid 115128 00:11:27.143 18:38:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 115128 00:11:27.143 18:38:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 115128 00:11:30.431 ************************************ 00:11:30.432 END TEST locking_app_on_locked_coremask 00:11:30.432 ************************************ 00:11:30.432 00:11:30.432 real 0m5.709s 00:11:30.432 user 0m5.870s 00:11:30.432 sys 0m1.014s 00:11:30.432 18:38:30 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:30.432 18:38:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:30.432 18:38:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:30.432 18:38:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:30.432 18:38:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:30.432 18:38:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:30.432 ************************************ 00:11:30.432 START TEST locking_overlapped_coremask 00:11:30.432 ************************************ 00:11:30.432 18:38:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:11:30.432 18:38:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=115239 00:11:30.432 18:38:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 115239 /var/tmp/spdk.sock 00:11:30.432 18:38:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:30.432 18:38:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 115239 ']' 00:11:30.432 18:38:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.432 18:38:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:30.432 18:38:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.432 18:38:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:30.432 18:38:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:30.432 [2024-07-25 18:38:30.855083] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:30.432 [2024-07-25 18:38:30.855569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115239 ] 00:11:30.690 [2024-07-25 18:38:31.051597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:30.949 [2024-07-25 18:38:31.287606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.949 [2024-07-25 18:38:31.287782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.949 [2024-07-25 18:38:31.287787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.886 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:31.886 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:11:31.886 18:38:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=115267 00:11:31.886 18:38:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 115267 /var/tmp/spdk2.sock 00:11:31.886 18:38:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:31.886 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:11:31.886 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 115267 /var/tmp/spdk2.sock 00:11:31.886 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:11:31.886 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.886 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:11:31.886 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:31.886 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 115267 /var/tmp/spdk2.sock 00:11:31.886 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 115267 ']' 00:11:31.886 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:31.886 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:31.886 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:31.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:31.886 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:31.886 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:31.886 [2024-07-25 18:38:32.313249] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:31.886 [2024-07-25 18:38:32.313753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115267 ] 00:11:32.144 [2024-07-25 18:38:32.515018] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 115239 has claimed it. 00:11:32.144 [2024-07-25 18:38:32.515122] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:32.403 ERROR: process (pid: 115267) is no longer running 00:11:32.403 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (115267) - No such process 00:11:32.403 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:32.403 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:11:32.403 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:11:32.403 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:32.403 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:32.403 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:32.403 18:38:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:32.403 18:38:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:32.403 18:38:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:32.404 18:38:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:32.404 18:38:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 115239 00:11:32.404 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 115239 ']' 00:11:32.404 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 115239 00:11:32.404 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:11:32.662 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:32.662 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115239 00:11:32.662 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:32.662 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:32.662 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115239' 00:11:32.662 killing process with pid 115239 00:11:32.662 18:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 115239 00:11:32.662 18:38:33 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 115239 00:11:35.947 ************************************ 00:11:35.947 END TEST locking_overlapped_coremask 00:11:35.947 ************************************ 00:11:35.947 00:11:35.947 real 0m5.197s 00:11:35.947 user 0m13.396s 00:11:35.947 sys 0m0.886s 00:11:35.947 18:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:35.947 18:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:35.947 18:38:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:35.947 18:38:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:35.947 18:38:36 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:35.947 18:38:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:35.947 ************************************ 00:11:35.947 START TEST locking_overlapped_coremask_via_rpc 00:11:35.947 ************************************ 00:11:35.947 18:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:11:35.947 18:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=115343 00:11:35.947 18:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 115343 /var/tmp/spdk.sock 00:11:35.947 18:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:35.947 18:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 115343 ']' 00:11:35.947 18:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.947 18:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:35.947 18:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.947 18:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:35.947 18:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.947 [2024-07-25 18:38:36.103721] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:35.947 [2024-07-25 18:38:36.104162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115343 ] 00:11:35.947 [2024-07-25 18:38:36.277501] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
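For reference, the scenario driven by this test can be reproduced by hand. A minimal sketch, assuming a built ./build/bin/spdk_tgt, the default RPC socket, and no other SPDK process holding the lock files:

$ ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &     # start without claiming core locks
$ ls /var/tmp/spdk_cpu_lock_* 2>/dev/null                   # nothing claimed yet
$ ./scripts/rpc.py framework_enable_cpumask_locks           # claim the locks for cores 0-2 at runtime
$ ls /var/tmp/spdk_cpu_lock_*
/var/tmp/spdk_cpu_lock_000  /var/tmp/spdk_cpu_lock_001  /var/tmp/spdk_cpu_lock_002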
00:11:35.947 [2024-07-25 18:38:36.277822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:35.947 [2024-07-25 18:38:36.518056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.947 [2024-07-25 18:38:36.518220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.947 [2024-07-25 18:38:36.518224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:36.883 18:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:36.883 18:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:36.883 18:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=115366 00:11:36.884 18:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:36.884 18:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 115366 /var/tmp/spdk2.sock 00:11:36.884 18:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 115366 ']' 00:11:36.884 18:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:36.884 18:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:36.884 18:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:36.884 18:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:36.884 18:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.142 [2024-07-25 18:38:37.540309] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:37.142 [2024-07-25 18:38:37.541409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115366 ] 00:11:37.401 [2024-07-25 18:38:37.759301] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:37.401 [2024-07-25 18:38:37.759363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:37.968 [2024-07-25 18:38:38.243054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.968 [2024-07-25 18:38:38.243193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.968 [2024-07-25 18:38:38.243199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.872 [2024-07-25 18:38:40.221893] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 115343 has claimed it. 
00:11:39.872 request: 00:11:39.872 { 00:11:39.872 "method": "framework_enable_cpumask_locks", 00:11:39.872 "req_id": 1 00:11:39.872 } 00:11:39.872 Got JSON-RPC error response 00:11:39.872 response: 00:11:39.872 { 00:11:39.872 "code": -32603, 00:11:39.872 "message": "Failed to claim CPU core: 2" 00:11:39.872 } 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 115343 /var/tmp/spdk.sock 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 115343 ']' 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 115366 /var/tmp/spdk2.sock 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 115366 ']' 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:39.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
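The -32603 response above is the expected outcome of this test: the second target (mask 0x1c, socket /var/tmp/spdk2.sock) cannot claim core 2 because pid 115343 (mask 0x7) already holds /var/tmp/spdk_cpu_lock_002. A sketch of the same two RPC calls issued by hand, assuming the sockets used above:

$ ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # first target claims cores 0-2
$ ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: 'Failed to claim CPU core: 2'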
00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:39.872 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.131 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:40.131 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:40.131 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:40.131 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:40.131 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:40.131 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:40.131 00:11:40.131 real 0m4.665s 00:11:40.131 user 0m1.449s 00:11:40.131 sys 0m0.286s 00:11:40.131 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.131 18:38:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.131 ************************************ 00:11:40.131 END TEST locking_overlapped_coremask_via_rpc 00:11:40.131 ************************************ 00:11:40.390 18:38:40 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:40.390 18:38:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 115343 ]] 00:11:40.390 18:38:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 115343 00:11:40.390 18:38:40 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 115343 ']' 00:11:40.390 18:38:40 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 115343 00:11:40.390 18:38:40 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:11:40.390 18:38:40 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:40.390 18:38:40 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115343 00:11:40.390 killing process with pid 115343 00:11:40.390 18:38:40 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:40.390 18:38:40 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:40.390 18:38:40 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115343' 00:11:40.390 18:38:40 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 115343 00:11:40.390 18:38:40 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 115343 00:11:43.714 18:38:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 115366 ]] 00:11:43.714 18:38:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 115366 00:11:43.714 18:38:43 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 115366 ']' 00:11:43.714 18:38:43 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 115366 00:11:43.714 18:38:43 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:11:43.714 18:38:43 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:11:43.714 18:38:43 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115366 00:11:43.714 killing process with pid 115366 00:11:43.714 18:38:43 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:11:43.714 18:38:43 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:11:43.714 18:38:43 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115366' 00:11:43.714 18:38:43 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 115366 00:11:43.714 18:38:43 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 115366 00:11:46.246 18:38:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:46.246 Process with pid 115343 is not found 00:11:46.246 18:38:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:46.246 18:38:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 115343 ]] 00:11:46.246 18:38:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 115343 00:11:46.246 18:38:46 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 115343 ']' 00:11:46.246 18:38:46 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 115343 00:11:46.246 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (115343) - No such process 00:11:46.246 18:38:46 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 115343 is not found' 00:11:46.246 18:38:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 115366 ]] 00:11:46.246 18:38:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 115366 00:11:46.246 18:38:46 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 115366 ']' 00:11:46.246 18:38:46 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 115366 00:11:46.246 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (115366) - No such process 00:11:46.246 Process with pid 115366 is not found 00:11:46.246 18:38:46 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 115366 is not found' 00:11:46.246 18:38:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:46.246 ************************************ 00:11:46.246 END TEST cpu_locks 00:11:46.246 ************************************ 00:11:46.246 00:11:46.246 real 0m57.601s 00:11:46.246 user 1m35.413s 00:11:46.246 sys 0m8.828s 00:11:46.246 18:38:46 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:46.246 18:38:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:46.246 ************************************ 00:11:46.246 END TEST event 00:11:46.246 ************************************ 00:11:46.246 00:11:46.246 real 1m32.625s 00:11:46.246 user 2m41.886s 00:11:46.246 sys 0m14.064s 00:11:46.246 18:38:46 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:46.246 18:38:46 event -- common/autotest_common.sh@10 -- # set +x 00:11:46.246 18:38:46 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:46.246 18:38:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:46.246 18:38:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.246 18:38:46 -- common/autotest_common.sh@10 -- # set +x 00:11:46.246 ************************************ 00:11:46.246 START TEST thread 00:11:46.246 ************************************ 00:11:46.246 18:38:46 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:46.507 * Looking for test 
storage... 00:11:46.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:11:46.507 18:38:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:46.507 18:38:46 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:11:46.507 18:38:46 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.507 18:38:46 thread -- common/autotest_common.sh@10 -- # set +x 00:11:46.507 ************************************ 00:11:46.507 START TEST thread_poller_perf 00:11:46.507 ************************************ 00:11:46.507 18:38:46 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:46.507 [2024-07-25 18:38:46.913748] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:46.507 [2024-07-25 18:38:46.914204] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115581 ] 00:11:46.766 [2024-07-25 18:38:47.103258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.025 [2024-07-25 18:38:47.400231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.025 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:11:48.400 ====================================== 00:11:48.400 busy:2110719998 (cyc) 00:11:48.400 total_run_count: 399000 00:11:48.400 tsc_hz: 2100000000 (cyc) 00:11:48.400 ====================================== 00:11:48.400 poller_cost: 5290 (cyc), 2519 (nsec) 00:11:48.400 ************************************ 00:11:48.400 END TEST thread_poller_perf 00:11:48.400 ************************************ 00:11:48.400 00:11:48.400 real 0m2.017s 00:11:48.400 user 0m1.747s 00:11:48.400 sys 0m0.168s 00:11:48.401 18:38:48 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:48.401 18:38:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:48.401 18:38:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:48.401 18:38:48 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:11:48.401 18:38:48 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:48.401 18:38:48 thread -- common/autotest_common.sh@10 -- # set +x 00:11:48.401 ************************************ 00:11:48.401 START TEST thread_poller_perf 00:11:48.401 ************************************ 00:11:48.401 18:38:48 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:48.659 [2024-07-25 18:38:48.990832] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:48.659 [2024-07-25 18:38:48.991247] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115631 ] 00:11:48.659 [2024-07-25 18:38:49.178510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.918 [2024-07-25 18:38:49.406423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.918 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:11:50.294 ====================================== 00:11:50.294 busy:2103395788 (cyc) 00:11:50.294 total_run_count: 5331000 00:11:50.294 tsc_hz: 2100000000 (cyc) 00:11:50.294 ====================================== 00:11:50.294 poller_cost: 394 (cyc), 187 (nsec) 00:11:50.552 00:11:50.552 real 0m1.939s 00:11:50.552 user 0m1.681s 00:11:50.552 sys 0m0.157s 00:11:50.552 18:38:50 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:50.552 18:38:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:50.552 ************************************ 00:11:50.552 END TEST thread_poller_perf 00:11:50.552 ************************************ 00:11:50.552 18:38:50 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:11:50.552 18:38:50 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:11:50.552 18:38:50 thread -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:50.552 18:38:50 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.552 18:38:50 thread -- common/autotest_common.sh@10 -- # set +x 00:11:50.552 ************************************ 00:11:50.552 START TEST thread_spdk_lock 00:11:50.552 ************************************ 00:11:50.552 18:38:50 thread.thread_spdk_lock -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:11:50.552 [2024-07-25 18:38:51.000212] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
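In the two poller_perf summaries above, poller_cost is simply the busy cycle count divided by total_run_count, converted to nanoseconds with the reported tsc_hz. A quick check of the first run's figures:

$ echo $(( 2110719998 / 399000 ))     # busy cycles / total_run_count
5290
$ echo $(( 5290 * 1000 / 2100 ))      # cycles to nsec at tsc_hz = 2.1 GHz
2519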
00:11:50.552 [2024-07-25 18:38:51.000661] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115682 ] 00:11:50.810 [2024-07-25 18:38:51.190202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:51.068 [2024-07-25 18:38:51.422986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.068 [2024-07-25 18:38:51.422990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.635 [2024-07-25 18:38:51.936485] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:51.635 [2024-07-25 18:38:51.936754] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:11:51.635 [2024-07-25 18:38:51.936829] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x562d3bd0d4c0 00:11:51.635 [2024-07-25 18:38:51.947218] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:51.635 [2024-07-25 18:38:51.947393] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:51.635 [2024-07-25 18:38:51.947524] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:51.894 Starting test contend 00:11:51.894 Worker Delay Wait us Hold us Total us 00:11:51.894 0 3 136358 192528 328887 00:11:51.894 1 5 62456 292912 355368 00:11:51.894 PASS test contend 00:11:51.894 Starting test hold_by_poller 00:11:51.894 PASS test hold_by_poller 00:11:51.894 Starting test hold_by_message 00:11:51.894 PASS test hold_by_message 00:11:51.894 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:11:51.894 100014 assertions passed 00:11:51.894 0 assertions failed 00:11:51.894 ************************************ 00:11:51.894 END TEST thread_spdk_lock 00:11:51.894 ************************************ 00:11:51.894 00:11:51.894 real 0m1.502s 00:11:51.894 user 0m1.753s 00:11:51.894 sys 0m0.173s 00:11:51.894 18:38:52 thread.thread_spdk_lock -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:51.894 18:38:52 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:11:52.153 ************************************ 00:11:52.153 END TEST thread 00:11:52.153 ************************************ 00:11:52.153 00:11:52.153 real 0m5.770s 00:11:52.153 user 0m5.319s 00:11:52.153 sys 0m0.682s 00:11:52.153 18:38:52 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:52.153 18:38:52 thread -- common/autotest_common.sh@10 -- # set +x 00:11:52.153 18:38:52 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:11:52.153 18:38:52 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:52.153 18:38:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:52.153 18:38:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:11:52.153 18:38:52 -- common/autotest_common.sh@10 -- # set +x 00:11:52.153 ************************************ 00:11:52.153 START TEST app_cmdline 00:11:52.153 ************************************ 00:11:52.153 18:38:52 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:52.153 * Looking for test storage... 00:11:52.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:52.153 18:38:52 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:52.153 18:38:52 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=115768 00:11:52.153 18:38:52 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:52.153 18:38:52 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 115768 00:11:52.153 18:38:52 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 115768 ']' 00:11:52.153 18:38:52 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.153 18:38:52 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:52.153 18:38:52 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.153 18:38:52 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:52.153 18:38:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:52.412 [2024-07-25 18:38:52.758751] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:52.412 [2024-07-25 18:38:52.759161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115768 ] 00:11:52.412 [2024-07-25 18:38:52.923818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.671 [2024-07-25 18:38:53.167325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.606 18:38:54 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:53.606 18:38:54 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:11:53.606 18:38:54 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:53.864 { 00:11:53.864 "version": "SPDK v24.09-pre git sha1 704257090", 00:11:53.864 "fields": { 00:11:53.864 "major": 24, 00:11:53.864 "minor": 9, 00:11:53.864 "patch": 0, 00:11:53.864 "suffix": "-pre", 00:11:53.864 "commit": "704257090" 00:11:53.864 } 00:11:53.864 } 00:11:53.864 18:38:54 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:53.864 18:38:54 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:53.864 18:38:54 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:53.864 18:38:54 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:53.864 18:38:54 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:53.864 18:38:54 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:53.864 18:38:54 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:53.864 18:38:54 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.864 18:38:54 app_cmdline -- common/autotest_common.sh@10 -- # 
set +x 00:11:53.864 18:38:54 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.864 18:38:54 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:53.864 18:38:54 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:53.864 18:38:54 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:53.864 18:38:54 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:11:53.864 18:38:54 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:53.864 18:38:54 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:53.864 18:38:54 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:53.864 18:38:54 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:53.864 18:38:54 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:53.864 18:38:54 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:53.864 18:38:54 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:53.864 18:38:54 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:53.864 18:38:54 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:53.864 18:38:54 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:54.123 request: 00:11:54.123 { 00:11:54.123 "method": "env_dpdk_get_mem_stats", 00:11:54.123 "req_id": 1 00:11:54.123 } 00:11:54.123 Got JSON-RPC error response 00:11:54.123 response: 00:11:54.123 { 00:11:54.123 "code": -32601, 00:11:54.123 "message": "Method not found" 00:11:54.123 } 00:11:54.123 18:38:54 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:11:54.123 18:38:54 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:54.123 18:38:54 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:54.123 18:38:54 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:54.123 18:38:54 app_cmdline -- app/cmdline.sh@1 -- # killprocess 115768 00:11:54.123 18:38:54 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 115768 ']' 00:11:54.123 18:38:54 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 115768 00:11:54.123 18:38:54 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:11:54.123 18:38:54 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:54.123 18:38:54 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115768 00:11:54.123 18:38:54 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:54.123 18:38:54 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:54.123 18:38:54 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115768' 00:11:54.123 killing process with pid 115768 00:11:54.123 18:38:54 app_cmdline -- common/autotest_common.sh@969 -- # kill 115768 00:11:54.123 18:38:54 app_cmdline -- common/autotest_common.sh@974 -- # wait 115768 00:11:57.409 ************************************ 00:11:57.409 END TEST app_cmdline 00:11:57.409 
************************************ 00:11:57.409 00:11:57.409 real 0m4.831s 00:11:57.409 user 0m5.011s 00:11:57.409 sys 0m0.792s 00:11:57.409 18:38:57 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.409 18:38:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:57.409 18:38:57 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:57.409 18:38:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:57.409 18:38:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.409 18:38:57 -- common/autotest_common.sh@10 -- # set +x 00:11:57.409 ************************************ 00:11:57.409 START TEST version 00:11:57.409 ************************************ 00:11:57.409 18:38:57 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:57.409 * Looking for test storage... 00:11:57.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:57.409 18:38:57 version -- app/version.sh@17 -- # get_header_version major 00:11:57.409 18:38:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:57.409 18:38:57 version -- app/version.sh@14 -- # cut -f2 00:11:57.409 18:38:57 version -- app/version.sh@14 -- # tr -d '"' 00:11:57.409 18:38:57 version -- app/version.sh@17 -- # major=24 00:11:57.409 18:38:57 version -- app/version.sh@18 -- # get_header_version minor 00:11:57.409 18:38:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:57.409 18:38:57 version -- app/version.sh@14 -- # cut -f2 00:11:57.409 18:38:57 version -- app/version.sh@14 -- # tr -d '"' 00:11:57.409 18:38:57 version -- app/version.sh@18 -- # minor=9 00:11:57.409 18:38:57 version -- app/version.sh@19 -- # get_header_version patch 00:11:57.409 18:38:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:57.409 18:38:57 version -- app/version.sh@14 -- # tr -d '"' 00:11:57.409 18:38:57 version -- app/version.sh@14 -- # cut -f2 00:11:57.409 18:38:57 version -- app/version.sh@19 -- # patch=0 00:11:57.409 18:38:57 version -- app/version.sh@20 -- # get_header_version suffix 00:11:57.409 18:38:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:57.409 18:38:57 version -- app/version.sh@14 -- # cut -f2 00:11:57.409 18:38:57 version -- app/version.sh@14 -- # tr -d '"' 00:11:57.409 18:38:57 version -- app/version.sh@20 -- # suffix=-pre 00:11:57.409 18:38:57 version -- app/version.sh@22 -- # version=24.9 00:11:57.409 18:38:57 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:57.409 18:38:57 version -- app/version.sh@28 -- # version=24.9rc0 00:11:57.409 18:38:57 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:57.409 18:38:57 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:57.409 18:38:57 version -- app/version.sh@30 -- # py_version=24.9rc0 00:11:57.409 18:38:57 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:11:57.409 00:11:57.409 real 0m0.186s 00:11:57.409 user 
0m0.100s 00:11:57.409 sys 0m0.128s 00:11:57.409 18:38:57 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.409 18:38:57 version -- common/autotest_common.sh@10 -- # set +x 00:11:57.409 ************************************ 00:11:57.409 END TEST version 00:11:57.409 ************************************ 00:11:57.409 18:38:57 -- spdk/autotest.sh@192 -- # '[' 1 -eq 1 ']' 00:11:57.409 18:38:57 -- spdk/autotest.sh@193 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:57.409 18:38:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:57.409 18:38:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.409 18:38:57 -- common/autotest_common.sh@10 -- # set +x 00:11:57.409 ************************************ 00:11:57.409 START TEST blockdev_general 00:11:57.409 ************************************ 00:11:57.409 18:38:57 blockdev_general -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:57.409 * Looking for test storage... 00:11:57.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:57.409 18:38:57 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@673 -- # uname -s 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@681 -- # test_type=bdev 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@682 -- # crypto_device= 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@683 -- # dek= 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@684 -- # env_ctx= 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@689 -- # [[ bdev == bdev ]] 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=115963 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:57.409 
18:38:57 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 115963 00:11:57.409 18:38:57 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:11:57.409 18:38:57 blockdev_general -- common/autotest_common.sh@831 -- # '[' -z 115963 ']' 00:11:57.410 18:38:57 blockdev_general -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.410 18:38:57 blockdev_general -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:57.410 18:38:57 blockdev_general -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.410 18:38:57 blockdev_general -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:57.410 18:38:57 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:57.410 [2024-07-25 18:38:57.947232] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:57.410 [2024-07-25 18:38:57.947498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115963 ] 00:11:57.668 [2024-07-25 18:38:58.131502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.927 [2024-07-25 18:38:58.364461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.493 18:38:58 blockdev_general -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:58.493 18:38:58 blockdev_general -- common/autotest_common.sh@864 -- # return 0 00:11:58.493 18:38:58 blockdev_general -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:11:58.493 18:38:58 blockdev_general -- bdev/blockdev.sh@695 -- # setup_bdev_conf 00:11:58.493 18:38:58 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:11:58.493 18:38:58 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.493 18:38:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:59.428 [2024-07-25 18:38:59.715508] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:59.428 [2024-07-25 18:38:59.715654] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:59.428 00:11:59.428 [2024-07-25 18:38:59.723419] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:59.428 [2024-07-25 18:38:59.723483] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:59.428 00:11:59.428 Malloc0 00:11:59.428 Malloc1 00:11:59.428 Malloc2 00:11:59.428 Malloc3 00:11:59.428 Malloc4 00:11:59.686 Malloc5 00:11:59.686 Malloc6 00:11:59.686 Malloc7 00:11:59.686 Malloc8 00:11:59.686 Malloc9 00:11:59.686 [2024-07-25 18:39:00.213445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:59.686 [2024-07-25 18:39:00.213549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.686 [2024-07-25 18:39:00.213587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:59.686 [2024-07-25 18:39:00.213652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.686 [2024-07-25 18:39:00.216285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
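The setup_bdev_conf step above is what builds the bdev topology exercised by the rest of blockdev.sh: plain Malloc bdevs, split bdevs carved out of Malloc1 and Malloc2, a passthru bdev (TestPT) claiming Malloc3, and later an AIO bdev backed by a small file. A rough hand-driven equivalent against a running spdk_tgt is sketched below; the rpc.py subcommands are the standard SPDK ones, but the sizes and the /tmp path are illustrative rather than taken from this run, and exact flags may differ between SPDK versions.

    # 32 MiB malloc bdev with 512-byte blocks (one of several the script creates)
    scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    # carve Malloc1 into two equal splits (Malloc1p0, Malloc1p1)
    scripts/rpc.py bdev_split_create Malloc1 2
    # carve Malloc2 into eight splits (Malloc2p0 .. Malloc2p7)
    scripts/rpc.py bdev_split_create Malloc2 8
    # register the passthru vbdev TestPT on top of Malloc3
    scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT
    # back an AIO bdev with a file, as the dd + bdev_aio_create calls further down do
    dd if=/dev/zero of=/tmp/aiofile bs=2048 count=5000
    scripts/rpc.py bdev_aio_create /tmp/aiofile AIO0 2048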
00:11:59.686 [2024-07-25 18:39:00.216333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:59.686 TestPT 00:11:59.686 18:39:00 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.686 18:39:00 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:11:59.944 5000+0 records in 00:11:59.944 5000+0 records out 00:11:59.944 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0356781 s, 287 MB/s 00:11:59.944 18:39:00 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:11:59.944 18:39:00 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.944 18:39:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:59.944 AIO0 00:11:59.944 18:39:00 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.944 18:39:00 blockdev_general -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:11:59.944 18:39:00 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.944 18:39:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:59.944 18:39:00 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.944 18:39:00 blockdev_general -- bdev/blockdev.sh@739 -- # cat 00:11:59.944 18:39:00 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:11:59.944 18:39:00 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.944 18:39:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:59.944 18:39:00 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.944 18:39:00 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:11:59.944 18:39:00 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.944 18:39:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:59.944 18:39:00 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.944 18:39:00 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:59.944 18:39:00 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.944 18:39:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:59.944 18:39:00 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.944 18:39:00 blockdev_general -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:11:59.944 18:39:00 blockdev_general -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:11:59.944 18:39:00 blockdev_general -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:11:59.944 18:39:00 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.944 18:39:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:00.204 18:39:00 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.204 18:39:00 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:00.204 18:39:00 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:00.206 18:39:00 blockdev_general -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "52c61ca7-1075-4064-91e3-15eaaf0dcdde"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "52c61ca7-1075-4064-91e3-15eaaf0dcdde",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "242a2af4-6ebf-557c-9244-9b0f53b4b62d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "242a2af4-6ebf-557c-9244-9b0f53b4b62d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "1f5e17c1-6262-529a-b6ba-4fe54cfcea64"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "1f5e17c1-6262-529a-b6ba-4fe54cfcea64",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "c869e3e9-18ba-5e76-b6db-fe4801adf807"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c869e3e9-18ba-5e76-b6db-fe4801adf807",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": 
false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "88967992-8eab-57db-906b-1dd8747517bc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "88967992-8eab-57db-906b-1dd8747517bc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "66753044-af15-5910-a7d4-ecf4aa576518"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "66753044-af15-5910-a7d4-ecf4aa576518",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "4bdc699c-e210-5e33-8c43-bf51682fedfc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4bdc699c-e210-5e33-8c43-bf51682fedfc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "9cddb254-3f6c-58cc-a12d-7eb4a447c231"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9cddb254-3f6c-58cc-a12d-7eb4a447c231",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "4db4b2fa-e33c-51e6-8973-0ba133f73e9f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4db4b2fa-e33c-51e6-8973-0ba133f73e9f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "b1f233be-b5db-56c8-bb8e-4638dc1fb139"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b1f233be-b5db-56c8-bb8e-4638dc1fb139",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "121f017d-9d04-5148-a371-6eda2a650486"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "121f017d-9d04-5148-a371-6eda2a650486",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' 
"2d4a7067-69e6-52b2-963e-95b2050f9992"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2d4a7067-69e6-52b2-963e-95b2050f9992",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "abd5b98d-34a0-4149-a02a-aeb56dd4cd7f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "abd5b98d-34a0-4149-a02a-aeb56dd4cd7f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "abd5b98d-34a0-4149-a02a-aeb56dd4cd7f",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "4a5b3a3b-9fdc-407b-a3a1-221f8a569fec",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "df345052-567b-40ac-9d4e-9be0e7f494f5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "6f5dc2b2-33be-4b8d-9e55-bee079227419"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6f5dc2b2-33be-4b8d-9e55-bee079227419",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' 
"get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "6f5dc2b2-33be-4b8d-9e55-bee079227419",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "94b1637d-df7f-4aaa-bceb-e586480f1475",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "a9002dc1-0994-4ea6-aa4d-2e238e16b513",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "ca16d4ac-f761-48c3-82df-2ceafee19fa2"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ca16d4ac-f761-48c3-82df-2ceafee19fa2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ca16d4ac-f761-48c3-82df-2ceafee19fa2",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "77ce8126-7843-4046-8e13-400bab53b9a9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "ec8e43a9-169b-4478-b9d6-44ebbafabc20",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "faed24f7-f832-454a-b0ae-25bf1d490be8"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "faed24f7-f832-454a-b0ae-25bf1d490be8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": 
false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:12:00.206 18:39:00 blockdev_general -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:00.206 18:39:00 blockdev_general -- bdev/blockdev.sh@751 -- # hello_world_bdev=Malloc0 00:12:00.206 18:39:00 blockdev_general -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:00.206 18:39:00 blockdev_general -- bdev/blockdev.sh@753 -- # killprocess 115963 00:12:00.206 18:39:00 blockdev_general -- common/autotest_common.sh@950 -- # '[' -z 115963 ']' 00:12:00.206 18:39:00 blockdev_general -- common/autotest_common.sh@954 -- # kill -0 115963 00:12:00.206 18:39:00 blockdev_general -- common/autotest_common.sh@955 -- # uname 00:12:00.206 18:39:00 blockdev_general -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:00.206 18:39:00 blockdev_general -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 115963 00:12:00.206 18:39:00 blockdev_general -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:00.206 killing process with pid 115963 00:12:00.206 18:39:00 blockdev_general -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:00.206 18:39:00 blockdev_general -- common/autotest_common.sh@968 -- # echo 'killing process with pid 115963' 00:12:00.206 18:39:00 blockdev_general -- common/autotest_common.sh@969 -- # kill 115963 00:12:00.206 18:39:00 blockdev_general -- common/autotest_common.sh@974 -- # wait 115963 00:12:04.387 18:39:04 blockdev_general -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:04.387 18:39:04 blockdev_general -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:04.387 18:39:04 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:04.387 18:39:04 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:04.387 18:39:04 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:04.387 ************************************ 00:12:04.387 START TEST bdev_hello_world 00:12:04.387 ************************************ 00:12:04.388 18:39:04 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:04.388 [2024-07-25 18:39:04.706409] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:04.388 [2024-07-25 18:39:04.706681] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116075 ] 00:12:04.388 [2024-07-25 18:39:04.895483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.646 [2024-07-25 18:39:05.124902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.211 [2024-07-25 18:39:05.607171] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:05.211 [2024-07-25 18:39:05.607264] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:05.211 [2024-07-25 18:39:05.615070] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:05.211 [2024-07-25 18:39:05.615113] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:05.211 [2024-07-25 18:39:05.623105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:05.211 [2024-07-25 18:39:05.623168] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:05.211 [2024-07-25 18:39:05.623223] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:05.473 [2024-07-25 18:39:05.855053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:05.473 [2024-07-25 18:39:05.855150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.473 [2024-07-25 18:39:05.855188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:05.473 [2024-07-25 18:39:05.855217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.473 [2024-07-25 18:39:05.857744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.473 [2024-07-25 18:39:05.857801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:05.755 [2024-07-25 18:39:06.227228] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:05.755 [2024-07-25 18:39:06.227388] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:12:05.755 [2024-07-25 18:39:06.227530] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:05.755 [2024-07-25 18:39:06.227705] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:05.755 [2024-07-25 18:39:06.227915] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:05.755 [2024-07-25 18:39:06.228005] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:05.755 [2024-07-25 18:39:06.228176] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
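What just ran is the standalone hello_bdev example completing one write and one read-back against Malloc0; the "Hello World!" line above is the read result. Rerunning it outside the harness needs nothing beyond the same JSON config and bdev name that blockdev.sh passed in (a minimal sketch, shown with paths relative to the SPDK repo root rather than the absolute /home/vagrant paths in this log):

    ./build/examples/hello_bdev --json test/bdev/bdev.json -b Malloc0
    # the final notices should mirror the ones above, ending in
    # "Read string from bdev : Hello World!"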
00:12:05.755 00:12:05.755 [2024-07-25 18:39:06.228281] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:09.036 00:12:09.036 real 0m4.412s 00:12:09.036 user 0m3.711s 00:12:09.036 sys 0m0.557s 00:12:09.036 18:39:09 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:09.036 18:39:09 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:09.036 ************************************ 00:12:09.036 END TEST bdev_hello_world 00:12:09.036 ************************************ 00:12:09.036 18:39:09 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:09.036 18:39:09 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:09.036 18:39:09 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:09.036 18:39:09 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:09.036 ************************************ 00:12:09.036 START TEST bdev_bounds 00:12:09.036 ************************************ 00:12:09.036 18:39:09 blockdev_general.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:12:09.036 18:39:09 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=116149 00:12:09.036 18:39:09 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:09.036 Process bdevio pid: 116149 00:12:09.036 18:39:09 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 116149' 00:12:09.036 18:39:09 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 116149 00:12:09.036 18:39:09 blockdev_general.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:09.036 18:39:09 blockdev_general.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 116149 ']' 00:12:09.036 18:39:09 blockdev_general.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.036 18:39:09 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:09.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.036 18:39:09 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.036 18:39:09 blockdev_general.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:09.036 18:39:09 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:09.036 [2024-07-25 18:39:09.158621] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:09.036 [2024-07-25 18:39:09.158802] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116149 ] 00:12:09.036 [2024-07-25 18:39:09.333521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:09.036 [2024-07-25 18:39:09.581126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.036 [2024-07-25 18:39:09.581070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.036 [2024-07-25 18:39:09.581135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.602 [2024-07-25 18:39:10.071298] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:09.602 [2024-07-25 18:39:10.071407] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:09.602 [2024-07-25 18:39:10.079229] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:09.602 [2024-07-25 18:39:10.079282] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:09.602 [2024-07-25 18:39:10.087253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:09.602 [2024-07-25 18:39:10.087315] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:09.602 [2024-07-25 18:39:10.087364] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:09.860 [2024-07-25 18:39:10.338882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:09.860 [2024-07-25 18:39:10.339006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.860 [2024-07-25 18:39:10.339053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:09.860 [2024-07-25 18:39:10.339097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.860 [2024-07-25 18:39:10.341807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.860 [2024-07-25 18:39:10.341868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:10.427 18:39:10 blockdev_general.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:10.427 18:39:10 blockdev_general.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:12:10.427 18:39:10 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:10.427 I/O targets: 00:12:10.427 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:12:10.427 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:12:10.427 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:12:10.427 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:12:10.427 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:12:10.427 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:12:10.427 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:12:10.427 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:12:10.427 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:12:10.427 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:12:10.427 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:12:10.427 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:12:10.427 raid0: 131072 blocks of 512 bytes (64 MiB) 00:12:10.427 concat0: 131072 blocks of 512 bytes (64 MiB) 
00:12:10.427 raid1: 65536 blocks of 512 bytes (32 MiB) 00:12:10.427 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:12:10.427 00:12:10.427 00:12:10.427 CUnit - A unit testing framework for C - Version 2.1-3 00:12:10.427 http://cunit.sourceforge.net/ 00:12:10.427 00:12:10.427 00:12:10.427 Suite: bdevio tests on: AIO0 00:12:10.427 Test: blockdev write read block ...passed 00:12:10.427 Test: blockdev write zeroes read block ...passed 00:12:10.427 Test: blockdev write zeroes read no split ...passed 00:12:10.427 Test: blockdev write zeroes read split ...passed 00:12:10.427 Test: blockdev write zeroes read split partial ...passed 00:12:10.427 Test: blockdev reset ...passed 00:12:10.427 Test: blockdev write read 8 blocks ...passed 00:12:10.427 Test: blockdev write read size > 128k ...passed 00:12:10.427 Test: blockdev write read invalid size ...passed 00:12:10.427 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:10.427 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:10.427 Test: blockdev write read max offset ...passed 00:12:10.427 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:10.427 Test: blockdev writev readv 8 blocks ...passed 00:12:10.427 Test: blockdev writev readv 30 x 1block ...passed 00:12:10.427 Test: blockdev writev readv block ...passed 00:12:10.427 Test: blockdev writev readv size > 128k ...passed 00:12:10.427 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:10.427 Test: blockdev comparev and writev ...passed 00:12:10.427 Test: blockdev nvme passthru rw ...passed 00:12:10.427 Test: blockdev nvme passthru vendor specific ...passed 00:12:10.427 Test: blockdev nvme admin passthru ...passed 00:12:10.427 Test: blockdev copy ...passed 00:12:10.427 Suite: bdevio tests on: raid1 00:12:10.427 Test: blockdev write read block ...passed 00:12:10.427 Test: blockdev write zeroes read block ...passed 00:12:10.427 Test: blockdev write zeroes read no split ...passed 00:12:10.686 Test: blockdev write zeroes read split ...passed 00:12:10.686 Test: blockdev write zeroes read split partial ...passed 00:12:10.686 Test: blockdev reset ...passed 00:12:10.686 Test: blockdev write read 8 blocks ...passed 00:12:10.686 Test: blockdev write read size > 128k ...passed 00:12:10.686 Test: blockdev write read invalid size ...passed 00:12:10.686 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:10.686 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:10.686 Test: blockdev write read max offset ...passed 00:12:10.686 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:10.686 Test: blockdev writev readv 8 blocks ...passed 00:12:10.686 Test: blockdev writev readv 30 x 1block ...passed 00:12:10.686 Test: blockdev writev readv block ...passed 00:12:10.686 Test: blockdev writev readv size > 128k ...passed 00:12:10.686 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:10.686 Test: blockdev comparev and writev ...passed 00:12:10.686 Test: blockdev nvme passthru rw ...passed 00:12:10.686 Test: blockdev nvme passthru vendor specific ...passed 00:12:10.686 Test: blockdev nvme admin passthru ...passed 00:12:10.686 Test: blockdev copy ...passed 00:12:10.686 Suite: bdevio tests on: concat0 00:12:10.686 Test: blockdev write read block ...passed 00:12:10.686 Test: blockdev write zeroes read block ...passed 00:12:10.686 Test: blockdev write zeroes read no split ...passed 00:12:10.686 Test: blockdev write zeroes read split 
...passed 00:12:10.686 Test: blockdev write zeroes read split partial ...passed 00:12:10.686 Test: blockdev reset ...passed 00:12:10.686 Test: blockdev write read 8 blocks ...passed 00:12:10.686 Test: blockdev write read size > 128k ...passed 00:12:10.686 Test: blockdev write read invalid size ...passed 00:12:10.686 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:10.686 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:10.686 Test: blockdev write read max offset ...passed 00:12:10.686 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:10.686 Test: blockdev writev readv 8 blocks ...passed 00:12:10.686 Test: blockdev writev readv 30 x 1block ...passed 00:12:10.686 Test: blockdev writev readv block ...passed 00:12:10.686 Test: blockdev writev readv size > 128k ...passed 00:12:10.686 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:10.686 Test: blockdev comparev and writev ...passed 00:12:10.686 Test: blockdev nvme passthru rw ...passed 00:12:10.686 Test: blockdev nvme passthru vendor specific ...passed 00:12:10.686 Test: blockdev nvme admin passthru ...passed 00:12:10.686 Test: blockdev copy ...passed 00:12:10.686 Suite: bdevio tests on: raid0 00:12:10.686 Test: blockdev write read block ...passed 00:12:10.686 Test: blockdev write zeroes read block ...passed 00:12:10.686 Test: blockdev write zeroes read no split ...passed 00:12:10.686 Test: blockdev write zeroes read split ...passed 00:12:10.686 Test: blockdev write zeroes read split partial ...passed 00:12:10.686 Test: blockdev reset ...passed 00:12:10.686 Test: blockdev write read 8 blocks ...passed 00:12:10.686 Test: blockdev write read size > 128k ...passed 00:12:10.686 Test: blockdev write read invalid size ...passed 00:12:10.686 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:10.686 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:10.686 Test: blockdev write read max offset ...passed 00:12:10.686 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:10.686 Test: blockdev writev readv 8 blocks ...passed 00:12:10.686 Test: blockdev writev readv 30 x 1block ...passed 00:12:10.686 Test: blockdev writev readv block ...passed 00:12:10.686 Test: blockdev writev readv size > 128k ...passed 00:12:10.686 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:10.686 Test: blockdev comparev and writev ...passed 00:12:10.686 Test: blockdev nvme passthru rw ...passed 00:12:10.686 Test: blockdev nvme passthru vendor specific ...passed 00:12:10.686 Test: blockdev nvme admin passthru ...passed 00:12:10.686 Test: blockdev copy ...passed 00:12:10.686 Suite: bdevio tests on: TestPT 00:12:10.686 Test: blockdev write read block ...passed 00:12:10.686 Test: blockdev write zeroes read block ...passed 00:12:10.686 Test: blockdev write zeroes read no split ...passed 00:12:10.686 Test: blockdev write zeroes read split ...passed 00:12:10.686 Test: blockdev write zeroes read split partial ...passed 00:12:10.686 Test: blockdev reset ...passed 00:12:10.686 Test: blockdev write read 8 blocks ...passed 00:12:10.686 Test: blockdev write read size > 128k ...passed 00:12:10.686 Test: blockdev write read invalid size ...passed 00:12:10.686 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:10.686 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:10.686 Test: blockdev write read max offset ...passed 00:12:10.686 Test: 
blockdev write read 2 blocks on overlapped address offset ...passed 00:12:10.686 Test: blockdev writev readv 8 blocks ...passed 00:12:10.686 Test: blockdev writev readv 30 x 1block ...passed 00:12:10.686 Test: blockdev writev readv block ...passed 00:12:10.686 Test: blockdev writev readv size > 128k ...passed 00:12:10.686 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:10.686 Test: blockdev comparev and writev ...passed 00:12:10.686 Test: blockdev nvme passthru rw ...passed 00:12:10.686 Test: blockdev nvme passthru vendor specific ...passed 00:12:10.686 Test: blockdev nvme admin passthru ...passed 00:12:10.686 Test: blockdev copy ...passed 00:12:10.686 Suite: bdevio tests on: Malloc2p7 00:12:10.686 Test: blockdev write read block ...passed 00:12:10.686 Test: blockdev write zeroes read block ...passed 00:12:10.945 Test: blockdev write zeroes read no split ...passed 00:12:10.945 Test: blockdev write zeroes read split ...passed 00:12:10.945 Test: blockdev write zeroes read split partial ...passed 00:12:10.945 Test: blockdev reset ...passed 00:12:10.945 Test: blockdev write read 8 blocks ...passed 00:12:10.945 Test: blockdev write read size > 128k ...passed 00:12:10.945 Test: blockdev write read invalid size ...passed 00:12:10.945 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:10.945 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:10.945 Test: blockdev write read max offset ...passed 00:12:10.945 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:10.945 Test: blockdev writev readv 8 blocks ...passed 00:12:10.945 Test: blockdev writev readv 30 x 1block ...passed 00:12:10.945 Test: blockdev writev readv block ...passed 00:12:10.945 Test: blockdev writev readv size > 128k ...passed 00:12:10.945 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:10.945 Test: blockdev comparev and writev ...passed 00:12:10.945 Test: blockdev nvme passthru rw ...passed 00:12:10.945 Test: blockdev nvme passthru vendor specific ...passed 00:12:10.945 Test: blockdev nvme admin passthru ...passed 00:12:10.945 Test: blockdev copy ...passed 00:12:10.945 Suite: bdevio tests on: Malloc2p6 00:12:10.945 Test: blockdev write read block ...passed 00:12:10.945 Test: blockdev write zeroes read block ...passed 00:12:10.945 Test: blockdev write zeroes read no split ...passed 00:12:10.945 Test: blockdev write zeroes read split ...passed 00:12:10.945 Test: blockdev write zeroes read split partial ...passed 00:12:10.945 Test: blockdev reset ...passed 00:12:10.945 Test: blockdev write read 8 blocks ...passed 00:12:10.945 Test: blockdev write read size > 128k ...passed 00:12:10.945 Test: blockdev write read invalid size ...passed 00:12:10.945 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:10.945 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:10.945 Test: blockdev write read max offset ...passed 00:12:10.945 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:10.945 Test: blockdev writev readv 8 blocks ...passed 00:12:10.945 Test: blockdev writev readv 30 x 1block ...passed 00:12:10.945 Test: blockdev writev readv block ...passed 00:12:10.945 Test: blockdev writev readv size > 128k ...passed 00:12:10.945 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:10.945 Test: blockdev comparev and writev ...passed 00:12:10.945 Test: blockdev nvme passthru rw ...passed 00:12:10.945 Test: blockdev nvme passthru vendor 
specific ...passed 00:12:10.945 Test: blockdev nvme admin passthru ...passed 00:12:10.945 Test: blockdev copy ...passed 00:12:10.945 Suite: bdevio tests on: Malloc2p5 00:12:10.946 Test: blockdev write read block ...passed 00:12:10.946 Test: blockdev write zeroes read block ...passed 00:12:10.946 Test: blockdev write zeroes read no split ...passed 00:12:10.946 Test: blockdev write zeroes read split ...passed 00:12:10.946 Test: blockdev write zeroes read split partial ...passed 00:12:10.946 Test: blockdev reset ...passed 00:12:10.946 Test: blockdev write read 8 blocks ...passed 00:12:10.946 Test: blockdev write read size > 128k ...passed 00:12:10.946 Test: blockdev write read invalid size ...passed 00:12:10.946 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:10.946 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:10.946 Test: blockdev write read max offset ...passed 00:12:10.946 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:10.946 Test: blockdev writev readv 8 blocks ...passed 00:12:10.946 Test: blockdev writev readv 30 x 1block ...passed 00:12:10.946 Test: blockdev writev readv block ...passed 00:12:10.946 Test: blockdev writev readv size > 128k ...passed 00:12:10.946 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:10.946 Test: blockdev comparev and writev ...passed 00:12:10.946 Test: blockdev nvme passthru rw ...passed 00:12:10.946 Test: blockdev nvme passthru vendor specific ...passed 00:12:10.946 Test: blockdev nvme admin passthru ...passed 00:12:10.946 Test: blockdev copy ...passed 00:12:10.946 Suite: bdevio tests on: Malloc2p4 00:12:10.946 Test: blockdev write read block ...passed 00:12:10.946 Test: blockdev write zeroes read block ...passed 00:12:10.946 Test: blockdev write zeroes read no split ...passed 00:12:10.946 Test: blockdev write zeroes read split ...passed 00:12:10.946 Test: blockdev write zeroes read split partial ...passed 00:12:10.946 Test: blockdev reset ...passed 00:12:10.946 Test: blockdev write read 8 blocks ...passed 00:12:10.946 Test: blockdev write read size > 128k ...passed 00:12:10.946 Test: blockdev write read invalid size ...passed 00:12:10.946 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:10.946 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:10.946 Test: blockdev write read max offset ...passed 00:12:10.946 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:10.946 Test: blockdev writev readv 8 blocks ...passed 00:12:10.946 Test: blockdev writev readv 30 x 1block ...passed 00:12:10.946 Test: blockdev writev readv block ...passed 00:12:10.946 Test: blockdev writev readv size > 128k ...passed 00:12:10.946 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:10.946 Test: blockdev comparev and writev ...passed 00:12:10.946 Test: blockdev nvme passthru rw ...passed 00:12:10.946 Test: blockdev nvme passthru vendor specific ...passed 00:12:10.946 Test: blockdev nvme admin passthru ...passed 00:12:10.946 Test: blockdev copy ...passed 00:12:10.946 Suite: bdevio tests on: Malloc2p3 00:12:10.946 Test: blockdev write read block ...passed 00:12:10.946 Test: blockdev write zeroes read block ...passed 00:12:10.946 Test: blockdev write zeroes read no split ...passed 00:12:11.204 Test: blockdev write zeroes read split ...passed 00:12:11.204 Test: blockdev write zeroes read split partial ...passed 00:12:11.204 Test: blockdev reset ...passed 00:12:11.204 Test: 
blockdev write read 8 blocks ...passed 00:12:11.204 Test: blockdev write read size > 128k ...passed 00:12:11.204 Test: blockdev write read invalid size ...passed 00:12:11.204 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:11.204 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:11.204 Test: blockdev write read max offset ...passed 00:12:11.204 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:11.204 Test: blockdev writev readv 8 blocks ...passed 00:12:11.204 Test: blockdev writev readv 30 x 1block ...passed 00:12:11.205 Test: blockdev writev readv block ...passed 00:12:11.205 Test: blockdev writev readv size > 128k ...passed 00:12:11.205 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:11.205 Test: blockdev comparev and writev ...passed 00:12:11.205 Test: blockdev nvme passthru rw ...passed 00:12:11.205 Test: blockdev nvme passthru vendor specific ...passed 00:12:11.205 Test: blockdev nvme admin passthru ...passed 00:12:11.205 Test: blockdev copy ...passed 00:12:11.205 Suite: bdevio tests on: Malloc2p2 00:12:11.205 Test: blockdev write read block ...passed 00:12:11.205 Test: blockdev write zeroes read block ...passed 00:12:11.205 Test: blockdev write zeroes read no split ...passed 00:12:11.205 Test: blockdev write zeroes read split ...passed 00:12:11.205 Test: blockdev write zeroes read split partial ...passed 00:12:11.205 Test: blockdev reset ...passed 00:12:11.205 Test: blockdev write read 8 blocks ...passed 00:12:11.205 Test: blockdev write read size > 128k ...passed 00:12:11.205 Test: blockdev write read invalid size ...passed 00:12:11.205 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:11.205 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:11.205 Test: blockdev write read max offset ...passed 00:12:11.205 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:11.205 Test: blockdev writev readv 8 blocks ...passed 00:12:11.205 Test: blockdev writev readv 30 x 1block ...passed 00:12:11.205 Test: blockdev writev readv block ...passed 00:12:11.205 Test: blockdev writev readv size > 128k ...passed 00:12:11.205 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:11.205 Test: blockdev comparev and writev ...passed 00:12:11.205 Test: blockdev nvme passthru rw ...passed 00:12:11.205 Test: blockdev nvme passthru vendor specific ...passed 00:12:11.205 Test: blockdev nvme admin passthru ...passed 00:12:11.205 Test: blockdev copy ...passed 00:12:11.205 Suite: bdevio tests on: Malloc2p1 00:12:11.205 Test: blockdev write read block ...passed 00:12:11.205 Test: blockdev write zeroes read block ...passed 00:12:11.205 Test: blockdev write zeroes read no split ...passed 00:12:11.205 Test: blockdev write zeroes read split ...passed 00:12:11.205 Test: blockdev write zeroes read split partial ...passed 00:12:11.205 Test: blockdev reset ...passed 00:12:11.205 Test: blockdev write read 8 blocks ...passed 00:12:11.205 Test: blockdev write read size > 128k ...passed 00:12:11.205 Test: blockdev write read invalid size ...passed 00:12:11.205 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:11.205 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:11.205 Test: blockdev write read max offset ...passed 00:12:11.205 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:11.205 Test: blockdev writev readv 8 blocks ...passed 00:12:11.205 
Test: blockdev writev readv 30 x 1block ...passed 00:12:11.205 Test: blockdev writev readv block ...passed 00:12:11.205 Test: blockdev writev readv size > 128k ...passed 00:12:11.205 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:11.205 Test: blockdev comparev and writev ...passed 00:12:11.205 Test: blockdev nvme passthru rw ...passed 00:12:11.205 Test: blockdev nvme passthru vendor specific ...passed 00:12:11.205 Test: blockdev nvme admin passthru ...passed 00:12:11.205 Test: blockdev copy ...passed 00:12:11.205 Suite: bdevio tests on: Malloc2p0 00:12:11.205 Test: blockdev write read block ...passed 00:12:11.205 Test: blockdev write zeroes read block ...passed 00:12:11.205 Test: blockdev write zeroes read no split ...passed 00:12:11.205 Test: blockdev write zeroes read split ...passed 00:12:11.205 Test: blockdev write zeroes read split partial ...passed 00:12:11.205 Test: blockdev reset ...passed 00:12:11.205 Test: blockdev write read 8 blocks ...passed 00:12:11.205 Test: blockdev write read size > 128k ...passed 00:12:11.205 Test: blockdev write read invalid size ...passed 00:12:11.205 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:11.205 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:11.205 Test: blockdev write read max offset ...passed 00:12:11.205 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:11.205 Test: blockdev writev readv 8 blocks ...passed 00:12:11.205 Test: blockdev writev readv 30 x 1block ...passed 00:12:11.205 Test: blockdev writev readv block ...passed 00:12:11.205 Test: blockdev writev readv size > 128k ...passed 00:12:11.205 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:11.205 Test: blockdev comparev and writev ...passed 00:12:11.205 Test: blockdev nvme passthru rw ...passed 00:12:11.205 Test: blockdev nvme passthru vendor specific ...passed 00:12:11.205 Test: blockdev nvme admin passthru ...passed 00:12:11.205 Test: blockdev copy ...passed 00:12:11.205 Suite: bdevio tests on: Malloc1p1 00:12:11.205 Test: blockdev write read block ...passed 00:12:11.205 Test: blockdev write zeroes read block ...passed 00:12:11.205 Test: blockdev write zeroes read no split ...passed 00:12:11.464 Test: blockdev write zeroes read split ...passed 00:12:11.464 Test: blockdev write zeroes read split partial ...passed 00:12:11.464 Test: blockdev reset ...passed 00:12:11.464 Test: blockdev write read 8 blocks ...passed 00:12:11.464 Test: blockdev write read size > 128k ...passed 00:12:11.464 Test: blockdev write read invalid size ...passed 00:12:11.464 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:11.464 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:11.464 Test: blockdev write read max offset ...passed 00:12:11.464 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:11.464 Test: blockdev writev readv 8 blocks ...passed 00:12:11.464 Test: blockdev writev readv 30 x 1block ...passed 00:12:11.464 Test: blockdev writev readv block ...passed 00:12:11.464 Test: blockdev writev readv size > 128k ...passed 00:12:11.464 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:11.464 Test: blockdev comparev and writev ...passed 00:12:11.464 Test: blockdev nvme passthru rw ...passed 00:12:11.464 Test: blockdev nvme passthru vendor specific ...passed 00:12:11.464 Test: blockdev nvme admin passthru ...passed 00:12:11.464 Test: blockdev copy ...passed 00:12:11.464 Suite: 
bdevio tests on: Malloc1p0 00:12:11.464 Test: blockdev write read block ...passed 00:12:11.464 Test: blockdev write zeroes read block ...passed 00:12:11.464 Test: blockdev write zeroes read no split ...passed 00:12:11.464 Test: blockdev write zeroes read split ...passed 00:12:11.464 Test: blockdev write zeroes read split partial ...passed 00:12:11.464 Test: blockdev reset ...passed 00:12:11.464 Test: blockdev write read 8 blocks ...passed 00:12:11.464 Test: blockdev write read size > 128k ...passed 00:12:11.464 Test: blockdev write read invalid size ...passed 00:12:11.464 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:11.464 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:11.464 Test: blockdev write read max offset ...passed 00:12:11.464 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:11.464 Test: blockdev writev readv 8 blocks ...passed 00:12:11.464 Test: blockdev writev readv 30 x 1block ...passed 00:12:11.464 Test: blockdev writev readv block ...passed 00:12:11.464 Test: blockdev writev readv size > 128k ...passed 00:12:11.464 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:11.464 Test: blockdev comparev and writev ...passed 00:12:11.464 Test: blockdev nvme passthru rw ...passed 00:12:11.464 Test: blockdev nvme passthru vendor specific ...passed 00:12:11.464 Test: blockdev nvme admin passthru ...passed 00:12:11.464 Test: blockdev copy ...passed 00:12:11.464 Suite: bdevio tests on: Malloc0 00:12:11.464 Test: blockdev write read block ...passed 00:12:11.464 Test: blockdev write zeroes read block ...passed 00:12:11.465 Test: blockdev write zeroes read no split ...passed 00:12:11.465 Test: blockdev write zeroes read split ...passed 00:12:11.465 Test: blockdev write zeroes read split partial ...passed 00:12:11.465 Test: blockdev reset ...passed 00:12:11.465 Test: blockdev write read 8 blocks ...passed 00:12:11.465 Test: blockdev write read size > 128k ...passed 00:12:11.465 Test: blockdev write read invalid size ...passed 00:12:11.465 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:11.465 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:11.465 Test: blockdev write read max offset ...passed 00:12:11.465 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:11.465 Test: blockdev writev readv 8 blocks ...passed 00:12:11.465 Test: blockdev writev readv 30 x 1block ...passed 00:12:11.465 Test: blockdev writev readv block ...passed 00:12:11.465 Test: blockdev writev readv size > 128k ...passed 00:12:11.465 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:11.465 Test: blockdev comparev and writev ...passed 00:12:11.465 Test: blockdev nvme passthru rw ...passed 00:12:11.465 Test: blockdev nvme passthru vendor specific ...passed 00:12:11.465 Test: blockdev nvme admin passthru ...passed 00:12:11.465 Test: blockdev copy ...passed 00:12:11.465 00:12:11.465 Run Summary: Type Total Ran Passed Failed Inactive 00:12:11.465 suites 16 16 n/a 0 0 00:12:11.465 tests 368 368 368 0 0 00:12:11.465 asserts 2224 2224 2224 0 n/a 00:12:11.465 00:12:11.465 Elapsed time = 3.061 seconds 00:12:11.465 0 00:12:11.465 18:39:11 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 116149 00:12:11.465 18:39:11 blockdev_general.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 116149 ']' 00:12:11.465 18:39:11 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 116149 
00:12:11.465 18:39:11 blockdev_general.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:12:11.465 18:39:11 blockdev_general.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:11.465 18:39:11 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 116149 00:12:11.465 18:39:12 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:11.465 18:39:12 blockdev_general.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:11.465 killing process with pid 116149 00:12:11.465 18:39:12 blockdev_general.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 116149' 00:12:11.465 18:39:12 blockdev_general.bdev_bounds -- common/autotest_common.sh@969 -- # kill 116149 00:12:11.465 18:39:12 blockdev_general.bdev_bounds -- common/autotest_common.sh@974 -- # wait 116149 00:12:13.996 18:39:14 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:13.996 00:12:13.996 real 0m5.161s 00:12:13.996 user 0m13.188s 00:12:13.996 sys 0m0.632s 00:12:13.996 18:39:14 blockdev_general.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:13.996 18:39:14 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:13.996 ************************************ 00:12:13.996 END TEST bdev_bounds 00:12:13.996 ************************************ 00:12:13.996 18:39:14 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:13.996 18:39:14 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:13.996 18:39:14 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:13.996 18:39:14 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:13.996 ************************************ 00:12:13.996 START TEST bdev_nbd 00:12:13.996 ************************************ 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=16 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=16 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=116245 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 116245 /var/tmp/spdk-nbd.sock 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 116245 ']' 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:13.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:13.996 18:39:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:13.996 [2024-07-25 18:39:14.403421] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:13.996 [2024-07-25 18:39:14.403579] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.255 [2024-07-25 18:39:14.569384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.255 [2024-07-25 18:39:14.777003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.820 [2024-07-25 18:39:15.153160] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:14.820 [2024-07-25 18:39:15.153262] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:14.820 [2024-07-25 18:39:15.161107] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:14.820 [2024-07-25 18:39:15.161168] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:14.820 [2024-07-25 18:39:15.169127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:14.820 [2024-07-25 18:39:15.169213] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:14.820 [2024-07-25 18:39:15.169261] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:14.820 [2024-07-25 18:39:15.362859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:14.820 [2024-07-25 18:39:15.362962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.820 [2024-07-25 18:39:15.363015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:14.820 [2024-07-25 18:39:15.363059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.820 [2024-07-25 18:39:15.365670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.820 [2024-07-25 18:39:15.365734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:15.387 18:39:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:15.387 18:39:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:12:15.387 18:39:15 blockdev_general.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:15.387 18:39:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:15.387 18:39:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:15.387 18:39:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:15.387 18:39:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:15.387 18:39:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:15.387 18:39:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # 
bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:15.387 18:39:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:15.387 18:39:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:15.387 18:39:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:15.387 18:39:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:15.387 18:39:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:15.387 18:39:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:12:15.645 18:39:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:15.645 18:39:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:15.645 18:39:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:15.645 18:39:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:15.645 18:39:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:15.645 18:39:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:15.645 18:39:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:15.645 18:39:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:15.645 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:15.645 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:15.645 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:15.645 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:15.645 1+0 records in 00:12:15.645 1+0 records out 00:12:15.645 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292247 s, 14.0 MB/s 00:12:15.645 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:15.645 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:15.645 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:15.645 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:15.645 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:15.645 18:39:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:15.645 18:39:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:15.645 18:39:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:12:15.903 18:39:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:15.903 18:39:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:15.903 18:39:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:15.903 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 
00:12:15.903 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:15.903 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:15.903 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:15.903 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:15.903 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:15.903 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:15.903 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:15.903 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:15.903 1+0 records in 00:12:15.903 1+0 records out 00:12:15.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288681 s, 14.2 MB/s 00:12:15.904 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:15.904 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:15.904 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:15.904 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:15.904 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:15.904 18:39:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:15.904 18:39:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:15.904 18:39:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:12:16.162 18:39:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:16.162 18:39:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:16.162 18:39:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:16.162 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:12:16.162 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:16.162 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:16.162 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:16.162 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:12:16.162 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:16.162 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:16.162 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:16.162 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:16.162 1+0 records in 00:12:16.162 1+0 records out 00:12:16.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00066419 s, 6.2 MB/s 00:12:16.162 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.162 18:39:16 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:12:16.162 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.162 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:16.162 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:16.162 18:39:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:16.162 18:39:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:16.162 18:39:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:12:16.420 18:39:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:16.420 18:39:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:16.420 18:39:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:16.420 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:12:16.420 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:16.420 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:16.420 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:16.420 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:12:16.420 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:16.420 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:16.420 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:16.420 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:16.420 1+0 records in 00:12:16.420 1+0 records out 00:12:16.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404276 s, 10.1 MB/s 00:12:16.420 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.420 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:16.420 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.420 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:16.420 18:39:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:16.420 18:39:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:16.420 18:39:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:16.420 18:39:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:12:16.678 18:39:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:16.678 18:39:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:16.678 18:39:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:16.678 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:12:16.678 18:39:17 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@869 -- # local i 00:12:16.678 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:16.678 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:16.678 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:12:16.678 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:16.678 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:16.678 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:16.678 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:16.678 1+0 records in 00:12:16.678 1+0 records out 00:12:16.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041387 s, 9.9 MB/s 00:12:16.678 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.678 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:16.678 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.678 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:16.678 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:16.678 18:39:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:16.678 18:39:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:16.678 18:39:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:12:16.936 18:39:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:16.936 18:39:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:16.936 18:39:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:16.936 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:12:16.936 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:16.936 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:16.936 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:16.936 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:12:16.936 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:16.936 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:16.936 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:16.936 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:16.936 1+0 records in 00:12:16.936 1+0 records out 00:12:16.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366262 s, 11.2 MB/s 00:12:16.936 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.936 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 
00:12:16.936 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.936 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:16.936 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:16.936 18:39:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:16.936 18:39:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:16.936 18:39:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:12:17.502 18:39:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:12:17.502 18:39:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:12:17.502 18:39:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:12:17.502 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:12:17.502 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:17.502 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:17.502 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:17.502 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:12:17.502 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:17.502 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:17.502 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:17.502 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:17.502 1+0 records in 00:12:17.502 1+0 records out 00:12:17.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437174 s, 9.4 MB/s 00:12:17.502 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.502 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:17.502 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.502 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:17.502 18:39:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:17.502 18:39:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:17.502 18:39:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:17.502 18:39:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:12:17.760 18:39:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:12:17.760 18:39:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:12:17.760 18:39:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:12:17.760 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd7 00:12:17.760 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:17.760 18:39:18 
blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:17.760 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:17.760 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd7 /proc/partitions 00:12:17.760 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:17.760 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:17.760 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:17.760 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:17.760 1+0 records in 00:12:17.760 1+0 records out 00:12:17.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545978 s, 7.5 MB/s 00:12:17.760 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.760 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:17.760 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.760 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:17.760 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:17.760 18:39:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:17.760 18:39:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:17.760 18:39:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:12:18.019 18:39:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:12:18.019 18:39:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:12:18.019 18:39:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:12:18.019 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd8 00:12:18.019 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:18.019 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:18.019 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:18.019 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd8 /proc/partitions 00:12:18.019 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:18.019 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:18.019 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:18.019 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:18.019 1+0 records in 00:12:18.019 1+0 records out 00:12:18.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443989 s, 9.2 MB/s 00:12:18.019 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.019 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:18.019 18:39:18 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.019 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:18.019 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:18.019 18:39:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:18.019 18:39:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:18.019 18:39:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:12:18.277 18:39:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:12:18.277 18:39:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:12:18.277 18:39:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:12:18.277 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd9 00:12:18.277 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:18.277 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:18.277 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:18.277 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd9 /proc/partitions 00:12:18.277 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:18.277 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:18.277 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:18.277 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:18.277 1+0 records in 00:12:18.277 1+0 records out 00:12:18.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495192 s, 8.3 MB/s 00:12:18.277 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.277 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:18.277 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.277 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:18.277 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:18.278 18:39:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:18.278 18:39:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:18.278 18:39:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:12:18.536 18:39:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:12:18.536 18:39:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:12:18.536 18:39:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:12:18.536 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:12:18.536 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:18.536 18:39:18 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:18.536 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:18.536 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:12:18.536 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:18.536 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:18.536 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:18.536 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:18.536 1+0 records in 00:12:18.536 1+0 records out 00:12:18.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513382 s, 8.0 MB/s 00:12:18.536 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.536 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:18.536 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.536 18:39:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:18.536 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:18.536 18:39:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:18.536 18:39:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:18.536 18:39:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:12:18.794 18:39:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:12:18.794 18:39:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:12:18.794 18:39:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:12:18.794 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:12:18.794 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:18.794 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:18.794 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:18.794 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:12:18.794 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:18.794 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:18.794 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:18.794 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:18.794 1+0 records in 00:12:18.794 1+0 records out 00:12:18.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000939883 s, 4.4 MB/s 00:12:18.794 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.794 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:18.794 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.794 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:18.794 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:18.794 18:39:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:18.794 18:39:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:18.794 18:39:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:12:19.360 18:39:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:12:19.360 18:39:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:12:19.360 18:39:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:12:19.360 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:12:19.360 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:19.360 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:19.360 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:19.360 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:12:19.360 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:19.360 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:19.360 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:19.360 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.360 1+0 records in 00:12:19.360 1+0 records out 00:12:19.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000767265 s, 5.3 MB/s 00:12:19.360 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.360 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:19.360 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.360 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:19.360 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:19.360 18:39:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:19.360 18:39:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:19.360 18:39:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:12:19.618 18:39:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:12:19.618 18:39:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:12:19.618 18:39:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:12:19.618 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:12:19.618 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:19.618 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:19.618 18:39:19 
blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:19.618 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:12:19.618 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:19.618 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:19.618 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:19.618 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.618 1+0 records in 00:12:19.618 1+0 records out 00:12:19.618 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000590343 s, 6.9 MB/s 00:12:19.618 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.618 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:19.618 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.618 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:19.618 18:39:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:19.618 18:39:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:19.618 18:39:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:19.618 18:39:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:12:19.877 18:39:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:12:19.877 18:39:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:12:19.877 18:39:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:12:19.877 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:12:19.877 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:19.877 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:19.877 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:19.877 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:12:19.877 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:19.877 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:19.877 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:19.877 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.877 1+0 records in 00:12:19.877 1+0 records out 00:12:19.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545572 s, 7.5 MB/s 00:12:19.877 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.877 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:19.877 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.877 18:39:20 
blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:19.877 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:19.877 18:39:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:19.877 18:39:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:19.877 18:39:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:12:20.135 18:39:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:12:20.135 18:39:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:12:20.135 18:39:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:12:20.135 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd15 00:12:20.135 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:20.135 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:20.135 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:20.135 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd15 /proc/partitions 00:12:20.135 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:20.135 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:20.135 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:20.135 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:20.135 1+0 records in 00:12:20.135 1+0 records out 00:12:20.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00130381 s, 3.1 MB/s 00:12:20.135 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.135 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:20.135 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.135 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:20.135 18:39:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:20.135 18:39:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:20.135 18:39:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:20.135 18:39:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:20.394 18:39:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd0", 00:12:20.394 "bdev_name": "Malloc0" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd1", 00:12:20.394 "bdev_name": "Malloc1p0" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd2", 00:12:20.394 "bdev_name": "Malloc1p1" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd3", 00:12:20.394 "bdev_name": "Malloc2p0" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd4", 00:12:20.394 "bdev_name": "Malloc2p1" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": 
"/dev/nbd5", 00:12:20.394 "bdev_name": "Malloc2p2" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd6", 00:12:20.394 "bdev_name": "Malloc2p3" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd7", 00:12:20.394 "bdev_name": "Malloc2p4" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd8", 00:12:20.394 "bdev_name": "Malloc2p5" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd9", 00:12:20.394 "bdev_name": "Malloc2p6" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd10", 00:12:20.394 "bdev_name": "Malloc2p7" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd11", 00:12:20.394 "bdev_name": "TestPT" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd12", 00:12:20.394 "bdev_name": "raid0" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd13", 00:12:20.394 "bdev_name": "concat0" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd14", 00:12:20.394 "bdev_name": "raid1" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd15", 00:12:20.394 "bdev_name": "AIO0" 00:12:20.394 } 00:12:20.394 ]' 00:12:20.394 18:39:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:20.394 18:39:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd0", 00:12:20.394 "bdev_name": "Malloc0" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd1", 00:12:20.394 "bdev_name": "Malloc1p0" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd2", 00:12:20.394 "bdev_name": "Malloc1p1" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd3", 00:12:20.394 "bdev_name": "Malloc2p0" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd4", 00:12:20.394 "bdev_name": "Malloc2p1" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd5", 00:12:20.394 "bdev_name": "Malloc2p2" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd6", 00:12:20.394 "bdev_name": "Malloc2p3" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd7", 00:12:20.394 "bdev_name": "Malloc2p4" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd8", 00:12:20.394 "bdev_name": "Malloc2p5" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd9", 00:12:20.394 "bdev_name": "Malloc2p6" 00:12:20.394 }, 00:12:20.394 { 00:12:20.394 "nbd_device": "/dev/nbd10", 00:12:20.394 "bdev_name": "Malloc2p7" 00:12:20.394 }, 00:12:20.394 { 00:12:20.395 "nbd_device": "/dev/nbd11", 00:12:20.395 "bdev_name": "TestPT" 00:12:20.395 }, 00:12:20.395 { 00:12:20.395 "nbd_device": "/dev/nbd12", 00:12:20.395 "bdev_name": "raid0" 00:12:20.395 }, 00:12:20.395 { 00:12:20.395 "nbd_device": "/dev/nbd13", 00:12:20.395 "bdev_name": "concat0" 00:12:20.395 }, 00:12:20.395 { 00:12:20.395 "nbd_device": "/dev/nbd14", 00:12:20.395 "bdev_name": "raid1" 00:12:20.395 }, 00:12:20.395 { 00:12:20.395 "nbd_device": "/dev/nbd15", 00:12:20.395 "bdev_name": "AIO0" 00:12:20.395 } 00:12:20.395 ]' 00:12:20.395 18:39:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:20.395 18:39:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:12:20.395 
18:39:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:20.395 18:39:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:12:20.395 18:39:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:20.395 18:39:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:20.395 18:39:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:20.395 18:39:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:20.653 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:20.653 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:20.653 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:20.653 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:20.653 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:20.653 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:20.653 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:20.653 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:20.653 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:20.653 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:20.911 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:20.911 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:20.911 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:20.911 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:20.911 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:20.911 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:20.911 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:20.911 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:20.911 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:20.911 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:21.169 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:21.169 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:21.169 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:21.169 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:21.169 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:21.169 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:21.169 18:39:21 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:12:21.169 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:21.169 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:21.169 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:21.438 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:21.438 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:21.438 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:21.438 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:21.438 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:21.438 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:21.438 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:21.438 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:21.438 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:21.438 18:39:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:21.704 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:21.704 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:21.704 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:21.704 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:21.704 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:21.704 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:21.704 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:21.704 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:21.704 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:21.704 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:21.960 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:21.960 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:21.960 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:21.960 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:21.960 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:21.960 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:21.960 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:21.960 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:21.960 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:21.960 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:22.218 18:39:22 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:22.218 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:22.218 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:22.218 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.218 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.218 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:22.218 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:22.218 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.218 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.218 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:22.476 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:22.476 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:22.476 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:22.476 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.476 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.476 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:22.476 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:22.476 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.476 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.476 18:39:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:22.734 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:22.734 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:22.734 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:22.734 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.734 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.734 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:22.734 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:22.734 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.734 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.734 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:22.734 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:22.734 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:22.734 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:22.734 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.734 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.734 18:39:23 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:22.734 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:22.734 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.734 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.734 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:22.992 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:22.992 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:22.992 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:22.992 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.992 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.992 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:22.992 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:22.992 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.992 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.992 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:23.253 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:23.253 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:23.253 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:23.253 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.253 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.253 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:23.253 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:23.253 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.253 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.253 18:39:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:23.511 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:23.511 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:23.511 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:23.511 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.511 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.511 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:23.511 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:23.511 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.511 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.511 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:23.769 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:23.769 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:23.769 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:23.769 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.769 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.769 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:23.769 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:23.769 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.769 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.769 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:24.334 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:24.334 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:24.334 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:24.334 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.334 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.334 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:24.334 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:24.334 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.335 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:24.335 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:24.335 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:24.335 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:24.335 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:24.335 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.335 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.335 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:24.335 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:24.335 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.335 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:24.335 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:24.335 18:39:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:24.592 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:24.592 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:24.592 18:39:25 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:24.592 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:24.592 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:24.592 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:24.592 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:24.592 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:24.592 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:24.592 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:24.592 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:24.592 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:24.592 18:39:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:24.592 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:24.592 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:24.592 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:24.592 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:24.592 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:24.592 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:24.592 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:24.592 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:24.592 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:24.593 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:24.593 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:24.593 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:24.593 18:39:25 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:24.593 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:24.593 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:24.862 /dev/nbd0 00:12:24.862 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:24.862 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:24.862 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:24.862 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:24.862 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:24.862 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:24.862 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:24.862 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:24.862 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:24.862 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:24.862 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:24.862 1+0 records in 00:12:24.862 1+0 records out 00:12:24.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000694733 s, 5.9 MB/s 00:12:24.863 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.863 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:24.863 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.863 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:24.863 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:24.863 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:24.863 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:24.863 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:12:25.136 /dev/nbd1 00:12:25.136 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:25.136 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:25.136 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:25.136 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:25.136 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:25.136 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:25.136 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:25.136 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:25.136 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:25.136 
18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:25.136 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.136 1+0 records in 00:12:25.136 1+0 records out 00:12:25.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319822 s, 12.8 MB/s 00:12:25.136 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.394 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:25.394 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.394 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:25.394 18:39:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:25.394 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.394 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:25.394 18:39:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:12:25.651 /dev/nbd10 00:12:25.651 18:39:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:25.651 18:39:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:25.651 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:12:25.651 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:25.651 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:25.651 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:25.651 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:12:25.651 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:25.651 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:25.651 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:25.651 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.651 1+0 records in 00:12:25.651 1+0 records out 00:12:25.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538489 s, 7.6 MB/s 00:12:25.651 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.652 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:25.652 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.652 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:25.652 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:25.652 18:39:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.652 18:39:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:25.652 18:39:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:12:25.910 /dev/nbd11 00:12:25.910 18:39:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:25.910 18:39:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:25.910 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:12:25.910 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:25.910 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:25.910 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:25.910 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:12:25.910 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:25.910 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:25.910 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:25.910 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.910 1+0 records in 00:12:25.910 1+0 records out 00:12:25.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00065422 s, 6.3 MB/s 00:12:25.910 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.910 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:25.910 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.910 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:25.910 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:25.910 18:39:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.910 18:39:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:25.910 18:39:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:12:26.168 /dev/nbd12 00:12:26.168 18:39:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:26.168 18:39:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:26.168 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:12:26.168 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:26.168 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:26.168 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:26.168 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:12:26.168 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:26.168 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:26.168 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:26.168 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:26.168 1+0 records in 00:12:26.168 1+0 records out 00:12:26.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000732728 s, 5.6 MB/s 00:12:26.168 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.168 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:26.168 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.168 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:26.168 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:26.168 18:39:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:26.168 18:39:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:26.168 18:39:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:12:26.426 /dev/nbd13 00:12:26.426 18:39:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:26.426 18:39:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:26.426 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:12:26.426 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:26.426 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:26.426 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:26.426 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:12:26.426 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:26.426 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:26.426 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:26.426 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:26.426 1+0 records in 00:12:26.426 1+0 records out 00:12:26.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448844 s, 9.1 MB/s 00:12:26.426 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.426 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:26.426 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.426 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:26.426 18:39:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:26.426 18:39:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:26.426 18:39:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:26.426 18:39:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:12:26.684 /dev/nbd14 00:12:26.684 18:39:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd14 00:12:26.684 18:39:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:12:26.684 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:12:26.684 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:26.684 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:26.684 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:26.684 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:12:26.684 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:26.684 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:26.684 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:26.684 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:26.684 1+0 records in 00:12:26.684 1+0 records out 00:12:26.684 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473744 s, 8.6 MB/s 00:12:26.684 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.684 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:26.685 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.685 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:26.685 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:26.685 18:39:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:26.685 18:39:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:26.685 18:39:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:12:26.943 /dev/nbd15 00:12:26.943 18:39:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:12:26.943 18:39:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:12:26.943 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd15 00:12:26.943 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:26.943 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:26.943 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:26.943 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd15 /proc/partitions 00:12:26.943 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:26.943 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:26.943 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:26.943 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:26.943 1+0 records in 00:12:26.943 1+0 records out 00:12:26.943 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600029 s, 6.8 MB/s 00:12:26.943 18:39:27 
blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.943 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:26.943 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.943 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:26.943 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:26.943 18:39:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:26.943 18:39:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:26.943 18:39:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:12:27.202 /dev/nbd2 00:12:27.202 18:39:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:12:27.202 18:39:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:12:27.202 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:12:27.202 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:27.202 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:27.202 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:27.202 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:12:27.202 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:27.202 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:27.202 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:27.202 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:27.202 1+0 records in 00:12:27.202 1+0 records out 00:12:27.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434771 s, 9.4 MB/s 00:12:27.202 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.202 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:27.202 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.202 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:27.202 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:27.202 18:39:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:27.202 18:39:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:27.202 18:39:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:12:27.461 /dev/nbd3 00:12:27.461 18:39:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:12:27.461 18:39:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:12:27.461 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:12:27.461 18:39:27 
blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:27.461 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:27.461 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:27.461 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:12:27.461 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:27.461 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:27.461 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:27.461 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:27.461 1+0 records in 00:12:27.461 1+0 records out 00:12:27.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597343 s, 6.9 MB/s 00:12:27.461 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.461 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:27.461 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.461 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:27.461 18:39:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:27.461 18:39:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:27.461 18:39:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:27.461 18:39:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:12:27.721 /dev/nbd4 00:12:27.721 18:39:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:12:27.721 18:39:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:12:27.721 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:12:27.721 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:27.721 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:27.721 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:27.721 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:12:27.721 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:27.721 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:27.721 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:27.721 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:27.721 1+0 records in 00:12:27.721 1+0 records out 00:12:27.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000703395 s, 5.8 MB/s 00:12:27.721 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.721 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:27.721 18:39:28 blockdev_general.bdev_nbd 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.721 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:27.721 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:27.721 18:39:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:27.721 18:39:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:27.721 18:39:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:12:27.980 /dev/nbd5 00:12:27.980 18:39:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:12:27.980 18:39:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:12:27.980 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:12:27.980 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:27.980 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:27.980 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:27.980 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:12:27.980 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:27.980 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:27.980 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:27.980 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:27.980 1+0 records in 00:12:27.980 1+0 records out 00:12:27.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00066429 s, 6.2 MB/s 00:12:27.980 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.980 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:27.980 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.980 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:27.980 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:27.980 18:39:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:27.980 18:39:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:27.980 18:39:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:12:28.238 /dev/nbd6 00:12:28.238 18:39:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:12:28.238 18:39:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:12:28.238 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:12:28.238 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:28.238 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:28.238 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:28.238 18:39:28 
blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:12:28.238 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:28.238 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:28.238 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:28.238 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.238 1+0 records in 00:12:28.238 1+0 records out 00:12:28.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000632923 s, 6.5 MB/s 00:12:28.238 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.238 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:28.238 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.238 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:28.238 18:39:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:28.238 18:39:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:28.238 18:39:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:28.238 18:39:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:12:28.496 /dev/nbd7 00:12:28.496 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:12:28.754 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:12:28.754 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd7 00:12:28.754 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:28.754 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:28.754 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:28.754 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd7 /proc/partitions 00:12:28.754 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:28.754 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:28.754 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:28.754 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.754 1+0 records in 00:12:28.754 1+0 records out 00:12:28.754 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600015 s, 6.8 MB/s 00:12:28.754 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.754 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:28.754 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.754 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:28.754 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 
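Each nbd_start_disk call in this block is followed by the same readiness poll: waitfornbd loops up to 20 times, grepping /proc/partitions for the new device name and breaking out as soon as the kernel has registered it. A hedged reconstruction of that helper (the retry limit and the grep come straight from the trace; the sleep between attempts is an assumption, since the trace only ever shows the first pass succeeding):

  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          # The kernel lists a ready NBD device in /proc/partitions.
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1   # assumed back-off; not visible in the trace
      done
      (( i <= 20 ))   # fail if the device never appeared
  }

After the grep succeeds, the helper goes on to read-verify the device, which is what the dd and stat lines in this block show.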
00:12:28.754 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:28.754 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:28.754 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:12:28.754 /dev/nbd8 00:12:28.754 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:12:28.754 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:12:28.755 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd8 00:12:28.755 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:28.755 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:28.755 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:28.755 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd8 /proc/partitions 00:12:28.755 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:28.755 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:28.755 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:28.755 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.755 1+0 records in 00:12:28.755 1+0 records out 00:12:28.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00079476 s, 5.2 MB/s 00:12:28.755 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.755 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:28.755 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.755 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:28.755 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:28.755 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:28.755 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:28.755 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:12:29.013 /dev/nbd9 00:12:29.013 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:12:29.013 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:12:29.013 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd9 00:12:29.013 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:29.013 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:29.013 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:29.013 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd9 /proc/partitions 00:12:29.013 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:29.013 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 
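Once a device shows up in /proc/partitions, the helper also proves it is readable: it pulls a single 4 KiB block off the device with O_DIRECT, checks that the scratch file is non-empty, and removes it again. A minimal sketch of that probe, with the paths taken from the trace and the function name invented for illustration:

  probe_nbd_read() {
      local dev=$1
      local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
      # One direct-I/O read of the first 4096-byte block of the exported bdev.
      dd if="$dev" of="$tmp" bs=4096 count=1 iflag=direct || return 1
      local size
      size=$(stat -c %s "$tmp")   # the trace reports size=4096
      rm -f "$tmp"
      [ "$size" != 0 ]            # mirrors the '[' 4096 '!=' 0 ']' check
  }

The repeated "1+0 records in / 1+0 records out" lines together with the stat and rm calls throughout this block are exactly this sequence running once per NBD device.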
00:12:29.013 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:29.013 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:29.013 1+0 records in 00:12:29.013 1+0 records out 00:12:29.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000889255 s, 4.6 MB/s 00:12:29.013 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.013 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:29.013 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.013 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:29.014 18:39:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:29.014 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:29.014 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:29.014 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:29.014 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:29.014 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:29.272 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd0", 00:12:29.272 "bdev_name": "Malloc0" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd1", 00:12:29.272 "bdev_name": "Malloc1p0" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd10", 00:12:29.272 "bdev_name": "Malloc1p1" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd11", 00:12:29.272 "bdev_name": "Malloc2p0" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd12", 00:12:29.272 "bdev_name": "Malloc2p1" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd13", 00:12:29.272 "bdev_name": "Malloc2p2" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd14", 00:12:29.272 "bdev_name": "Malloc2p3" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd15", 00:12:29.272 "bdev_name": "Malloc2p4" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd2", 00:12:29.272 "bdev_name": "Malloc2p5" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd3", 00:12:29.272 "bdev_name": "Malloc2p6" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd4", 00:12:29.272 "bdev_name": "Malloc2p7" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd5", 00:12:29.272 "bdev_name": "TestPT" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd6", 00:12:29.272 "bdev_name": "raid0" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd7", 00:12:29.272 "bdev_name": "concat0" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd8", 00:12:29.272 "bdev_name": "raid1" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd9", 00:12:29.272 "bdev_name": "AIO0" 00:12:29.272 } 00:12:29.272 ]' 00:12:29.272 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:29.272 18:39:29 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd0", 00:12:29.272 "bdev_name": "Malloc0" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd1", 00:12:29.272 "bdev_name": "Malloc1p0" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd10", 00:12:29.272 "bdev_name": "Malloc1p1" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd11", 00:12:29.272 "bdev_name": "Malloc2p0" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd12", 00:12:29.272 "bdev_name": "Malloc2p1" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd13", 00:12:29.272 "bdev_name": "Malloc2p2" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd14", 00:12:29.272 "bdev_name": "Malloc2p3" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd15", 00:12:29.272 "bdev_name": "Malloc2p4" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd2", 00:12:29.272 "bdev_name": "Malloc2p5" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd3", 00:12:29.272 "bdev_name": "Malloc2p6" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd4", 00:12:29.272 "bdev_name": "Malloc2p7" 00:12:29.272 }, 00:12:29.272 { 00:12:29.272 "nbd_device": "/dev/nbd5", 00:12:29.273 "bdev_name": "TestPT" 00:12:29.273 }, 00:12:29.273 { 00:12:29.273 "nbd_device": "/dev/nbd6", 00:12:29.273 "bdev_name": "raid0" 00:12:29.273 }, 00:12:29.273 { 00:12:29.273 "nbd_device": "/dev/nbd7", 00:12:29.273 "bdev_name": "concat0" 00:12:29.273 }, 00:12:29.273 { 00:12:29.273 "nbd_device": "/dev/nbd8", 00:12:29.273 "bdev_name": "raid1" 00:12:29.273 }, 00:12:29.273 { 00:12:29.273 "nbd_device": "/dev/nbd9", 00:12:29.273 "bdev_name": "AIO0" 00:12:29.273 } 00:12:29.273 ]' 00:12:29.531 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:29.531 /dev/nbd1 00:12:29.531 /dev/nbd10 00:12:29.531 /dev/nbd11 00:12:29.531 /dev/nbd12 00:12:29.531 /dev/nbd13 00:12:29.531 /dev/nbd14 00:12:29.531 /dev/nbd15 00:12:29.531 /dev/nbd2 00:12:29.531 /dev/nbd3 00:12:29.531 /dev/nbd4 00:12:29.531 /dev/nbd5 00:12:29.531 /dev/nbd6 00:12:29.531 /dev/nbd7 00:12:29.531 /dev/nbd8 00:12:29.531 /dev/nbd9' 00:12:29.531 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:29.531 /dev/nbd1 00:12:29.531 /dev/nbd10 00:12:29.531 /dev/nbd11 00:12:29.531 /dev/nbd12 00:12:29.531 /dev/nbd13 00:12:29.531 /dev/nbd14 00:12:29.531 /dev/nbd15 00:12:29.531 /dev/nbd2 00:12:29.531 /dev/nbd3 00:12:29.531 /dev/nbd4 00:12:29.531 /dev/nbd5 00:12:29.531 /dev/nbd6 00:12:29.531 /dev/nbd7 00:12:29.531 /dev/nbd8 00:12:29.531 /dev/nbd9' 00:12:29.531 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:29.531 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:12:29.531 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:12:29.531 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:12:29.531 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:12:29.531 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:12:29.531 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:29.531 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:29.531 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:29.531 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:29.531 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:29.531 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:29.531 256+0 records in 00:12:29.531 256+0 records out 00:12:29.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00998207 s, 105 MB/s 00:12:29.531 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:29.532 18:39:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:29.532 256+0 records in 00:12:29.532 256+0 records out 00:12:29.532 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154703 s, 6.8 MB/s 00:12:29.532 18:39:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:29.532 18:39:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:29.790 256+0 records in 00:12:29.790 256+0 records out 00:12:29.790 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154212 s, 6.8 MB/s 00:12:29.790 18:39:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:29.790 18:39:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:29.790 256+0 records in 00:12:29.790 256+0 records out 00:12:29.790 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153447 s, 6.8 MB/s 00:12:29.790 18:39:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:29.790 18:39:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:30.049 256+0 records in 00:12:30.049 256+0 records out 00:12:30.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152888 s, 6.9 MB/s 00:12:30.049 18:39:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:30.049 18:39:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:30.307 256+0 records in 00:12:30.307 256+0 records out 00:12:30.307 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152655 s, 6.9 MB/s 00:12:30.307 18:39:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:30.307 18:39:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:30.307 256+0 records in 00:12:30.307 256+0 records out 00:12:30.307 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153239 s, 6.8 MB/s 00:12:30.307 18:39:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:30.307 
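The JSON disk listing and the count check earlier in this block come from the nbd_get_count helper: the SPDK RPC server is asked for the current NBD exports, the device paths are pulled out with jq, and grep -c turns them into a count that is compared against the expected 16. A minimal sketch of that pattern, reusing the rpc.py path and socket that appear in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    disks_json=$("$rpc" -s "$sock" nbd_get_disks)                 # JSON array of {nbd_device, bdev_name}
    disk_names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')  # one /dev/nbdN per line
    count=$(echo "$disk_names" | grep -c /dev/nbd)                # 16 while all exports are up
    [ "$count" -ne 16 ] && echo "unexpected NBD count: $count"    # the test bails out on a mismatch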
18:39:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:12:30.565 256+0 records in 00:12:30.565 256+0 records out 00:12:30.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152343 s, 6.9 MB/s 00:12:30.565 18:39:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:30.565 18:39:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:12:30.823 256+0 records in 00:12:30.823 256+0 records out 00:12:30.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153706 s, 6.8 MB/s 00:12:30.823 18:39:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:30.823 18:39:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:12:30.823 256+0 records in 00:12:30.823 256+0 records out 00:12:30.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152705 s, 6.9 MB/s 00:12:30.823 18:39:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:30.823 18:39:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:12:31.082 256+0 records in 00:12:31.082 256+0 records out 00:12:31.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150791 s, 7.0 MB/s 00:12:31.082 18:39:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:31.082 18:39:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:12:31.082 256+0 records in 00:12:31.082 256+0 records out 00:12:31.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152464 s, 6.9 MB/s 00:12:31.082 18:39:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:31.082 18:39:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:12:31.340 256+0 records in 00:12:31.340 256+0 records out 00:12:31.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153059 s, 6.9 MB/s 00:12:31.340 18:39:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:31.340 18:39:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:12:31.598 256+0 records in 00:12:31.598 256+0 records out 00:12:31.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15344 s, 6.8 MB/s 00:12:31.598 18:39:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:31.598 18:39:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:12:31.598 256+0 records in 00:12:31.598 256+0 records out 00:12:31.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155068 s, 6.8 MB/s 00:12:31.598 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:31.598 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:12:31.857 256+0 records in 00:12:31.857 
256+0 records out 00:12:31.857 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159399 s, 6.6 MB/s 00:12:31.857 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:31.857 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:12:32.115 256+0 records in 00:12:32.115 256+0 records out 00:12:32.115 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.186617 s, 5.6 MB/s 00:12:32.115 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:12:32.115 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:32.115 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:32.115 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:32.115 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:32.115 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:32.115 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:32.115 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:32.115 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:32.115 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:32.115 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:32.115 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:32.115 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:32.115 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:32.116 18:39:32 
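The dd writes above and the cmp commands that follow are the two halves of nbd_dd_data_verify: a 1 MiB file of random data is generated once, copied onto every exported device with O_DIRECT, and each device is then compared byte-for-byte against the same file. A condensed sketch of that round trip, assuming the temp-file path shown in the trace:

    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256                # seed 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct     # write it through each NBD device
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"                                # any mismatch makes cmp exit non-zero
    done
    rm "$tmp"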
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:32.116 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:32.374 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:12:32.374 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:32.374 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:32.374 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:32.374 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:32.374 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:32.374 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:32.374 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:32.374 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:32.374 18:39:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:32.633 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:32.891 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:32.891 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:32.891 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:32.891 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:32.891 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:32.891 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:32.891 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:32.891 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:32.891 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:33.148 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:33.148 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:33.148 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:33.148 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.148 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.148 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:33.148 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:33.148 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.148 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.148 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:33.406 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:33.406 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:33.406 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:33.406 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.406 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.406 18:39:33 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:33.406 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:33.406 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.406 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.406 18:39:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:33.665 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:33.665 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:33.665 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:33.665 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.665 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.665 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:33.665 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:33.665 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.665 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.665 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:33.923 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:33.923 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:33.923 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:33.923 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.923 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.923 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:33.923 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:33.923 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.923 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.923 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:34.182 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:34.182 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:34.182 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:34.182 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.182 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.182 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:34.182 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:34.182 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.182 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.182 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:34.439 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:34.439 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:34.439 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:34.439 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.439 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.439 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:34.439 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:34.439 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.439 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.439 18:39:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:34.439 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:34.698 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:34.698 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:34.698 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.698 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.698 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:34.698 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:34.698 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.698 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.698 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:34.957 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:34.957 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:34.957 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:34.957 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.957 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.957 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:34.957 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:34.957 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:34.957 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.957 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:35.215 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:35.215 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:35.215 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:35.215 18:39:35 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.215 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.215 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:35.215 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:35.215 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.215 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.215 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:35.215 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:35.215 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:35.215 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:35.215 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.215 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.215 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:35.215 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:35.215 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.215 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.215 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:35.473 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:35.473 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:35.473 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:35.473 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.473 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.473 18:39:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:35.473 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:35.473 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.473 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.473 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:35.731 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:35.731 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:35.731 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:35.731 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.731 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.732 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:35.732 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:35.732 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.732 18:39:36 
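Each teardown step above repeats the same stop-and-wait pattern: nbd_stop_disk is sent over the RPC socket, then /proc/partitions is polled until the kernel node disappears, giving up after 20 attempts. A minimal sketch of that helper, with the delay between polls assumed since it is not visible in the trace:

    nbd_stop_and_wait() {                                         # sketch of nbd_stop_disks + waitfornbd_exit
        local dev=$1 name
        name=$(basename "$dev")
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break          # gone from the kernel: done
            sleep 0.1                                             # assumed back-off between polls
        done
    }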
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.732 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:35.990 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:35.990 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:35.990 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:35.990 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.990 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.990 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:35.990 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:35.990 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.990 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.990 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:36.248 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:36.248 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:36.248 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:36.248 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.248 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.248 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:36.248 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:36.248 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.248 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:36.248 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.248 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:36.506 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:36.506 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:36.506 18:39:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:36.506 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:36.506 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:36.506 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:36.506 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:36.506 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:36.506 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:36.506 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:36.506 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:36.506 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:36.506 18:39:37 
blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:36.506 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.507 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:36.507 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:36.507 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:36.507 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:36.765 malloc_lvol_verify 00:12:36.765 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:37.023 25e76f14-6f11-4088-a032-11f03ffb3086 00:12:37.023 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:37.281 20386968-a438-4426-96c5-7e9ebf54bf7e 00:12:37.281 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:37.539 /dev/nbd0 00:12:37.539 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:37.539 mke2fs 1.46.5 (30-Dec-2021) 00:12:37.539 00:12:37.539 Filesystem too small for a journal 00:12:37.539 Discarding device blocks: 0/1024 done 00:12:37.539 Creating filesystem with 1024 4k blocks and 1024 inodes 00:12:37.539 00:12:37.539 Allocating group tables: 0/1 done 00:12:37.539 Writing inode tables: 0/1 done 00:12:37.539 Writing superblocks and filesystem accounting information: 0/1 done 00:12:37.540 00:12:37.540 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:37.540 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:37.540 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:37.540 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:37.540 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:37.540 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:37.540 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.540 18:39:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:37.798 18:39:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:37.798 18:39:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:37.798 18:39:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:37.798 18:39:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- 
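The lvol verification above builds a small logical-volume stack on top of a fresh malloc bdev, exports it over NBD and formats it, which is enough to prove the exported device accepts real filesystem I/O. A condensed sketch of the RPC sequence, with the sizes taken from the trace (a 16 MiB malloc bdev with 512-byte blocks, carrying a 4 MiB lvol):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512          # 16 MiB backing bdev, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs          # lvstore on top of it
    $rpc bdev_lvol_create lvol 4 -l lvs                           # 4 MiB logical volume
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                        # export it as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                           # format it to exercise real I/O
    $rpc nbd_stop_disk /dev/nbd0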
# (( i = 1 )) 00:12:37.798 18:39:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.798 18:39:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:37.798 18:39:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:37.798 18:39:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.798 18:39:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:37.798 18:39:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:12:37.798 18:39:38 blockdev_general.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 116245 00:12:37.798 18:39:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 116245 ']' 00:12:37.798 18:39:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 116245 00:12:37.798 18:39:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:12:37.798 18:39:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:37.798 18:39:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 116245 00:12:37.798 18:39:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:37.798 18:39:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:37.798 killing process with pid 116245 00:12:37.798 18:39:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 116245' 00:12:37.798 18:39:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@969 -- # kill 116245 00:12:37.798 18:39:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@974 -- # wait 116245 00:12:40.327 18:39:40 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:12:40.327 00:12:40.327 real 0m25.985s 00:12:40.327 user 0m32.961s 00:12:40.327 sys 0m11.630s 00:12:40.327 18:39:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:40.327 18:39:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:40.327 ************************************ 00:12:40.327 END TEST bdev_nbd 00:12:40.327 ************************************ 00:12:40.327 18:39:40 blockdev_general -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:12:40.327 18:39:40 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = nvme ']' 00:12:40.327 18:39:40 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = gpt ']' 00:12:40.327 18:39:40 blockdev_general -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:12:40.327 18:39:40 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:40.327 18:39:40 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:40.327 18:39:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:40.327 ************************************ 00:12:40.327 START TEST bdev_fio 00:12:40.327 ************************************ 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:40.327 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm 
-f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc0]' 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc0 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p0]' 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p0 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p1]' 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p1 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p0]' 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p0 00:12:40.327 18:39:40 
blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p1]' 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p1 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p2]' 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p2 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p3]' 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p3 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p4]' 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p4 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p5]' 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p5 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p6]' 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p6 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:40.327 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p7]' 00:12:40.328 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p7 00:12:40.328 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:40.328 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_TestPT]' 00:12:40.328 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=TestPT 00:12:40.328 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:40.328 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid0]' 00:12:40.328 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid0 00:12:40.328 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:40.328 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_concat0]' 00:12:40.328 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=concat0 00:12:40.328 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:40.328 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid1]' 00:12:40.328 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid1 00:12:40.328 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:40.328 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_AIO0]' 00:12:40.328 18:39:40 
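The echo pairs above generate one fio job section per bdev; together with the verify-workload header produced by fio_config_gen they make up bdev.fio. A sketch of the per-bdev part of that generation, assuming the stanzas are appended to the config file (the redirection itself does not show up in the xtrace output):

    for b in "${bdevs_name[@]}"; do                               # Malloc0, Malloc1p0, ..., raid1, AIO0
        {
            echo "[job_$b]"
            echo "filename=$b"
        } >> /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
    done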
blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=AIO0 00:12:40.328 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:40.328 18:39:40 blockdev_general.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:40.328 18:39:40 blockdev_general.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:12:40.328 18:39:40 blockdev_general.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:40.328 18:39:40 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:40.328 ************************************ 00:12:40.328 START TEST bdev_fio_rw_verify 00:12:40.328 ************************************ 00:12:40.328 18:39:40 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:40.328 18:39:40 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:40.328 18:39:40 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:40.328 18:39:40 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:40.328 18:39:40 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:40.328 18:39:40 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:40.328 18:39:40 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:12:40.328 18:39:40 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:40.328 18:39:40 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:40.328 18:39:40 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:40.328 18:39:40 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:12:40.328 18:39:40 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:40.328 18:39:40 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:12:40.328 18:39:40 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 
-- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:12:40.328 18:39:40 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:12:40.328 18:39:40 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:40.328 18:39:40 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:40.328 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:40.328 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:40.328 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:40.328 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:40.328 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:40.328 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:40.328 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:40.328 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:40.328 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:40.328 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:40.328 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:40.328 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:40.328 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:40.328 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:40.328 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:40.328 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:40.328 fio-3.35 00:12:40.328 Starting 16 threads 00:12:52.540 00:12:52.540 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=117423: Thu Jul 25 18:39:52 2024 00:12:52.540 read: IOPS=68.5k, BW=268MiB/s (281MB/s)(2676MiB/10001msec) 00:12:52.540 slat (nsec): min=1903, max=52011k, avg=42485.90, stdev=481422.95 00:12:52.540 clat (usec): min=7, max=52283, avg=347.67, stdev=1391.60 00:12:52.540 lat (usec): min=21, max=52287, avg=390.15, stdev=1472.20 00:12:52.540 clat percentiles (usec): 00:12:52.540 | 50.000th=[ 204], 99.000th=[ 1778], 99.900th=[16450], 99.990th=[28181], 00:12:52.540 | 99.999th=[52167] 00:12:52.540 write: IOPS=108k, BW=421MiB/s (441MB/s)(4169MiB/9910msec); 0 zone resets 00:12:52.540 
slat (usec): min=8, max=64065, avg=74.28, stdev=725.92 00:12:52.540 clat (usec): min=6, max=64361, avg=439.16, stdev=1662.01 00:12:52.540 lat (usec): min=27, max=64397, avg=513.44, stdev=1813.93 00:12:52.540 clat percentiles (usec): 00:12:52.540 | 50.000th=[ 249], 99.000th=[ 8717], 99.900th=[20579], 99.990th=[39060], 00:12:52.540 | 99.999th=[54264] 00:12:52.540 bw ( KiB/s): min=248792, max=673312, per=98.06%, avg=422420.42, stdev=7309.89, samples=304 00:12:52.540 iops : min=62198, max=168328, avg=105605.00, stdev=1827.48, samples=304 00:12:52.540 lat (usec) : 10=0.01%, 20=0.01%, 50=0.66%, 100=7.54%, 250=48.24% 00:12:52.540 lat (usec) : 500=39.28%, 750=2.43%, 1000=0.28% 00:12:52.540 lat (msec) : 2=0.32%, 4=0.12%, 10=0.26%, 20=0.76%, 50=0.11% 00:12:52.540 lat (msec) : 100=0.01% 00:12:52.540 cpu : usr=55.70%, sys=2.02%, ctx=255697, majf=2, minf=74014 00:12:52.540 IO depths : 1=11.1%, 2=23.5%, 4=52.2%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:52.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.540 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.540 issued rwts: total=685052,1067271,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:52.540 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:52.540 00:12:52.540 Run status group 0 (all jobs): 00:12:52.540 READ: bw=268MiB/s (281MB/s), 268MiB/s-268MiB/s (281MB/s-281MB/s), io=2676MiB (2806MB), run=10001-10001msec 00:12:52.540 WRITE: bw=421MiB/s (441MB/s), 421MiB/s-421MiB/s (441MB/s-441MB/s), io=4169MiB (4372MB), run=9910-9910msec 00:12:54.562 ----------------------------------------------------- 00:12:54.562 Suppressions used: 00:12:54.562 count bytes template 00:12:54.562 16 140 /usr/src/fio/parse.c 00:12:54.562 11322 1086912 /usr/src/fio/iolog.c 00:12:54.562 1 904 libcrypto.so 00:12:54.562 ----------------------------------------------------- 00:12:54.562 00:12:54.823 00:12:54.823 real 0m14.663s 00:12:54.823 user 1m34.990s 00:12:54.823 sys 0m4.406s 00:12:54.823 18:39:55 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:54.823 18:39:55 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:12:54.823 ************************************ 00:12:54.823 END TEST bdev_fio_rw_verify 00:12:54.823 ************************************ 00:12:54.823 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:12:54.823 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:54.823 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:54.823 18:39:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:54.823 18:39:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:12:54.823 18:39:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:12:54.823 18:39:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:12:54.823 18:39:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:12:54.823 18:39:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:54.824 18:39:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # 
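The read/write throughput summary above comes from a single fio invocation that drives all sixteen jobs through the SPDK bdev ioengine, with ASan preloaded alongside the fio plugin. The command line, reassembled from the fio_params and LD_PRELOAD lines in the trace:

    LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output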
'[' -z trim ']' 00:12:54.824 18:39:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:12:54.824 18:39:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:54.824 18:39:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:12:54.824 18:39:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:12:54.824 18:39:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:12:54.824 18:39:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:12:54.824 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:54.825 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "52c61ca7-1075-4064-91e3-15eaaf0dcdde"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "52c61ca7-1075-4064-91e3-15eaaf0dcdde",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "242a2af4-6ebf-557c-9244-9b0f53b4b62d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "242a2af4-6ebf-557c-9244-9b0f53b4b62d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "1f5e17c1-6262-529a-b6ba-4fe54cfcea64"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "1f5e17c1-6262-529a-b6ba-4fe54cfcea64",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' 
' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "c869e3e9-18ba-5e76-b6db-fe4801adf807"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c869e3e9-18ba-5e76-b6db-fe4801adf807",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "88967992-8eab-57db-906b-1dd8747517bc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "88967992-8eab-57db-906b-1dd8747517bc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "66753044-af15-5910-a7d4-ecf4aa576518"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "66753044-af15-5910-a7d4-ecf4aa576518",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "4bdc699c-e210-5e33-8c43-bf51682fedfc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4bdc699c-e210-5e33-8c43-bf51682fedfc",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "9cddb254-3f6c-58cc-a12d-7eb4a447c231"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9cddb254-3f6c-58cc-a12d-7eb4a447c231",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "4db4b2fa-e33c-51e6-8973-0ba133f73e9f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4db4b2fa-e33c-51e6-8973-0ba133f73e9f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "b1f233be-b5db-56c8-bb8e-4638dc1fb139"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b1f233be-b5db-56c8-bb8e-4638dc1fb139",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "121f017d-9d04-5148-a371-6eda2a650486"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "121f017d-9d04-5148-a371-6eda2a650486",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "2d4a7067-69e6-52b2-963e-95b2050f9992"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2d4a7067-69e6-52b2-963e-95b2050f9992",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "abd5b98d-34a0-4149-a02a-aeb56dd4cd7f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "abd5b98d-34a0-4149-a02a-aeb56dd4cd7f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "abd5b98d-34a0-4149-a02a-aeb56dd4cd7f",' ' "strip_size_kb": 64,' ' "state": "online",' ' 
"raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "4a5b3a3b-9fdc-407b-a3a1-221f8a569fec",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "df345052-567b-40ac-9d4e-9be0e7f494f5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "6f5dc2b2-33be-4b8d-9e55-bee079227419"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6f5dc2b2-33be-4b8d-9e55-bee079227419",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "6f5dc2b2-33be-4b8d-9e55-bee079227419",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "94b1637d-df7f-4aaa-bceb-e586480f1475",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "a9002dc1-0994-4ea6-aa4d-2e238e16b513",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "ca16d4ac-f761-48c3-82df-2ceafee19fa2"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ca16d4ac-f761-48c3-82df-2ceafee19fa2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": 
"ca16d4ac-f761-48c3-82df-2ceafee19fa2",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "77ce8126-7843-4046-8e13-400bab53b9a9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "ec8e43a9-169b-4478-b9d6-44ebbafabc20",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "faed24f7-f832-454a-b0ae-25bf1d490be8"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "faed24f7-f832-454a-b0ae-25bf1d490be8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:12:54.825 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Malloc0 00:12:54.825 Malloc1p0 00:12:54.825 Malloc1p1 00:12:54.825 Malloc2p0 00:12:54.825 Malloc2p1 00:12:54.825 Malloc2p2 00:12:54.825 Malloc2p3 00:12:54.825 Malloc2p4 00:12:54.825 Malloc2p5 00:12:54.825 Malloc2p6 00:12:54.825 Malloc2p7 00:12:54.825 TestPT 00:12:54.825 raid0 00:12:54.825 concat0 ]] 00:12:54.825 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "52c61ca7-1075-4064-91e3-15eaaf0dcdde"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "52c61ca7-1075-4064-91e3-15eaaf0dcdde",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "242a2af4-6ebf-557c-9244-9b0f53b4b62d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "242a2af4-6ebf-557c-9244-9b0f53b4b62d",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "1f5e17c1-6262-529a-b6ba-4fe54cfcea64"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "1f5e17c1-6262-529a-b6ba-4fe54cfcea64",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "c869e3e9-18ba-5e76-b6db-fe4801adf807"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c869e3e9-18ba-5e76-b6db-fe4801adf807",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "88967992-8eab-57db-906b-1dd8747517bc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "88967992-8eab-57db-906b-1dd8747517bc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": 
false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "66753044-af15-5910-a7d4-ecf4aa576518"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "66753044-af15-5910-a7d4-ecf4aa576518",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "4bdc699c-e210-5e33-8c43-bf51682fedfc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4bdc699c-e210-5e33-8c43-bf51682fedfc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "9cddb254-3f6c-58cc-a12d-7eb4a447c231"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9cddb254-3f6c-58cc-a12d-7eb4a447c231",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "4db4b2fa-e33c-51e6-8973-0ba133f73e9f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4db4b2fa-e33c-51e6-8973-0ba133f73e9f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": 
true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "b1f233be-b5db-56c8-bb8e-4638dc1fb139"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b1f233be-b5db-56c8-bb8e-4638dc1fb139",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "121f017d-9d04-5148-a371-6eda2a650486"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "121f017d-9d04-5148-a371-6eda2a650486",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "2d4a7067-69e6-52b2-963e-95b2050f9992"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2d4a7067-69e6-52b2-963e-95b2050f9992",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' 
' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "abd5b98d-34a0-4149-a02a-aeb56dd4cd7f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "abd5b98d-34a0-4149-a02a-aeb56dd4cd7f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "abd5b98d-34a0-4149-a02a-aeb56dd4cd7f",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "4a5b3a3b-9fdc-407b-a3a1-221f8a569fec",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "df345052-567b-40ac-9d4e-9be0e7f494f5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "6f5dc2b2-33be-4b8d-9e55-bee079227419"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6f5dc2b2-33be-4b8d-9e55-bee079227419",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "6f5dc2b2-33be-4b8d-9e55-bee079227419",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "94b1637d-df7f-4aaa-bceb-e586480f1475",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "a9002dc1-0994-4ea6-aa4d-2e238e16b513",' ' 
"is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "ca16d4ac-f761-48c3-82df-2ceafee19fa2"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ca16d4ac-f761-48c3-82df-2ceafee19fa2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ca16d4ac-f761-48c3-82df-2ceafee19fa2",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "77ce8126-7843-4046-8e13-400bab53b9a9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "ec8e43a9-169b-4478-b9d6-44ebbafabc20",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "faed24f7-f832-454a-b0ae-25bf1d490be8"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "faed24f7-f832-454a-b0ae-25bf1d490be8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc0]' 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc0 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:54.827 18:39:55 
blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p0]' 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p0 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p1]' 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p1 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p0]' 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p0 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p1]' 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p1 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p2]' 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p2 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p3]' 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p3 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p4]' 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p4 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p5]' 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p5 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p6]' 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p6 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p7]' 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- 
# echo filename=Malloc2p7 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_TestPT]' 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=TestPT 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_raid0]' 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=raid0 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_concat0]' 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=concat0 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:54.827 18:39:55 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:54.827 ************************************ 00:12:54.827 START TEST bdev_fio_trim 00:12:54.827 ************************************ 00:12:54.827 18:39:55 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:54.827 18:39:55 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:54.827 18:39:55 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:54.827 18:39:55 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:54.827 18:39:55 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:54.827 18:39:55 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:54.827 18:39:55 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:12:54.827 18:39:55 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:54.827 18:39:55 
blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:54.827 18:39:55 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:54.827 18:39:55 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:54.827 18:39:55 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:12:54.827 18:39:55 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:12:54.827 18:39:55 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:12:54.827 18:39:55 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:12:54.827 18:39:55 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:54.827 18:39:55 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:55.086 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.086 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.086 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.086 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.086 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.086 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.086 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.086 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.086 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.086 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.086 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.087 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.087 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.087 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.087 fio-3.35 00:12:55.087 Starting 14 threads 00:13:07.290 00:13:07.290 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=117655: Thu Jul 25 18:40:07 2024 00:13:07.290 write: IOPS=130k, BW=508MiB/s 
(533MB/s)(5089MiB/10011msec); 0 zone resets 00:13:07.290 slat (usec): min=2, max=28042, avg=37.63, stdev=391.75 00:13:07.290 clat (usec): min=14, max=28333, avg=272.85, stdev=1082.46 00:13:07.290 lat (usec): min=31, max=28361, avg=310.48, stdev=1150.60 00:13:07.290 clat percentiles (usec): 00:13:07.290 | 50.000th=[ 184], 99.000th=[ 449], 99.900th=[16319], 99.990th=[20055], 00:13:07.290 | 99.999th=[24249] 00:13:07.290 bw ( KiB/s): min=346696, max=774320, per=99.78%, avg=519381.98, stdev=9181.83, samples=267 00:13:07.290 iops : min=86674, max=193580, avg=129845.60, stdev=2295.46, samples=267 00:13:07.290 trim: IOPS=130k, BW=508MiB/s (533MB/s)(5089MiB/10011msec); 0 zone resets 00:13:07.290 slat (usec): min=4, max=24047, avg=26.47, stdev=325.01 00:13:07.290 clat (usec): min=3, max=28361, avg=298.68, stdev=1106.96 00:13:07.290 lat (usec): min=11, max=28382, avg=325.15, stdev=1153.48 00:13:07.290 clat percentiles (usec): 00:13:07.290 | 50.000th=[ 208], 99.000th=[ 441], 99.900th=[16319], 99.990th=[20317], 00:13:07.290 | 99.999th=[24249] 00:13:07.290 bw ( KiB/s): min=346696, max=774264, per=99.78%, avg=519384.08, stdev=9181.36, samples=267 00:13:07.290 iops : min=86674, max=193566, avg=129846.02, stdev=2295.33, samples=267 00:13:07.290 lat (usec) : 4=0.01%, 10=0.04%, 20=0.14%, 50=0.83%, 100=6.19% 00:13:07.290 lat (usec) : 250=66.44%, 500=25.58%, 750=0.13%, 1000=0.04% 00:13:07.290 lat (msec) : 2=0.01%, 4=0.01%, 10=0.03%, 20=0.57%, 50=0.01% 00:13:07.290 cpu : usr=69.43%, sys=0.45%, ctx=173248, majf=0, minf=771 00:13:07.290 IO depths : 1=12.4%, 2=24.8%, 4=50.1%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:07.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.290 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.290 issued rwts: total=0,1302775,1302778,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.290 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:07.290 00:13:07.290 Run status group 0 (all jobs): 00:13:07.290 WRITE: bw=508MiB/s (533MB/s), 508MiB/s-508MiB/s (533MB/s-533MB/s), io=5089MiB (5336MB), run=10011-10011msec 00:13:07.290 TRIM: bw=508MiB/s (533MB/s), 508MiB/s-508MiB/s (533MB/s-533MB/s), io=5089MiB (5336MB), run=10011-10011msec 00:13:09.195 ----------------------------------------------------- 00:13:09.195 Suppressions used: 00:13:09.195 count bytes template 00:13:09.195 14 129 /usr/src/fio/parse.c 00:13:09.195 1 904 libcrypto.so 00:13:09.195 ----------------------------------------------------- 00:13:09.195 00:13:09.454 00:13:09.454 real 0m14.433s 00:13:09.454 user 1m42.798s 00:13:09.454 sys 0m1.554s 00:13:09.454 18:40:09 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:09.454 18:40:09 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:13:09.454 ************************************ 00:13:09.454 END TEST bdev_fio_trim 00:13:09.454 ************************************ 00:13:09.454 18:40:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:13:09.454 18:40:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:09.454 /home/vagrant/spdk_repo/spdk 00:13:09.454 18:40:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:13:09.454 18:40:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:13:09.454 00:13:09.454 real 0m29.482s 00:13:09.454 user 3m17.972s 00:13:09.454 sys 0m6.145s 00:13:09.454 18:40:09 
blockdev_general.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:09.454 18:40:09 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:09.454 ************************************ 00:13:09.454 END TEST bdev_fio 00:13:09.454 ************************************ 00:13:09.454 18:40:09 blockdev_general -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:09.454 18:40:09 blockdev_general -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:09.454 18:40:09 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:09.454 18:40:09 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:09.454 18:40:09 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:09.454 ************************************ 00:13:09.454 START TEST bdev_verify 00:13:09.454 ************************************ 00:13:09.454 18:40:09 blockdev_general.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:09.454 [2024-07-25 18:40:09.998602] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:09.454 [2024-07-25 18:40:09.998774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117847 ] 00:13:09.713 [2024-07-25 18:40:10.167093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:09.971 [2024-07-25 18:40:10.406156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.971 [2024-07-25 18:40:10.406156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.538 [2024-07-25 18:40:10.911977] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:10.538 [2024-07-25 18:40:10.912084] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:10.538 [2024-07-25 18:40:10.919911] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:10.538 [2024-07-25 18:40:10.919968] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:10.538 [2024-07-25 18:40:10.927926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:10.538 [2024-07-25 18:40:10.928032] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:10.538 [2024-07-25 18:40:10.928058] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:10.796 [2024-07-25 18:40:11.179562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:10.796 [2024-07-25 18:40:11.179664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.796 [2024-07-25 18:40:11.179709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:10.796 [2024-07-25 18:40:11.179734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.796 [2024-07-25 18:40:11.182531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
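A note on the job file driven by the bdev_fio stages above: the script keeps only unmap-capable bdevs via the jq filter shown in the trace and appends one [job_<name>] section per bdev to bdev.fio before handing the file to fio through the SPDK bdev ioengine. A minimal sketch of that construction follows, assuming a $bdev_json variable that holds the bdev dump; the variable name and redirections are illustrative, not taken from this run.

# Sketch only: mirrors the trace above, not the literal test script.
FIO_JOB=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
echo "rw=trimwrite" >> "$FIO_JOB"                                   # trim jobs mix trim and write
for b in $(jq -r 'select(.supported_io_types.unmap == true) | .name' <<<"$bdev_json"); do
  printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> "$FIO_JOB"          # one job section per unmap-capable bdev
done
# fio then runs through the SPDK plugin, preloading ASan as seen in the LD_PRELOAD line above:
# /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 "$FIO_JOB" \
#   --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json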
00:13:10.796 [2024-07-25 18:40:11.182579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:11.363 Running I/O for 5 seconds... 00:13:16.632 00:13:16.632 Latency(us) 00:13:16.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.632 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x0 length 0x1000 00:13:16.632 Malloc0 : 5.15 1392.51 5.44 0.00 0.00 91777.93 635.86 325557.88 00:13:16.632 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x1000 length 0x1000 00:13:16.632 Malloc0 : 5.15 1366.59 5.34 0.00 0.00 93514.51 635.86 365503.63 00:13:16.632 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x0 length 0x800 00:13:16.632 Malloc1p0 : 5.15 720.86 2.82 0.00 0.00 176821.69 2933.52 184749.10 00:13:16.632 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x800 length 0x800 00:13:16.632 Malloc1p0 : 5.15 720.11 2.81 0.00 0.00 176999.63 2902.31 185747.75 00:13:16.632 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x0 length 0x800 00:13:16.632 Malloc1p1 : 5.15 720.55 2.81 0.00 0.00 176530.26 2855.50 181753.17 00:13:16.632 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x800 length 0x800 00:13:16.632 Malloc1p1 : 5.16 719.65 2.81 0.00 0.00 176749.21 2839.89 181753.17 00:13:16.632 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x0 length 0x200 00:13:16.632 Malloc2p0 : 5.15 720.10 2.81 0.00 0.00 176274.95 2886.70 177758.60 00:13:16.632 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x200 length 0x200 00:13:16.632 Malloc2p0 : 5.16 719.18 2.81 0.00 0.00 176503.83 2917.91 178757.24 00:13:16.632 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x0 length 0x200 00:13:16.632 Malloc2p1 : 5.16 719.64 2.81 0.00 0.00 176021.17 2855.50 174762.67 00:13:16.632 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x200 length 0x200 00:13:16.632 Malloc2p1 : 5.16 718.71 2.81 0.00 0.00 176257.95 2886.70 175761.31 00:13:16.632 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x0 length 0x200 00:13:16.632 Malloc2p2 : 5.16 719.17 2.81 0.00 0.00 175785.68 2886.70 170768.09 00:13:16.632 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x200 length 0x200 00:13:16.632 Malloc2p2 : 5.17 718.26 2.81 0.00 0.00 176008.10 2871.10 171766.74 00:13:16.632 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x0 length 0x200 00:13:16.632 Malloc2p3 : 5.16 718.70 2.81 0.00 0.00 175539.45 2761.87 168770.80 00:13:16.632 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x200 length 0x200 00:13:16.632 Malloc2p3 : 5.17 717.80 2.80 0.00 0.00 175755.93 2855.50 168770.80 00:13:16.632 Job: Malloc2p4 (Core Mask 0x1, 
workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x0 length 0x200 00:13:16.632 Malloc2p4 : 5.17 718.25 2.81 0.00 0.00 175292.39 2839.89 163777.58 00:13:16.632 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x200 length 0x200 00:13:16.632 Malloc2p4 : 5.17 717.33 2.80 0.00 0.00 175510.17 2839.89 164776.23 00:13:16.632 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x0 length 0x200 00:13:16.632 Malloc2p5 : 5.17 717.79 2.80 0.00 0.00 175037.87 2871.10 160781.65 00:13:16.632 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x200 length 0x200 00:13:16.632 Malloc2p5 : 5.18 716.88 2.80 0.00 0.00 175252.56 2871.10 161780.30 00:13:16.632 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x0 length 0x200 00:13:16.632 Malloc2p6 : 5.17 717.32 2.80 0.00 0.00 174797.84 2824.29 157785.72 00:13:16.632 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x200 length 0x200 00:13:16.632 Malloc2p6 : 5.18 716.47 2.80 0.00 0.00 175009.10 2839.89 158784.37 00:13:16.632 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x0 length 0x200 00:13:16.632 Malloc2p7 : 5.18 716.87 2.80 0.00 0.00 174550.86 2871.10 154789.79 00:13:16.632 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x200 length 0x200 00:13:16.632 Malloc2p7 : 5.18 716.23 2.80 0.00 0.00 174715.64 2855.50 154789.79 00:13:16.632 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x0 length 0x1000 00:13:16.632 TestPT : 5.23 710.03 2.77 0.00 0.00 175528.19 14168.26 153791.15 00:13:16.632 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x1000 length 0x1000 00:13:16.632 TestPT : 5.23 690.20 2.70 0.00 0.00 179650.28 14542.75 229688.08 00:13:16.632 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x0 length 0x2000 00:13:16.632 raid0 : 5.24 732.64 2.86 0.00 0.00 169892.78 3027.14 136814.20 00:13:16.632 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x2000 length 0x2000 00:13:16.632 raid0 : 5.19 715.81 2.80 0.00 0.00 173835.07 2995.93 131820.98 00:13:16.632 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x0 length 0x2000 00:13:16.632 concat0 : 5.24 732.36 2.86 0.00 0.00 169582.64 3042.74 131820.98 00:13:16.632 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x2000 length 0x2000 00:13:16.632 concat0 : 5.24 733.03 2.86 0.00 0.00 169417.10 3027.14 127327.09 00:13:16.632 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x0 length 0x1000 00:13:16.632 raid1 : 5.24 732.15 2.86 0.00 0.00 169236.38 3729.31 126328.44 00:13:16.632 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x1000 length 0x1000 00:13:16.632 raid1 : 5.24 732.76 
2.86 0.00 0.00 169086.41 3729.31 123332.51 00:13:16.632 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x0 length 0x4e2 00:13:16.632 AIO0 : 5.25 731.80 2.86 0.00 0.00 168559.14 2559.02 137812.85 00:13:16.632 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:16.632 Verification LBA range: start 0x4e2 length 0x4e2 00:13:16.632 AIO0 : 5.24 732.39 2.86 0.00 0.00 168418.50 2699.46 139810.13 00:13:16.632 =================================================================================================================== 00:13:16.632 Total : 24372.12 95.20 0.00 0.00 165065.95 635.86 365503.63 00:13:19.164 00:13:19.164 real 0m9.813s 00:13:19.164 user 0m16.624s 00:13:19.164 sys 0m0.722s 00:13:19.164 18:40:19 blockdev_general.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:19.164 18:40:19 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:19.164 ************************************ 00:13:19.164 END TEST bdev_verify 00:13:19.164 ************************************ 00:13:19.423 18:40:19 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:19.423 18:40:19 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:19.423 18:40:19 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:19.423 18:40:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:19.423 ************************************ 00:13:19.423 START TEST bdev_verify_big_io 00:13:19.423 ************************************ 00:13:19.423 18:40:19 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:19.423 [2024-07-25 18:40:19.886680] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
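The bdev_verify_big_io step launched above is the same bdevperf harness as the 4 KiB verify pass that just finished, rerun with a 64 KiB I/O size against the same JSON-defined bdev stack and pinned to two cores. A minimal sketch of invoking that step by hand, reusing the exact arguments from the trace (the repository path is specific to this vagrant test environment):

  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 65536 -w verify -t 5 -C -m 0x3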
00:13:19.423 [2024-07-25 18:40:19.886918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117986 ] 00:13:19.682 [2024-07-25 18:40:20.074574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:19.940 [2024-07-25 18:40:20.307933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.940 [2024-07-25 18:40:20.307933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.508 [2024-07-25 18:40:20.795654] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:20.508 [2024-07-25 18:40:20.795743] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:20.508 [2024-07-25 18:40:20.803599] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:20.508 [2024-07-25 18:40:20.803647] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:20.508 [2024-07-25 18:40:20.811631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:20.508 [2024-07-25 18:40:20.811725] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:20.508 [2024-07-25 18:40:20.811767] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:20.508 [2024-07-25 18:40:21.068128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:20.508 [2024-07-25 18:40:21.068221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.508 [2024-07-25 18:40:21.068277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:20.508 [2024-07-25 18:40:21.068305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.508 [2024-07-25 18:40:21.071142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.508 [2024-07-25 18:40:21.071201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:21.075 [2024-07-25 18:40:21.524544] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:21.075 [2024-07-25 18:40:21.528718] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:21.075 [2024-07-25 18:40:21.533924] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:13:21.075 [2024-07-25 18:40:21.539019] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:13:21.075 [2024-07-25 18:40:21.543303] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:21.075 [2024-07-25 18:40:21.548317] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:21.075 [2024-07-25 18:40:21.552727] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:21.075 [2024-07-25 18:40:21.557757] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:21.075 [2024-07-25 18:40:21.562080] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:21.076 [2024-07-25 18:40:21.567273] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:21.076 [2024-07-25 18:40:21.571653] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:21.076 [2024-07-25 18:40:21.576754] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:21.076 [2024-07-25 18:40:21.580992] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:21.076 [2024-07-25 18:40:21.586136] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:21.076 [2024-07-25 18:40:21.591343] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:13:21.076 [2024-07-25 18:40:21.595750] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:13:21.335 [2024-07-25 18:40:21.705423] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:21.335 [2024-07-25 18:40:21.714214] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:21.335 Running I/O for 5 seconds... 00:13:27.903 00:13:27.903 Latency(us) 00:13:27.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.903 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x0 length 0x100 00:13:27.903 Malloc0 : 5.60 297.36 18.58 0.00 0.00 424539.91 670.96 1198372.57 00:13:27.903 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x100 length 0x100 00:13:27.903 Malloc0 : 5.61 296.63 18.54 0.00 0.00 425469.20 643.66 1406090.48 00:13:27.903 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x0 length 0x80 00:13:27.903 Malloc1p0 : 5.84 112.33 7.02 0.00 0.00 1079595.85 2933.52 1829515.46 00:13:27.903 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x80 length 0x80 00:13:27.903 Malloc1p0 : 5.90 92.19 5.76 0.00 0.00 1313866.91 2621.44 2005276.77 00:13:27.903 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x0 length 0x80 00:13:27.903 Malloc1p1 : 6.14 52.12 3.26 0.00 0.00 2233715.97 1302.92 3563161.11 00:13:27.903 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x80 length 0x80 00:13:27.903 Malloc1p1 : 6.12 54.91 3.43 0.00 0.00 2128306.55 1295.12 3291530.00 00:13:27.903 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x0 length 0x20 00:13:27.903 Malloc2p0 : 5.76 38.86 2.43 0.00 0.00 752946.49 624.15 1334188.13 00:13:27.903 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x20 length 0x20 00:13:27.903 Malloc2p0 : 5.76 41.63 2.60 0.00 0.00 703935.41 651.46 1174405.12 00:13:27.903 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x0 length 0x20 00:13:27.903 Malloc2p1 : 5.84 41.08 2.57 0.00 0.00 714952.06 604.65 1310220.68 00:13:27.903 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x20 length 0x20 00:13:27.903 Malloc2p1 : 5.77 41.62 2.60 0.00 0.00 699873.26 651.46 1158426.82 00:13:27.903 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x0 length 0x20 00:13:27.903 Malloc2p2 : 5.84 41.06 2.57 0.00 0.00 710661.16 608.55 1294242.38 00:13:27.903 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x20 length 0x20 00:13:27.903 Malloc2p2 : 5.77 41.61 2.60 0.00 0.00 695977.59 631.95 1142448.52 00:13:27.903 Job: Malloc2p3 (Core Mask 0x1, workload: verify, 
depth: 32, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x0 length 0x20 00:13:27.903 Malloc2p3 : 5.85 41.05 2.57 0.00 0.00 706550.79 608.55 1286253.23 00:13:27.903 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x20 length 0x20 00:13:27.903 Malloc2p3 : 5.84 43.82 2.74 0.00 0.00 662865.03 624.15 1126470.22 00:13:27.903 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x0 length 0x20 00:13:27.903 Malloc2p4 : 5.85 41.03 2.56 0.00 0.00 702951.76 651.46 1270274.93 00:13:27.903 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x20 length 0x20 00:13:27.903 Malloc2p4 : 5.84 43.80 2.74 0.00 0.00 659496.95 631.95 1110491.92 00:13:27.903 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x0 length 0x20 00:13:27.903 Malloc2p5 : 5.85 41.01 2.56 0.00 0.00 699219.65 604.65 1254296.62 00:13:27.903 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x20 length 0x20 00:13:27.903 Malloc2p5 : 5.85 43.78 2.74 0.00 0.00 655973.66 624.15 1094513.62 00:13:27.903 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x0 length 0x20 00:13:27.903 Malloc2p6 : 5.85 40.99 2.56 0.00 0.00 695623.96 608.55 1238318.32 00:13:27.903 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x20 length 0x20 00:13:27.903 Malloc2p6 : 5.85 43.76 2.74 0.00 0.00 652531.07 624.15 1086524.46 00:13:27.903 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x0 length 0x20 00:13:27.903 Malloc2p7 : 5.86 40.98 2.56 0.00 0.00 691730.26 592.94 1222340.02 00:13:27.903 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x20 length 0x20 00:13:27.903 Malloc2p7 : 5.85 43.74 2.73 0.00 0.00 648931.94 659.26 1070546.16 00:13:27.903 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x0 length 0x100 00:13:27.903 TestPT : 6.17 54.46 3.40 0.00 0.00 2012923.21 1170.29 3323486.60 00:13:27.903 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x100 length 0x100 00:13:27.903 TestPT : 6.16 52.61 3.29 0.00 0.00 2090546.77 75397.61 2812180.97 00:13:27.903 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x0 length 0x200 00:13:27.903 raid0 : 6.18 57.00 3.56 0.00 0.00 1893524.98 1302.92 3211638.49 00:13:27.903 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x200 length 0x200 00:13:27.903 raid0 : 6.08 60.53 3.78 0.00 0.00 1793065.82 1326.32 2971963.98 00:13:27.903 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x0 length 0x200 00:13:27.903 concat0 : 6.15 62.49 3.91 0.00 0.00 1697894.31 1302.92 3099790.38 00:13:27.903 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x200 length 0x200 00:13:27.903 concat0 : 6.12 65.33 4.08 0.00 0.00 1636755.52 
1295.12 2860115.87 00:13:27.903 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.903 Verification LBA range: start 0x0 length 0x100 00:13:27.903 raid1 : 6.17 76.47 4.78 0.00 0.00 1378405.12 1739.82 2987942.28 00:13:27.903 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.904 Verification LBA range: start 0x100 length 0x100 00:13:27.904 raid1 : 6.16 70.12 4.38 0.00 0.00 1503009.96 1771.03 2748267.76 00:13:27.904 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:13:27.904 Verification LBA range: start 0x0 length 0x4e 00:13:27.904 AIO0 : 6.18 77.55 4.85 0.00 0.00 814382.15 1630.60 1837504.61 00:13:27.904 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:13:27.904 Verification LBA range: start 0x4e length 0x4e 00:13:27.904 AIO0 : 6.16 85.00 5.31 0.00 0.00 745035.93 1014.25 1637775.85 00:13:27.904 =================================================================================================================== 00:13:27.904 Total : 2236.93 139.81 0.00 0.00 990033.73 592.94 3563161.11 00:13:31.249 00:13:31.249 real 0m11.299s 00:13:31.249 user 0m20.424s 00:13:31.249 sys 0m0.750s 00:13:31.249 18:40:31 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:31.249 18:40:31 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.249 ************************************ 00:13:31.249 END TEST bdev_verify_big_io 00:13:31.249 ************************************ 00:13:31.249 18:40:31 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:31.249 18:40:31 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:31.249 18:40:31 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:31.249 18:40:31 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:31.249 ************************************ 00:13:31.249 START TEST bdev_write_zeroes 00:13:31.249 ************************************ 00:13:31.249 18:40:31 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:31.249 [2024-07-25 18:40:31.253568] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
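The repeated bdevperf_construct_job warnings at the start of this big-I/O pass are expected rather than a failure: in verify mode every outstanding I/O has to cover its own I/O-sized region, so the requested queue depth of 128 is clamped to the number of 64 KiB regions each small bdev actually holds. The clamp to 32 for the Malloc2pX partitions implies roughly 32 x 64 KiB = 2 MiB of space per partition, and the clamp to 78 for AIO0 implies about 78 x 64 KiB ~= 4.9 MiB; both sizes are inferred from the warnings, not stated elsewhere in the log, and they are why the result table above lists those jobs with depth 32 and 78 instead of 128.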
00:13:31.249 [2024-07-25 18:40:31.254045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118149 ] 00:13:31.249 [2024-07-25 18:40:31.443024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.249 [2024-07-25 18:40:31.677846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.816 [2024-07-25 18:40:32.162597] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:31.816 [2024-07-25 18:40:32.162690] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:31.816 [2024-07-25 18:40:32.170529] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:31.816 [2024-07-25 18:40:32.170575] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:31.816 [2024-07-25 18:40:32.178554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:31.816 [2024-07-25 18:40:32.178628] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:31.816 [2024-07-25 18:40:32.178673] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:32.074 [2024-07-25 18:40:32.438415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:32.074 [2024-07-25 18:40:32.438522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.074 [2024-07-25 18:40:32.438571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:32.074 [2024-07-25 18:40:32.438606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.074 [2024-07-25 18:40:32.441284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.074 [2024-07-25 18:40:32.441349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:32.332 Running I/O for 1 seconds... 
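Each bdevperf run in this suite is configured from /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json, which the log never prints. As a rough, illustrative sketch of the shape such a file takes (the real file defines the whole Malloc/raid/AIO stack exercised above; the passthru entry is what produces the 'created pt_bdev for: TestPT' notices in the trace):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "method": "bdev_malloc_create",
            "params": { "name": "Malloc3", "num_blocks": 65536, "block_size": 512 } },
          { "method": "bdev_passthru_create",
            "params": { "base_bdev_name": "Malloc3", "name": "TestPT" } }
        ]
      }
    ]
  }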
00:13:33.708 00:13:33.708 Latency(us) 00:13:33.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.708 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:33.708 Malloc0 : 1.02 6375.78 24.91 0.00 0.00 20063.99 667.06 35202.19 00:13:33.708 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:33.708 Malloc1p0 : 1.02 6368.95 24.88 0.00 0.00 20054.91 850.41 34453.21 00:13:33.708 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:33.708 Malloc1p1 : 1.03 6362.71 24.85 0.00 0.00 20036.68 784.09 33704.23 00:13:33.708 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:33.708 Malloc2p0 : 1.03 6356.61 24.83 0.00 0.00 20021.98 827.00 32955.25 00:13:33.708 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:33.708 Malloc2p1 : 1.03 6350.48 24.81 0.00 0.00 20000.25 787.99 32206.26 00:13:33.708 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:33.708 Malloc2p2 : 1.04 6376.21 24.91 0.00 0.00 19881.92 838.70 31332.45 00:13:33.708 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:33.708 Malloc2p3 : 1.04 6369.69 24.88 0.00 0.00 19864.59 784.09 30583.47 00:13:33.708 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:33.708 Malloc2p4 : 1.05 6363.63 24.86 0.00 0.00 19847.56 799.70 29834.48 00:13:33.708 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:33.708 Malloc2p5 : 1.05 6357.61 24.83 0.00 0.00 19836.83 795.79 29085.50 00:13:33.708 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:33.708 Malloc2p6 : 1.05 6351.64 24.81 0.00 0.00 19821.66 830.90 28336.52 00:13:33.708 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:33.708 Malloc2p7 : 1.05 6345.62 24.79 0.00 0.00 19800.66 795.79 27587.54 00:13:33.708 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:33.708 TestPT : 1.05 6339.59 24.76 0.00 0.00 19788.16 827.00 26838.55 00:13:33.708 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:33.708 raid0 : 1.05 6332.76 24.74 0.00 0.00 19765.26 1240.50 25590.25 00:13:33.708 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:33.708 concat0 : 1.05 6325.90 24.71 0.00 0.00 19730.94 1256.11 24217.11 00:13:33.708 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:33.708 raid1 : 1.05 6317.45 24.68 0.00 0.00 19693.62 2075.31 22219.82 00:13:33.708 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:33.708 AIO0 : 1.05 6290.44 24.57 0.00 0.00 19693.49 1419.95 21470.84 00:13:33.708 =================================================================================================================== 00:13:33.708 Total : 101585.05 396.82 0.00 0.00 19867.94 667.06 35202.19 00:13:36.240 00:13:36.240 real 0m5.454s 00:13:36.240 user 0m4.688s 00:13:36.240 sys 0m0.561s 00:13:36.240 18:40:36 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:36.240 18:40:36 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:36.240 ************************************ 00:13:36.240 END TEST bdev_write_zeroes 00:13:36.240 ************************************ 00:13:36.240 18:40:36 
blockdev_general -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.240 18:40:36 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:36.240 18:40:36 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:36.240 18:40:36 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:36.240 ************************************ 00:13:36.240 START TEST bdev_json_nonenclosed 00:13:36.240 ************************************ 00:13:36.240 18:40:36 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:36.240 [2024-07-25 18:40:36.783097] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:36.240 [2024-07-25 18:40:36.784103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118231 ] 00:13:36.541 [2024-07-25 18:40:36.971187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.799 [2024-07-25 18:40:37.206378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.799 [2024-07-25 18:40:37.206526] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:36.799 [2024-07-25 18:40:37.206582] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:36.799 [2024-07-25 18:40:37.206618] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:37.366 00:13:37.366 real 0m1.013s 00:13:37.366 user 0m0.744s 00:13:37.366 sys 0m0.168s 00:13:37.366 18:40:37 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:37.366 18:40:37 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:37.366 ************************************ 00:13:37.366 END TEST bdev_json_nonenclosed 00:13:37.366 ************************************ 00:13:37.366 18:40:37 blockdev_general -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:37.366 18:40:37 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:37.366 18:40:37 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:37.366 18:40:37 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:37.366 ************************************ 00:13:37.366 START TEST bdev_json_nonarray 00:13:37.366 ************************************ 00:13:37.366 18:40:37 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:37.366 [2024-07-25 18:40:37.871565] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:37.367 [2024-07-25 18:40:37.872375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118269 ] 00:13:37.625 [2024-07-25 18:40:38.059514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.882 [2024-07-25 18:40:38.304658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.882 [2024-07-25 18:40:38.304791] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:37.882 [2024-07-25 18:40:38.304846] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:37.882 [2024-07-25 18:40:38.304876] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:38.450 00:13:38.450 real 0m1.019s 00:13:38.450 user 0m0.736s 00:13:38.450 sys 0m0.182s 00:13:38.450 18:40:38 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:38.450 18:40:38 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:38.450 ************************************ 00:13:38.450 END TEST bdev_json_nonarray 00:13:38.450 ************************************ 00:13:38.450 18:40:38 blockdev_general -- bdev/blockdev.sh@786 -- # [[ bdev == bdev ]] 00:13:38.450 18:40:38 blockdev_general -- bdev/blockdev.sh@787 -- # run_test bdev_qos qos_test_suite '' 00:13:38.450 18:40:38 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:38.450 18:40:38 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:38.450 18:40:38 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:38.450 ************************************ 00:13:38.450 START TEST bdev_qos 00:13:38.450 ************************************ 00:13:38.450 18:40:38 blockdev_general.bdev_qos -- common/autotest_common.sh@1125 -- # qos_test_suite '' 00:13:38.450 18:40:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # QOS_PID=118307 00:13:38.450 Process qos testing pid: 118307 00:13:38.450 18:40:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # echo 'Process qos testing pid: 118307' 00:13:38.450 18:40:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:13:38.450 18:40:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # waitforlisten 118307 00:13:38.450 18:40:38 blockdev_general.bdev_qos -- common/autotest_common.sh@831 -- # '[' -z 118307 ']' 00:13:38.450 18:40:38 blockdev_general.bdev_qos -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.450 18:40:38 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:38.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.450 18:40:38 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
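The two bdev_json_* steps above are negative tests: bdevperf is pointed at a deliberately malformed configuration, json_config_prepare_ctx rejects it, the app stops with a non-zero code, and the suite moving on to the next test shows that this failure is the expected outcome. The input files are not reproduced in the log; shapes like the following would trigger the two errors seen above (assumed here purely for illustration):

  # nonenclosed.json - content not wrapped in a top-level { }, hence
  # "Invalid JSON configuration: not enclosed in {}."
  "subsystems": []

  # nonarray.json - enclosed, but "subsystems" is an object rather than
  # an array, hence "Invalid JSON configuration: 'subsystems' should be an array."
  { "subsystems": { "subsystem": "bdev", "config": [] } }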
00:13:38.450 18:40:38 blockdev_general.bdev_qos -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:38.450 18:40:38 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:38.450 18:40:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@444 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:13:38.450 [2024-07-25 18:40:38.964101] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:38.450 [2024-07-25 18:40:38.964344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118307 ] 00:13:38.709 [2024-07-25 18:40:39.154338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.967 [2024-07-25 18:40:39.462679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.535 18:40:39 blockdev_general.bdev_qos -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:39.535 18:40:39 blockdev_general.bdev_qos -- common/autotest_common.sh@864 -- # return 0 00:13:39.535 18:40:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@450 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:13:39.535 18:40:39 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.535 18:40:39 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:39.535 Malloc_0 00:13:39.535 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.535 18:40:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # waitforbdev Malloc_0 00:13:39.535 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_0 00:13:39.535 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:39.535 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # local i 00:13:39.535 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:39.535 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:39.535 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:39.535 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.535 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:39.535 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.535 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:13:39.535 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.535 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:39.535 [ 00:13:39.535 { 00:13:39.535 "name": "Malloc_0", 00:13:39.535 "aliases": [ 00:13:39.535 "33af21ab-fc4e-4f77-848a-24d51d7661fe" 00:13:39.535 ], 00:13:39.535 "product_name": "Malloc disk", 00:13:39.793 "block_size": 512, 00:13:39.793 "num_blocks": 262144, 00:13:39.793 "uuid": "33af21ab-fc4e-4f77-848a-24d51d7661fe", 00:13:39.793 "assigned_rate_limits": { 00:13:39.793 "rw_ios_per_sec": 0, 00:13:39.793 "rw_mbytes_per_sec": 0, 00:13:39.793 "r_mbytes_per_sec": 0, 00:13:39.793 "w_mbytes_per_sec": 0 00:13:39.793 }, 00:13:39.793 "claimed": false, 00:13:39.793 
"zoned": false, 00:13:39.793 "supported_io_types": { 00:13:39.793 "read": true, 00:13:39.793 "write": true, 00:13:39.793 "unmap": true, 00:13:39.793 "flush": true, 00:13:39.793 "reset": true, 00:13:39.793 "nvme_admin": false, 00:13:39.793 "nvme_io": false, 00:13:39.793 "nvme_io_md": false, 00:13:39.793 "write_zeroes": true, 00:13:39.793 "zcopy": true, 00:13:39.793 "get_zone_info": false, 00:13:39.793 "zone_management": false, 00:13:39.793 "zone_append": false, 00:13:39.793 "compare": false, 00:13:39.793 "compare_and_write": false, 00:13:39.793 "abort": true, 00:13:39.793 "seek_hole": false, 00:13:39.793 "seek_data": false, 00:13:39.793 "copy": true, 00:13:39.793 "nvme_iov_md": false 00:13:39.793 }, 00:13:39.793 "memory_domains": [ 00:13:39.793 { 00:13:39.793 "dma_device_id": "system", 00:13:39.793 "dma_device_type": 1 00:13:39.793 }, 00:13:39.793 { 00:13:39.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.793 "dma_device_type": 2 00:13:39.793 } 00:13:39.793 ], 00:13:39.793 "driver_specific": {} 00:13:39.793 } 00:13:39.793 ] 00:13:39.793 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.793 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@907 -- # return 0 00:13:39.793 18:40:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # rpc_cmd bdev_null_create Null_1 128 512 00:13:39.793 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.793 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:39.793 Null_1 00:13:39.793 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.793 18:40:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # waitforbdev Null_1 00:13:39.793 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_name=Null_1 00:13:39.793 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:39.793 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # local i 00:13:39.793 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:39.793 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:39.793 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:39.793 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.793 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:39.793 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.793 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:13:39.793 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.793 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:39.793 [ 00:13:39.793 { 00:13:39.793 "name": "Null_1", 00:13:39.793 "aliases": [ 00:13:39.793 "a7d3ef60-cb6c-40e0-9da6-56911385ef49" 00:13:39.793 ], 00:13:39.793 "product_name": "Null disk", 00:13:39.793 "block_size": 512, 00:13:39.793 "num_blocks": 262144, 00:13:39.793 "uuid": "a7d3ef60-cb6c-40e0-9da6-56911385ef49", 00:13:39.794 "assigned_rate_limits": { 00:13:39.794 "rw_ios_per_sec": 0, 00:13:39.794 "rw_mbytes_per_sec": 0, 00:13:39.794 "r_mbytes_per_sec": 0, 00:13:39.794 "w_mbytes_per_sec": 0 00:13:39.794 }, 00:13:39.794 "claimed": 
false, 00:13:39.794 "zoned": false, 00:13:39.794 "supported_io_types": { 00:13:39.794 "read": true, 00:13:39.794 "write": true, 00:13:39.794 "unmap": false, 00:13:39.794 "flush": false, 00:13:39.794 "reset": true, 00:13:39.794 "nvme_admin": false, 00:13:39.794 "nvme_io": false, 00:13:39.794 "nvme_io_md": false, 00:13:39.794 "write_zeroes": true, 00:13:39.794 "zcopy": false, 00:13:39.794 "get_zone_info": false, 00:13:39.794 "zone_management": false, 00:13:39.794 "zone_append": false, 00:13:39.794 "compare": false, 00:13:39.794 "compare_and_write": false, 00:13:39.794 "abort": true, 00:13:39.794 "seek_hole": false, 00:13:39.794 "seek_data": false, 00:13:39.794 "copy": false, 00:13:39.794 "nvme_iov_md": false 00:13:39.794 }, 00:13:39.794 "driver_specific": {} 00:13:39.794 } 00:13:39.794 ] 00:13:39.794 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.794 18:40:40 blockdev_general.bdev_qos -- common/autotest_common.sh@907 -- # return 0 00:13:39.794 18:40:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # qos_function_test 00:13:39.794 18:40:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@409 -- # local qos_lower_iops_limit=1000 00:13:39.794 18:40:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@455 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:39.794 18:40:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_bw_limit=2 00:13:39.794 18:40:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local io_result=0 00:13:39.794 18:40:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local iops_limit=0 00:13:39.794 18:40:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local bw_limit=0 00:13:39.794 18:40:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # get_io_result IOPS Malloc_0 00:13:39.794 18:40:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:13:39.794 18:40:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:13:39.794 18:40:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:13:39.794 18:40:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:39.794 18:40:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:13:39.794 18:40:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:13:39.794 Running I/O for 60 seconds... 
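The 60-second run that starts here is the unthrottled baseline: bdevperf was started with -z, so it sits idle until qos_function_test triggers the workload via bdevperf.py perform_tests, and get_io_result then reads Malloc_0's I/O rate from a 5-second iostat.py window. Later in the trace the measured 85959 IOPS becomes a 21000 IOPS cap (evidently about a quarter of the baseline, rounded down to the nearest thousand), which is applied with bdev_set_qos_limit and re-measured; the re-measured rate passes if it lands within +/- 10% of the cap. A sketch of the same sequence run by hand against the live bdevperf, using only commands that appear in this trace (rpc_cmd in the trace is the suite's wrapper around scripts/rpc.py):

  ./scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1     # baseline IOPS
  ./scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec 21000 Malloc_0
  ./scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1     # throttled, expect ~21000 +/- 10%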
00:13:45.091 18:40:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 85959.62 343838.47 0.00 0.00 348160.00 0.00 0.00 ' 00:13:45.091 18:40:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:13:45.091 18:40:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:13:45.091 18:40:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # iostat_result=85959.62 00:13:45.091 18:40:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 85959 00:13:45.091 18:40:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # io_result=85959 00:13:45.091 18:40:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@417 -- # iops_limit=21000 00:13:45.091 18:40:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # '[' 21000 -gt 1000 ']' 00:13:45.091 18:40:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@421 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 21000 Malloc_0 00:13:45.091 18:40:45 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.091 18:40:45 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:45.091 18:40:45 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.091 18:40:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # run_test bdev_qos_iops run_qos_test 21000 IOPS Malloc_0 00:13:45.091 18:40:45 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:45.091 18:40:45 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:45.091 18:40:45 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:45.091 ************************************ 00:13:45.091 START TEST bdev_qos_iops 00:13:45.091 ************************************ 00:13:45.091 18:40:45 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1125 -- # run_qos_test 21000 IOPS Malloc_0 00:13:45.091 18:40:45 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@388 -- # local qos_limit=21000 00:13:45.091 18:40:45 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_result=0 00:13:45.091 18:40:45 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # get_io_result IOPS Malloc_0 00:13:45.091 18:40:45 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:13:45.091 18:40:45 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:13:45.091 18:40:45 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local iostat_result 00:13:45.091 18:40:45 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:45.091 18:40:45 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:13:45.091 18:40:45 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # tail -1 00:13:50.357 18:40:50 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 21002.62 84010.49 0.00 0.00 85008.00 0.00 0.00 ' 00:13:50.357 18:40:50 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:13:50.357 18:40:50 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:13:50.357 18:40:50 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # iostat_result=21002.62 00:13:50.357 18:40:50 blockdev_general.bdev_qos.bdev_qos_iops -- 
bdev/blockdev.sh@384 -- # echo 21002 00:13:50.357 18:40:50 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # qos_result=21002 00:13:50.357 18:40:50 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # '[' IOPS = BANDWIDTH ']' 00:13:50.357 18:40:50 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@395 -- # lower_limit=18900 00:13:50.358 18:40:50 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # upper_limit=23100 00:13:50.358 18:40:50 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 21002 -lt 18900 ']' 00:13:50.358 18:40:50 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 21002 -gt 23100 ']' 00:13:50.358 00:13:50.358 real 0m5.225s 00:13:50.358 user 0m0.124s 00:13:50.358 sys 0m0.041s 00:13:50.358 18:40:50 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:50.358 18:40:50 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:13:50.358 ************************************ 00:13:50.358 END TEST bdev_qos_iops 00:13:50.358 ************************************ 00:13:50.358 18:40:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # get_io_result BANDWIDTH Null_1 00:13:50.358 18:40:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:13:50.358 18:40:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:13:50.358 18:40:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:13:50.358 18:40:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:50.358 18:40:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Null_1 00:13:50.358 18:40:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:13:55.635 18:40:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 33666.10 134664.40 0.00 0.00 136192.00 0.00 0.00 ' 00:13:55.635 18:40:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:13:55.635 18:40:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:55.635 18:40:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:13:55.635 18:40:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # iostat_result=136192.00 00:13:55.635 18:40:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 136192 00:13:55.635 18:40:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # bw_limit=136192 00:13:55.635 18:40:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=13 00:13:55.635 18:40:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # '[' 13 -lt 2 ']' 00:13:55.635 18:40:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@431 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 13 Null_1 00:13:55.635 18:40:55 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.635 18:40:55 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:55.635 18:40:55 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.635 18:40:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # run_test bdev_qos_bw run_qos_test 13 BANDWIDTH Null_1 00:13:55.635 18:40:55 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:55.635 18:40:55 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:13:55.635 18:40:55 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:55.635 ************************************ 00:13:55.635 START TEST bdev_qos_bw 00:13:55.635 ************************************ 00:13:55.635 18:40:55 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1125 -- # run_qos_test 13 BANDWIDTH Null_1 00:13:55.635 18:40:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@388 -- # local qos_limit=13 00:13:55.635 18:40:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:13:55.635 18:40:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Null_1 00:13:55.635 18:40:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:13:55.635 18:40:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:13:55.635 18:40:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:13:55.635 18:40:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:55.635 18:40:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # grep Null_1 00:13:55.635 18:40:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # tail -1 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 3326.60 13306.42 0.00 0.00 13620.00 0.00 0.00 ' 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # iostat_result=13620.00 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@384 -- # echo 13620 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # qos_result=13620 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # qos_limit=13312 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@395 -- # lower_limit=11980 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # upper_limit=14643 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 13620 -lt 11980 ']' 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 13620 -gt 14643 ']' 00:14:00.904 00:14:00.904 real 0m5.242s 00:14:00.904 user 0m0.116s 00:14:00.904 sys 0m0.042s 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:14:00.904 ************************************ 00:14:00.904 END TEST bdev_qos_bw 00:14:00.904 ************************************ 00:14:00.904 18:41:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@435 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:14:00.904 18:41:01 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:00.904 18:41:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:00.904 18:41:01 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.904 18:41:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:14:00.904 18:41:01 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:00.904 18:41:01 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:00.904 18:41:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:00.904 ************************************ 00:14:00.904 START TEST bdev_qos_ro_bw 00:14:00.904 ************************************ 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1125 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@388 -- # local qos_limit=2 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Malloc_0 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:00.904 18:41:01 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # tail -1 00:14:06.184 18:41:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 511.04 2044.17 0.00 0.00 2064.00 0.00 0.00 ' 00:14:06.184 18:41:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:14:06.184 18:41:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:06.184 18:41:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:14:06.184 18:41:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # iostat_result=2064.00 00:14:06.184 18:41:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@384 -- # echo 2064 00:14:06.184 18:41:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # qos_result=2064 00:14:06.184 18:41:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:06.184 18:41:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # qos_limit=2048 00:14:06.184 18:41:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@395 -- # lower_limit=1843 00:14:06.184 18:41:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # upper_limit=2252 00:14:06.184 18:41:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2064 -lt 1843 ']' 00:14:06.184 18:41:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2064 -gt 2252 ']' 00:14:06.184 00:14:06.184 real 0m5.189s 00:14:06.184 user 0m0.126s 00:14:06.184 sys 0m0.033s 
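The two bandwidth checks just above use the same +/- 10% acceptance window as the IOPS check, with the limit converted from the MB value passed to bdev_set_qos_limit into the KiB/s figure that iostat.py reports (hence the x1024). Reproducing the window arithmetic from the trace in shell, with every value matching the numbers printed above:

  echo $(( 13 * 1024 ))        # 13312  rw cap on Null_1 in KiB/s
  echo $(( 13312 * 9 / 10 ))   # 11980  lower bound
  echo $(( 13312 * 11 / 10 ))  # 14643  upper bound; measured 13620 -> pass
  echo $(( 2 * 1024 ))         # 2048   read-only cap on Malloc_0 in KiB/s
  echo $(( 2048 * 9 / 10 ))    # 1843   lower bound
  echo $(( 2048 * 11 / 10 ))   # 2252   upper bound; measured 2064 -> pass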
00:14:06.184 18:41:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:06.184 18:41:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:14:06.184 ************************************ 00:14:06.184 END TEST bdev_qos_ro_bw 00:14:06.184 ************************************ 00:14:06.184 18:41:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:14:06.184 18:41:06 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.184 18:41:06 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:06.762 18:41:07 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.762 18:41:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_null_delete Null_1 00:14:06.762 18:41:07 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.762 18:41:07 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:06.762 00:14:06.762 Latency(us) 00:14:06.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.762 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:06.762 Malloc_0 : 26.77 28937.35 113.04 0.00 0.00 8762.76 1833.45 505313.77 00:14:06.762 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:06.762 Null_1 : 27.00 30570.21 119.41 0.00 0.00 8359.69 616.35 220700.28 00:14:06.762 =================================================================================================================== 00:14:06.762 Total : 59507.56 232.45 0.00 0.00 8554.81 616.35 505313.77 00:14:06.762 0 00:14:06.762 18:41:07 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.762 18:41:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # killprocess 118307 00:14:06.762 18:41:07 blockdev_general.bdev_qos -- common/autotest_common.sh@950 -- # '[' -z 118307 ']' 00:14:06.762 18:41:07 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # kill -0 118307 00:14:06.762 18:41:07 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # uname 00:14:06.762 18:41:07 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:06.762 18:41:07 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 118307 00:14:06.762 killing process with pid 118307 00:14:06.762 Received shutdown signal, test time was about 27.041897 seconds 00:14:06.762 00:14:06.762 Latency(us) 00:14:06.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.762 =================================================================================================================== 00:14:06.762 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:06.762 18:41:07 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:06.762 18:41:07 blockdev_general.bdev_qos -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:06.762 18:41:07 blockdev_general.bdev_qos -- common/autotest_common.sh@968 -- # echo 'killing process with pid 118307' 00:14:06.762 18:41:07 blockdev_general.bdev_qos -- common/autotest_common.sh@969 -- # kill 118307 00:14:06.762 18:41:07 blockdev_general.bdev_qos -- common/autotest_common.sh@974 -- # wait 118307 00:14:08.664 ************************************ 00:14:08.664 END TEST bdev_qos 00:14:08.664 
************************************ 00:14:08.664 18:41:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # trap - SIGINT SIGTERM EXIT 00:14:08.664 00:14:08.664 real 0m30.115s 00:14:08.664 user 0m30.714s 00:14:08.664 sys 0m0.896s 00:14:08.664 18:41:08 blockdev_general.bdev_qos -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:08.664 18:41:08 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:08.664 18:41:09 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:14:08.664 18:41:09 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:08.664 18:41:09 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:08.664 18:41:09 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:08.664 ************************************ 00:14:08.664 START TEST bdev_qd_sampling 00:14:08.664 ************************************ 00:14:08.664 18:41:09 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1125 -- # qd_sampling_test_suite '' 00:14:08.665 18:41:09 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@537 -- # QD_DEV=Malloc_QD 00:14:08.665 18:41:09 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # QD_PID=118789 00:14:08.665 18:41:09 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # echo 'Process bdev QD sampling period testing pid: 118789' 00:14:08.665 18:41:09 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@539 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:14:08.665 Process bdev QD sampling period testing pid: 118789 00:14:08.665 18:41:09 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:14:08.665 18:41:09 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # waitforlisten 118789 00:14:08.665 18:41:09 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@831 -- # '[' -z 118789 ']' 00:14:08.665 18:41:09 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.665 18:41:09 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:08.665 18:41:09 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.665 18:41:09 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:08.665 18:41:09 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:08.665 [2024-07-25 18:41:09.158169] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
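The bdev_qd_sampling suite starting here creates a Malloc_QD bdev, enables queue-depth sampling with a 10 ms period, drives I/O through bdevperf, and then confirms via bdev_get_iostat that the configured polling period (and non-zero queue-depth counters) are reported. A condensed sketch of that RPC sequence is below; the rpc.py client path and default RPC socket are assumptions, the RPC names are the ones seen in this trace.

# Sketch of the RPC sequence the bdev_qd_sampling suite drives (client path assumed).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC bdev_malloc_create -b Malloc_QD 128 512          # 128 MiB bdev, 512 B blocks
$RPC bdev_set_qd_sampling_period Malloc_QD 10         # sample queue depth every 10 ms
# ... I/O runs against Malloc_QD (bdevperf perform_tests in this suite) ...
$RPC bdev_get_iostat -b Malloc_QD \
    | jq -r '.bdevs[0].queue_depth_polling_period'    # expected to echo back 10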
00:14:08.665 [2024-07-25 18:41:09.158401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118789 ] 00:14:08.924 [2024-07-25 18:41:09.353736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:09.183 [2024-07-25 18:41:09.649907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.183 [2024-07-25 18:41:09.649908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.751 18:41:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:09.751 18:41:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@864 -- # return 0 00:14:09.751 18:41:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@545 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:14:09.751 18:41:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.751 18:41:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:09.751 Malloc_QD 00:14:09.751 18:41:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.751 18:41:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # waitforbdev Malloc_QD 00:14:09.751 18:41:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_QD 00:14:09.751 18:41:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:09.751 18:41:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@901 -- # local i 00:14:09.751 18:41:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:09.751 18:41:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:09.751 18:41:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:09.751 18:41:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.751 18:41:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:09.751 18:41:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.751 18:41:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:14:09.751 18:41:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.751 18:41:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:09.751 [ 00:14:09.751 { 00:14:09.751 "name": "Malloc_QD", 00:14:09.751 "aliases": [ 00:14:09.751 "2b6fb1c3-e6cf-4d53-abbe-18b38646891c" 00:14:09.751 ], 00:14:09.751 "product_name": "Malloc disk", 00:14:09.751 "block_size": 512, 00:14:09.751 "num_blocks": 262144, 00:14:09.751 "uuid": "2b6fb1c3-e6cf-4d53-abbe-18b38646891c", 00:14:09.751 "assigned_rate_limits": { 00:14:09.751 "rw_ios_per_sec": 0, 00:14:09.751 "rw_mbytes_per_sec": 0, 00:14:09.751 "r_mbytes_per_sec": 0, 00:14:09.751 "w_mbytes_per_sec": 0 00:14:09.751 }, 00:14:09.751 "claimed": false, 00:14:09.751 "zoned": false, 00:14:09.751 "supported_io_types": { 00:14:09.751 "read": true, 00:14:09.751 "write": true, 00:14:09.751 "unmap": true, 00:14:09.751 "flush": true, 00:14:09.751 "reset": true, 00:14:09.751 "nvme_admin": 
false, 00:14:09.751 "nvme_io": false, 00:14:09.751 "nvme_io_md": false, 00:14:09.751 "write_zeroes": true, 00:14:09.751 "zcopy": true, 00:14:09.751 "get_zone_info": false, 00:14:09.751 "zone_management": false, 00:14:09.751 "zone_append": false, 00:14:09.751 "compare": false, 00:14:09.751 "compare_and_write": false, 00:14:09.751 "abort": true, 00:14:09.751 "seek_hole": false, 00:14:09.751 "seek_data": false, 00:14:09.751 "copy": true, 00:14:09.751 "nvme_iov_md": false 00:14:09.751 }, 00:14:09.751 "memory_domains": [ 00:14:09.751 { 00:14:09.751 "dma_device_id": "system", 00:14:09.751 "dma_device_type": 1 00:14:09.751 }, 00:14:09.751 { 00:14:09.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.751 "dma_device_type": 2 00:14:09.751 } 00:14:09.751 ], 00:14:09.751 "driver_specific": {} 00:14:09.751 } 00:14:09.751 ] 00:14:09.751 18:41:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.751 18:41:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@907 -- # return 0 00:14:09.751 18:41:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # sleep 2 00:14:09.751 18:41:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@548 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:10.010 Running I/O for 5 seconds... 00:14:11.914 18:41:12 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # qd_sampling_function_test Malloc_QD 00:14:11.914 18:41:12 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@518 -- # local bdev_name=Malloc_QD 00:14:11.914 18:41:12 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local sampling_period=10 00:14:11.914 18:41:12 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local iostats 00:14:11.914 18:41:12 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@522 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:14:11.914 18:41:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.914 18:41:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:11.914 18:41:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.914 18:41:12 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:14:11.914 18:41:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.914 18:41:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:11.914 18:41:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.914 18:41:12 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # iostats='{ 00:14:11.914 "tick_rate": 2100000000, 00:14:11.914 "ticks": 1638256758322, 00:14:11.914 "bdevs": [ 00:14:11.914 { 00:14:11.914 "name": "Malloc_QD", 00:14:11.914 "bytes_read": 909152768, 00:14:11.914 "num_read_ops": 221955, 00:14:11.914 "bytes_written": 0, 00:14:11.914 "num_write_ops": 0, 00:14:11.914 "bytes_unmapped": 0, 00:14:11.914 "num_unmap_ops": 0, 00:14:11.914 "bytes_copied": 0, 00:14:11.914 "num_copy_ops": 0, 00:14:11.914 "read_latency_ticks": 2048788179674, 00:14:11.914 "max_read_latency_ticks": 13863214, 00:14:11.914 "min_read_latency_ticks": 320422, 00:14:11.914 "write_latency_ticks": 0, 00:14:11.914 "max_write_latency_ticks": 0, 00:14:11.914 "min_write_latency_ticks": 0, 00:14:11.914 "unmap_latency_ticks": 0, 00:14:11.914 "max_unmap_latency_ticks": 0, 00:14:11.914 
"min_unmap_latency_ticks": 0, 00:14:11.914 "copy_latency_ticks": 0, 00:14:11.914 "max_copy_latency_ticks": 0, 00:14:11.914 "min_copy_latency_ticks": 0, 00:14:11.914 "io_error": {}, 00:14:11.914 "queue_depth_polling_period": 10, 00:14:11.914 "queue_depth": 512, 00:14:11.914 "io_time": 30, 00:14:11.914 "weighted_io_time": 15360 00:14:11.914 } 00:14:11.914 ] 00:14:11.914 }' 00:14:11.914 18:41:12 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:14:11.914 18:41:12 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # qd_sampling_period=10 00:14:11.914 18:41:12 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 == null ']' 00:14:11.914 18:41:12 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 -ne 10 ']' 00:14:11.914 18:41:12 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@552 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:14:11.914 18:41:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.914 18:41:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:11.914 00:14:11.914 Latency(us) 00:14:11.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.914 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:11.914 Malloc_QD : 1.98 57868.99 226.05 0.00 0.00 4413.15 1053.26 5554.96 00:14:11.914 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:11.914 Malloc_QD : 1.98 58357.52 227.96 0.00 0.00 4376.49 663.16 6616.02 00:14:11.914 =================================================================================================================== 00:14:11.914 Total : 116226.51 454.01 0.00 0.00 4394.74 663.16 6616.02 00:14:12.173 0 00:14:12.173 18:41:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.173 18:41:12 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # killprocess 118789 00:14:12.173 18:41:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@950 -- # '[' -z 118789 ']' 00:14:12.173 18:41:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # kill -0 118789 00:14:12.173 18:41:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # uname 00:14:12.173 18:41:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:12.173 18:41:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 118789 00:14:12.173 18:41:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:12.173 18:41:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:12.173 18:41:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@968 -- # echo 'killing process with pid 118789' 00:14:12.173 killing process with pid 118789 00:14:12.173 18:41:12 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@969 -- # kill 118789 00:14:12.173 Received shutdown signal, test time was about 2.164394 seconds 00:14:12.173 00:14:12.173 Latency(us) 00:14:12.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.173 =================================================================================================================== 00:14:12.173 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:12.173 18:41:12 
blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@974 -- # wait 118789 00:14:14.075 18:41:14 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # trap - SIGINT SIGTERM EXIT 00:14:14.075 00:14:14.075 real 0m5.246s 00:14:14.075 user 0m9.423s 00:14:14.075 sys 0m0.567s 00:14:14.075 18:41:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:14.075 18:41:14 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:14.075 ************************************ 00:14:14.075 END TEST bdev_qd_sampling 00:14:14.075 ************************************ 00:14:14.075 18:41:14 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_error error_test_suite '' 00:14:14.075 18:41:14 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:14.075 18:41:14 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:14.075 18:41:14 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:14.075 ************************************ 00:14:14.075 START TEST bdev_error 00:14:14.075 ************************************ 00:14:14.075 18:41:14 blockdev_general.bdev_error -- common/autotest_common.sh@1125 -- # error_test_suite '' 00:14:14.075 18:41:14 blockdev_general.bdev_error -- bdev/blockdev.sh@465 -- # DEV_1=Dev_1 00:14:14.075 18:41:14 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_2=Dev_2 00:14:14.075 18:41:14 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # ERR_DEV=EE_Dev_1 00:14:14.075 18:41:14 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # ERR_PID=118897 00:14:14.075 18:41:14 blockdev_general.bdev_error -- bdev/blockdev.sh@470 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:14:14.075 18:41:14 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # echo 'Process error testing pid: 118897' 00:14:14.075 Process error testing pid: 118897 00:14:14.075 18:41:14 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # waitforlisten 118897 00:14:14.075 18:41:14 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # '[' -z 118897 ']' 00:14:14.075 18:41:14 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.075 18:41:14 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:14.075 18:41:14 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.075 18:41:14 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:14.075 18:41:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:14.075 [2024-07-25 18:41:14.473360] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
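The bdev_error suite launched here layers an error-injection bdev on top of a malloc bdev: bdev_error_create Dev_1 exposes EE_Dev_1, and bdev_error_inject_error EE_Dev_1 all failure -n 5 makes the next five I/Os on it fail. Because this bdevperf instance was started with -f (the "continue on error" mode noted later in the trace), the run keeps going on Dev_2 after the injected failures. A sketch of the setup, with an assumed rpc.py client path:

# Sketch of the error-injection setup used by this suite (RPC names as in the trace; client path assumed).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC bdev_malloc_create -b Dev_1 128 512
$RPC bdev_error_create Dev_1                            # exposes an error bdev named EE_Dev_1 on top of Dev_1
$RPC bdev_malloc_create -b Dev_2 128 512
$RPC bdev_error_inject_error EE_Dev_1 all failure -n 5  # fail the next 5 I/Os of any type on EE_Dev_1
# perform_tests is then issued to bdevperf; with continue-on-error set, EE_Dev_1
# records a handful of failures while Dev_2 keeps completing I/O for the full run.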
00:14:14.075 [2024-07-25 18:41:14.473877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118897 ] 00:14:14.334 [2024-07-25 18:41:14.660493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.334 [2024-07-25 18:41:14.899696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.900 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:14.900 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # return 0 00:14:14.900 18:41:15 blockdev_general.bdev_error -- bdev/blockdev.sh@475 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:14.900 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.900 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:15.159 Dev_1 00:14:15.159 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.159 18:41:15 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # waitforbdev Dev_1 00:14:15.159 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_1 00:14:15.159 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:15.159 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:14:15.159 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:15.159 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:15.159 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:15.159 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.159 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:15.159 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.159 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:15.159 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.159 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:15.159 [ 00:14:15.159 { 00:14:15.159 "name": "Dev_1", 00:14:15.159 "aliases": [ 00:14:15.159 "7e5991f9-fdcb-40eb-a2bc-d497bbec3ba8" 00:14:15.159 ], 00:14:15.159 "product_name": "Malloc disk", 00:14:15.159 "block_size": 512, 00:14:15.159 "num_blocks": 262144, 00:14:15.159 "uuid": "7e5991f9-fdcb-40eb-a2bc-d497bbec3ba8", 00:14:15.159 "assigned_rate_limits": { 00:14:15.159 "rw_ios_per_sec": 0, 00:14:15.159 "rw_mbytes_per_sec": 0, 00:14:15.159 "r_mbytes_per_sec": 0, 00:14:15.159 "w_mbytes_per_sec": 0 00:14:15.159 }, 00:14:15.159 "claimed": false, 00:14:15.159 "zoned": false, 00:14:15.159 "supported_io_types": { 00:14:15.159 "read": true, 00:14:15.159 "write": true, 00:14:15.159 "unmap": true, 00:14:15.159 "flush": true, 00:14:15.160 "reset": true, 00:14:15.160 "nvme_admin": false, 00:14:15.160 "nvme_io": false, 00:14:15.160 "nvme_io_md": false, 00:14:15.160 "write_zeroes": true, 00:14:15.160 "zcopy": true, 00:14:15.160 "get_zone_info": false, 00:14:15.160 "zone_management": false, 00:14:15.160 "zone_append": false, 
00:14:15.160 "compare": false, 00:14:15.160 "compare_and_write": false, 00:14:15.160 "abort": true, 00:14:15.160 "seek_hole": false, 00:14:15.160 "seek_data": false, 00:14:15.160 "copy": true, 00:14:15.160 "nvme_iov_md": false 00:14:15.160 }, 00:14:15.160 "memory_domains": [ 00:14:15.160 { 00:14:15.160 "dma_device_id": "system", 00:14:15.160 "dma_device_type": 1 00:14:15.160 }, 00:14:15.160 { 00:14:15.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.160 "dma_device_type": 2 00:14:15.160 } 00:14:15.160 ], 00:14:15.160 "driver_specific": {} 00:14:15.160 } 00:14:15.160 ] 00:14:15.160 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.160 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:14:15.160 18:41:15 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_error_create Dev_1 00:14:15.160 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.160 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:15.160 true 00:14:15.160 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.160 18:41:15 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:15.160 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.160 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:15.418 Dev_2 00:14:15.418 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.418 18:41:15 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # waitforbdev Dev_2 00:14:15.419 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_2 00:14:15.419 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:15.419 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:14:15.419 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:15.419 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:15.419 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:15.419 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.419 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:15.419 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.419 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:15.419 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.419 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:15.419 [ 00:14:15.419 { 00:14:15.419 "name": "Dev_2", 00:14:15.419 "aliases": [ 00:14:15.419 "43a82bea-e5fa-4e20-8283-75098fe753fa" 00:14:15.419 ], 00:14:15.419 "product_name": "Malloc disk", 00:14:15.419 "block_size": 512, 00:14:15.419 "num_blocks": 262144, 00:14:15.419 "uuid": "43a82bea-e5fa-4e20-8283-75098fe753fa", 00:14:15.419 "assigned_rate_limits": { 00:14:15.419 "rw_ios_per_sec": 0, 00:14:15.419 "rw_mbytes_per_sec": 0, 00:14:15.419 "r_mbytes_per_sec": 0, 00:14:15.419 "w_mbytes_per_sec": 0 00:14:15.419 }, 00:14:15.419 "claimed": 
false, 00:14:15.419 "zoned": false, 00:14:15.419 "supported_io_types": { 00:14:15.419 "read": true, 00:14:15.419 "write": true, 00:14:15.419 "unmap": true, 00:14:15.419 "flush": true, 00:14:15.419 "reset": true, 00:14:15.419 "nvme_admin": false, 00:14:15.419 "nvme_io": false, 00:14:15.419 "nvme_io_md": false, 00:14:15.419 "write_zeroes": true, 00:14:15.419 "zcopy": true, 00:14:15.419 "get_zone_info": false, 00:14:15.419 "zone_management": false, 00:14:15.419 "zone_append": false, 00:14:15.419 "compare": false, 00:14:15.419 "compare_and_write": false, 00:14:15.419 "abort": true, 00:14:15.419 "seek_hole": false, 00:14:15.419 "seek_data": false, 00:14:15.419 "copy": true, 00:14:15.419 "nvme_iov_md": false 00:14:15.419 }, 00:14:15.419 "memory_domains": [ 00:14:15.419 { 00:14:15.419 "dma_device_id": "system", 00:14:15.419 "dma_device_type": 1 00:14:15.419 }, 00:14:15.419 { 00:14:15.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.419 "dma_device_type": 2 00:14:15.419 } 00:14:15.419 ], 00:14:15.419 "driver_specific": {} 00:14:15.419 } 00:14:15.419 ] 00:14:15.419 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.419 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:14:15.419 18:41:15 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:15.419 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.419 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:15.419 18:41:15 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.419 18:41:15 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # sleep 1 00:14:15.419 18:41:15 blockdev_general.bdev_error -- bdev/blockdev.sh@482 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:15.419 Running I/O for 5 seconds... 00:14:16.354 Process is existed as continue on error is set. Pid: 118897 00:14:16.354 18:41:16 blockdev_general.bdev_error -- bdev/blockdev.sh@486 -- # kill -0 118897 00:14:16.354 18:41:16 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # echo 'Process is existed as continue on error is set. 
Pid: 118897' 00:14:16.354 18:41:16 blockdev_general.bdev_error -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:14:16.354 18:41:16 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.354 18:41:16 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:16.354 18:41:16 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.354 18:41:16 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_malloc_delete Dev_1 00:14:16.354 18:41:16 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.354 18:41:16 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:16.354 Timeout while waiting for response: 00:14:16.354 00:14:16.354 00:14:16.921 18:41:17 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.921 18:41:17 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # sleep 5 00:14:21.109 00:14:21.109 Latency(us) 00:14:21.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.109 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:21.109 EE_Dev_1 : 0.94 49802.58 194.54 5.34 0.00 318.85 121.42 569.54 00:14:21.109 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:21.109 Dev_2 : 5.00 98757.53 385.77 0.00 0.00 159.65 50.47 399457.52 00:14:21.109 =================================================================================================================== 00:14:21.109 Total : 148560.11 580.31 5.34 0.00 173.38 50.47 399457.52 00:14:21.677 18:41:22 blockdev_general.bdev_error -- bdev/blockdev.sh@498 -- # killprocess 118897 00:14:21.677 18:41:22 blockdev_general.bdev_error -- common/autotest_common.sh@950 -- # '[' -z 118897 ']' 00:14:21.677 18:41:22 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # kill -0 118897 00:14:21.677 18:41:22 blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # uname 00:14:21.677 18:41:22 blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:21.677 18:41:22 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 118897 00:14:21.677 killing process with pid 118897 00:14:21.677 Received shutdown signal, test time was about 5.000000 seconds 00:14:21.677 00:14:21.677 Latency(us) 00:14:21.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.677 =================================================================================================================== 00:14:21.677 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:21.677 18:41:22 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:21.677 18:41:22 blockdev_general.bdev_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:21.677 18:41:22 blockdev_general.bdev_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 118897' 00:14:21.677 18:41:22 blockdev_general.bdev_error -- common/autotest_common.sh@969 -- # kill 118897 00:14:21.677 18:41:22 blockdev_general.bdev_error -- common/autotest_common.sh@974 -- # wait 118897 00:14:23.580 Process error testing pid: 119019 00:14:23.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
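The second bdevperf instance starting here (pid 119019) repeats the same Dev_1/EE_Dev_1/Dev_2 setup but omits the -f continue-on-error flag, so once the five injected failures hit, perform_tests is expected to fail; the suite wraps the call in its NOT helper and asserts the JSON-RPC error (-32603, "bdevperf failed with error Operation not permitted") instead of success. A plain-shell sketch of that negative check, using the bdevperf.py path shown in this trace:

# Sketch of the negative check against pid 119019 (illustrative; mirrors the NOT helper's intent).
BDEVPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

if "$BDEVPERF_PY" -t 1 perform_tests; then
        echo "perform_tests unexpectedly succeeded despite injected errors" >&2
        exit 1
fi
echo "perform_tests failed as expected (injected EE_Dev_1 errors, no continue-on-error)"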
00:14:23.580 18:41:24 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # ERR_PID=119019 00:14:23.580 18:41:24 blockdev_general.bdev_error -- bdev/blockdev.sh@501 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:14:23.580 18:41:24 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # echo 'Process error testing pid: 119019' 00:14:23.580 18:41:24 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # waitforlisten 119019 00:14:23.580 18:41:24 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # '[' -z 119019 ']' 00:14:23.580 18:41:24 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.580 18:41:24 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:23.580 18:41:24 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.580 18:41:24 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:23.580 18:41:24 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:23.839 [2024-07-25 18:41:24.159618] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:23.839 [2024-07-25 18:41:24.160059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119019 ] 00:14:23.839 [2024-07-25 18:41:24.320307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.097 [2024-07-25 18:41:24.568856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.663 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:24.663 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # return 0 00:14:24.663 18:41:25 blockdev_general.bdev_error -- bdev/blockdev.sh@506 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:24.663 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.663 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:24.921 Dev_1 00:14:24.921 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.921 18:41:25 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # waitforbdev Dev_1 00:14:24.921 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_1 00:14:24.921 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.922 18:41:25 
blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:24.922 [ 00:14:24.922 { 00:14:24.922 "name": "Dev_1", 00:14:24.922 "aliases": [ 00:14:24.922 "3eec9350-5c0e-4338-94ce-69ae00ad24ef" 00:14:24.922 ], 00:14:24.922 "product_name": "Malloc disk", 00:14:24.922 "block_size": 512, 00:14:24.922 "num_blocks": 262144, 00:14:24.922 "uuid": "3eec9350-5c0e-4338-94ce-69ae00ad24ef", 00:14:24.922 "assigned_rate_limits": { 00:14:24.922 "rw_ios_per_sec": 0, 00:14:24.922 "rw_mbytes_per_sec": 0, 00:14:24.922 "r_mbytes_per_sec": 0, 00:14:24.922 "w_mbytes_per_sec": 0 00:14:24.922 }, 00:14:24.922 "claimed": false, 00:14:24.922 "zoned": false, 00:14:24.922 "supported_io_types": { 00:14:24.922 "read": true, 00:14:24.922 "write": true, 00:14:24.922 "unmap": true, 00:14:24.922 "flush": true, 00:14:24.922 "reset": true, 00:14:24.922 "nvme_admin": false, 00:14:24.922 "nvme_io": false, 00:14:24.922 "nvme_io_md": false, 00:14:24.922 "write_zeroes": true, 00:14:24.922 "zcopy": true, 00:14:24.922 "get_zone_info": false, 00:14:24.922 "zone_management": false, 00:14:24.922 "zone_append": false, 00:14:24.922 "compare": false, 00:14:24.922 "compare_and_write": false, 00:14:24.922 "abort": true, 00:14:24.922 "seek_hole": false, 00:14:24.922 "seek_data": false, 00:14:24.922 "copy": true, 00:14:24.922 "nvme_iov_md": false 00:14:24.922 }, 00:14:24.922 "memory_domains": [ 00:14:24.922 { 00:14:24.922 "dma_device_id": "system", 00:14:24.922 "dma_device_type": 1 00:14:24.922 }, 00:14:24.922 { 00:14:24.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.922 "dma_device_type": 2 00:14:24.922 } 00:14:24.922 ], 00:14:24.922 "driver_specific": {} 00:14:24.922 } 00:14:24.922 ] 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:14:24.922 18:41:25 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_error_create Dev_1 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:24.922 true 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.922 18:41:25 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:24.922 Dev_2 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.922 18:41:25 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # waitforbdev Dev_2 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_2 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:24.922 18:41:25 blockdev_general.bdev_error -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.922 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:25.180 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.180 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:25.180 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.180 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:25.180 [ 00:14:25.180 { 00:14:25.180 "name": "Dev_2", 00:14:25.180 "aliases": [ 00:14:25.180 "db5de4b8-1cb0-49f7-8690-5924369f4c12" 00:14:25.180 ], 00:14:25.180 "product_name": "Malloc disk", 00:14:25.180 "block_size": 512, 00:14:25.180 "num_blocks": 262144, 00:14:25.180 "uuid": "db5de4b8-1cb0-49f7-8690-5924369f4c12", 00:14:25.180 "assigned_rate_limits": { 00:14:25.180 "rw_ios_per_sec": 0, 00:14:25.180 "rw_mbytes_per_sec": 0, 00:14:25.181 "r_mbytes_per_sec": 0, 00:14:25.181 "w_mbytes_per_sec": 0 00:14:25.181 }, 00:14:25.181 "claimed": false, 00:14:25.181 "zoned": false, 00:14:25.181 "supported_io_types": { 00:14:25.181 "read": true, 00:14:25.181 "write": true, 00:14:25.181 "unmap": true, 00:14:25.181 "flush": true, 00:14:25.181 "reset": true, 00:14:25.181 "nvme_admin": false, 00:14:25.181 "nvme_io": false, 00:14:25.181 "nvme_io_md": false, 00:14:25.181 "write_zeroes": true, 00:14:25.181 "zcopy": true, 00:14:25.181 "get_zone_info": false, 00:14:25.181 "zone_management": false, 00:14:25.181 "zone_append": false, 00:14:25.181 "compare": false, 00:14:25.181 "compare_and_write": false, 00:14:25.181 "abort": true, 00:14:25.181 "seek_hole": false, 00:14:25.181 "seek_data": false, 00:14:25.181 "copy": true, 00:14:25.181 "nvme_iov_md": false 00:14:25.181 }, 00:14:25.181 "memory_domains": [ 00:14:25.181 { 00:14:25.181 "dma_device_id": "system", 00:14:25.181 "dma_device_type": 1 00:14:25.181 }, 00:14:25.181 { 00:14:25.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.181 "dma_device_type": 2 00:14:25.181 } 00:14:25.181 ], 00:14:25.181 "driver_specific": {} 00:14:25.181 } 00:14:25.181 ] 00:14:25.181 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.181 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:14:25.181 18:41:25 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:25.181 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.181 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:25.181 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.181 18:41:25 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # NOT wait 119019 00:14:25.181 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # local es=0 00:14:25.181 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@652 -- # valid_exec_arg wait 119019 00:14:25.181 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@638 -- # local arg=wait 00:14:25.181 18:41:25 blockdev_general.bdev_error -- bdev/blockdev.sh@513 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:25.181 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.181 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # type -t wait 00:14:25.181 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.181 18:41:25 blockdev_general.bdev_error -- common/autotest_common.sh@653 -- # wait 119019 00:14:25.181 Running I/O for 5 seconds... 00:14:25.181 task offset: 30584 on job bdev=EE_Dev_1 fails 00:14:25.181 00:14:25.181 Latency(us) 00:14:25.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.181 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:25.181 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:14:25.181 EE_Dev_1 : 0.00 33639.14 131.40 7645.26 0.00 304.14 118.98 557.84 00:14:25.181 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:25.181 Dev_2 : 0.00 23238.93 90.78 0.00 0.00 473.61 112.15 869.91 00:14:25.181 =================================================================================================================== 00:14:25.181 Total : 56878.07 222.18 7645.26 0.00 396.05 112.15 869.91 00:14:25.181 [2024-07-25 18:41:25.641316] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:25.181 request: 00:14:25.181 { 00:14:25.181 "method": "perform_tests", 00:14:25.181 "req_id": 1 00:14:25.181 } 00:14:25.181 Got JSON-RPC error response 00:14:25.181 response: 00:14:25.181 { 00:14:25.181 "code": -32603, 00:14:25.181 "message": "bdevperf failed with error Operation not permitted" 00:14:25.181 } 00:14:27.713 ************************************ 00:14:27.713 END TEST bdev_error 00:14:27.713 ************************************ 00:14:27.713 18:41:27 blockdev_general.bdev_error -- common/autotest_common.sh@653 -- # es=255 00:14:27.713 18:41:27 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:27.713 18:41:27 blockdev_general.bdev_error -- common/autotest_common.sh@662 -- # es=127 00:14:27.713 18:41:27 blockdev_general.bdev_error -- common/autotest_common.sh@663 -- # case "$es" in 00:14:27.713 18:41:27 blockdev_general.bdev_error -- common/autotest_common.sh@670 -- # es=1 00:14:27.713 18:41:27 blockdev_general.bdev_error -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:27.713 00:14:27.713 real 0m13.503s 00:14:27.713 user 0m13.241s 00:14:27.713 sys 0m1.161s 00:14:27.713 18:41:27 blockdev_general.bdev_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:27.713 18:41:27 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:27.713 18:41:27 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_stat stat_test_suite '' 00:14:27.713 18:41:27 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:27.713 18:41:27 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:27.713 18:41:27 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:27.713 ************************************ 00:14:27.713 START TEST bdev_stat 00:14:27.713 ************************************ 00:14:27.713 18:41:27 blockdev_general.bdev_stat -- common/autotest_common.sh@1125 -- # stat_test_suite '' 00:14:27.713 18:41:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@591 -- # STAT_DEV=Malloc_STAT 00:14:27.713 18:41:27 blockdev_general.bdev_stat -- 
bdev/blockdev.sh@595 -- # STAT_PID=119095 00:14:27.713 18:41:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # echo 'Process Bdev IO statistics testing pid: 119095' 00:14:27.713 Process Bdev IO statistics testing pid: 119095 00:14:27.713 18:41:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:14:27.713 18:41:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@594 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:14:27.713 18:41:27 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # waitforlisten 119095 00:14:27.713 18:41:27 blockdev_general.bdev_stat -- common/autotest_common.sh@831 -- # '[' -z 119095 ']' 00:14:27.713 18:41:27 blockdev_general.bdev_stat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.713 18:41:27 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:27.713 18:41:27 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.713 18:41:27 blockdev_general.bdev_stat -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:27.713 18:41:27 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:27.713 [2024-07-25 18:41:28.055837] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:27.713 [2024-07-25 18:41:28.056257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119095 ] 00:14:27.713 [2024-07-25 18:41:28.251064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:27.973 [2024-07-25 18:41:28.542372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.973 [2024-07-25 18:41:28.542377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.539 18:41:28 blockdev_general.bdev_stat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:28.539 18:41:28 blockdev_general.bdev_stat -- common/autotest_common.sh@864 -- # return 0 00:14:28.539 18:41:28 blockdev_general.bdev_stat -- bdev/blockdev.sh@600 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:14:28.539 18:41:28 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.539 18:41:28 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:28.833 Malloc_STAT 00:14:28.833 18:41:29 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.833 18:41:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # waitforbdev Malloc_STAT 00:14:28.833 18:41:29 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_STAT 00:14:28.833 18:41:29 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:28.833 18:41:29 blockdev_general.bdev_stat -- common/autotest_common.sh@901 -- # local i 00:14:28.833 18:41:29 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:28.833 18:41:29 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:28.833 18:41:29 blockdev_general.bdev_stat -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:28.833 18:41:29 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.833 18:41:29 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:28.833 18:41:29 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.833 18:41:29 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:14:28.833 18:41:29 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.833 18:41:29 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:28.833 [ 00:14:28.833 { 00:14:28.833 "name": "Malloc_STAT", 00:14:28.833 "aliases": [ 00:14:28.833 "4c129b3f-d981-492b-913d-0fa81705a6e1" 00:14:28.833 ], 00:14:28.833 "product_name": "Malloc disk", 00:14:28.833 "block_size": 512, 00:14:28.833 "num_blocks": 262144, 00:14:28.833 "uuid": "4c129b3f-d981-492b-913d-0fa81705a6e1", 00:14:28.833 "assigned_rate_limits": { 00:14:28.833 "rw_ios_per_sec": 0, 00:14:28.833 "rw_mbytes_per_sec": 0, 00:14:28.833 "r_mbytes_per_sec": 0, 00:14:28.833 "w_mbytes_per_sec": 0 00:14:28.833 }, 00:14:28.833 "claimed": false, 00:14:28.833 "zoned": false, 00:14:28.833 "supported_io_types": { 00:14:28.833 "read": true, 00:14:28.833 "write": true, 00:14:28.833 "unmap": true, 00:14:28.833 "flush": true, 00:14:28.833 "reset": true, 00:14:28.833 "nvme_admin": false, 00:14:28.833 "nvme_io": false, 00:14:28.833 "nvme_io_md": false, 00:14:28.833 "write_zeroes": true, 00:14:28.833 "zcopy": true, 00:14:28.833 "get_zone_info": false, 00:14:28.833 "zone_management": false, 00:14:28.833 "zone_append": false, 00:14:28.833 "compare": false, 00:14:28.833 "compare_and_write": false, 00:14:28.833 "abort": true, 00:14:28.833 "seek_hole": false, 00:14:28.833 "seek_data": false, 00:14:28.833 "copy": true, 00:14:28.833 "nvme_iov_md": false 00:14:28.833 }, 00:14:28.833 "memory_domains": [ 00:14:28.833 { 00:14:28.833 "dma_device_id": "system", 00:14:28.833 "dma_device_type": 1 00:14:28.833 }, 00:14:28.833 { 00:14:28.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.833 "dma_device_type": 2 00:14:28.833 } 00:14:28.833 ], 00:14:28.833 "driver_specific": {} 00:14:28.833 } 00:14:28.833 ] 00:14:28.833 18:41:29 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.833 18:41:29 blockdev_general.bdev_stat -- common/autotest_common.sh@907 -- # return 0 00:14:28.833 18:41:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # sleep 2 00:14:28.833 18:41:29 blockdev_general.bdev_stat -- bdev/blockdev.sh@603 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:28.833 Running I/O for 10 seconds... 
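The bdev_stat checks below query iostat three times while I/O is still running: an aggregate snapshot, a per-channel snapshot, then a second aggregate snapshot. Since the per-channel query is issued between the two aggregate reads, the sum of num_read_ops across the channels (228352 here) must land between the first and second aggregate counts (220675 and 242179). A sketch of that consistency check, with an assumed rpc.py client path and the jq paths used in the trace:

# Sketch of the per-channel consistency check driven below (client path assumed).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

io_count1=$($RPC bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
per_channel=$($RPC bdev_get_iostat -b Malloc_STAT -c | jq -r '[.channels[].num_read_ops] | add')
io_count2=$($RPC bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')

# I/O keeps running between the three queries, so the per-channel total must not be
# lower than the first aggregate count nor higher than the second.
if [ "$per_channel" -lt "$io_count1" ] || [ "$per_channel" -gt "$io_count2" ]; then
        echo "per-channel sum $per_channel outside [$io_count1, $io_count2]" >&2
        exit 1
fi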
00:14:30.786 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # stat_function_test Malloc_STAT 00:14:30.786 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@558 -- # local bdev_name=Malloc_STAT 00:14:30.786 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local iostats 00:14:30.786 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local io_count1 00:14:30.786 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count2 00:14:30.786 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local iostats_per_channel 00:14:30.786 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local io_count_per_channel1 00:14:30.786 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel2 00:14:30.786 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel_all=0 00:14:30.786 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:30.786 18:41:31 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.786 18:41:31 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:30.786 18:41:31 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.786 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # iostats='{ 00:14:30.786 "tick_rate": 2100000000, 00:14:30.786 "ticks": 1677937992042, 00:14:30.786 "bdevs": [ 00:14:30.786 { 00:14:30.786 "name": "Malloc_STAT", 00:14:30.786 "bytes_read": 903909888, 00:14:30.786 "num_read_ops": 220675, 00:14:30.786 "bytes_written": 0, 00:14:30.786 "num_write_ops": 0, 00:14:30.786 "bytes_unmapped": 0, 00:14:30.786 "num_unmap_ops": 0, 00:14:30.786 "bytes_copied": 0, 00:14:30.786 "num_copy_ops": 0, 00:14:30.786 "read_latency_ticks": 2035352253570, 00:14:30.786 "max_read_latency_ticks": 12795040, 00:14:30.786 "min_read_latency_ticks": 297098, 00:14:30.786 "write_latency_ticks": 0, 00:14:30.786 "max_write_latency_ticks": 0, 00:14:30.786 "min_write_latency_ticks": 0, 00:14:30.786 "unmap_latency_ticks": 0, 00:14:30.786 "max_unmap_latency_ticks": 0, 00:14:30.786 "min_unmap_latency_ticks": 0, 00:14:30.786 "copy_latency_ticks": 0, 00:14:30.786 "max_copy_latency_ticks": 0, 00:14:30.786 "min_copy_latency_ticks": 0, 00:14:30.786 "io_error": {} 00:14:30.786 } 00:14:30.786 ] 00:14:30.786 }' 00:14:30.786 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # jq -r '.bdevs[0].num_read_ops' 00:14:30.786 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # io_count1=220675 00:14:30.786 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:14:30.786 18:41:31 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.786 18:41:31 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:30.786 18:41:31 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.786 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # iostats_per_channel='{ 00:14:30.786 "tick_rate": 2100000000, 00:14:30.786 "ticks": 1678076402342, 00:14:30.786 "name": "Malloc_STAT", 00:14:30.786 "channels": [ 00:14:30.786 { 00:14:30.786 "thread_id": 2, 00:14:30.786 "bytes_read": 465567744, 00:14:30.786 "num_read_ops": 113664, 00:14:30.786 "bytes_written": 0, 00:14:30.786 "num_write_ops": 0, 00:14:30.786 "bytes_unmapped": 0, 00:14:30.786 "num_unmap_ops": 0, 
00:14:30.786 "bytes_copied": 0, 00:14:30.786 "num_copy_ops": 0, 00:14:30.786 "read_latency_ticks": 1052752736816, 00:14:30.786 "max_read_latency_ticks": 12992492, 00:14:30.786 "min_read_latency_ticks": 6691932, 00:14:30.786 "write_latency_ticks": 0, 00:14:30.786 "max_write_latency_ticks": 0, 00:14:30.786 "min_write_latency_ticks": 0, 00:14:30.786 "unmap_latency_ticks": 0, 00:14:30.786 "max_unmap_latency_ticks": 0, 00:14:30.786 "min_unmap_latency_ticks": 0, 00:14:30.786 "copy_latency_ticks": 0, 00:14:30.786 "max_copy_latency_ticks": 0, 00:14:30.786 "min_copy_latency_ticks": 0 00:14:30.786 }, 00:14:30.786 { 00:14:30.786 "thread_id": 3, 00:14:30.786 "bytes_read": 469762048, 00:14:30.786 "num_read_ops": 114688, 00:14:30.786 "bytes_written": 0, 00:14:30.786 "num_write_ops": 0, 00:14:30.786 "bytes_unmapped": 0, 00:14:30.786 "num_unmap_ops": 0, 00:14:30.786 "bytes_copied": 0, 00:14:30.786 "num_copy_ops": 0, 00:14:30.786 "read_latency_ticks": 1054667197814, 00:14:30.786 "max_read_latency_ticks": 11746580, 00:14:30.786 "min_read_latency_ticks": 5836216, 00:14:30.786 "write_latency_ticks": 0, 00:14:30.786 "max_write_latency_ticks": 0, 00:14:30.786 "min_write_latency_ticks": 0, 00:14:30.786 "unmap_latency_ticks": 0, 00:14:30.786 "max_unmap_latency_ticks": 0, 00:14:30.786 "min_unmap_latency_ticks": 0, 00:14:30.786 "copy_latency_ticks": 0, 00:14:30.786 "max_copy_latency_ticks": 0, 00:14:30.786 "min_copy_latency_ticks": 0 00:14:30.786 } 00:14:30.786 ] 00:14:30.786 }' 00:14:30.786 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # jq -r '.channels[0].num_read_ops' 00:14:31.045 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # io_count_per_channel1=113664 00:14:31.045 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel_all=113664 00:14:31.045 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # jq -r '.channels[1].num_read_ops' 00:14:31.045 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel2=114688 00:14:31.045 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel_all=228352 00:14:31.045 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:31.045 18:41:31 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.045 18:41:31 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:31.045 18:41:31 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.045 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # iostats='{ 00:14:31.045 "tick_rate": 2100000000, 00:14:31.045 "ticks": 1678332624182, 00:14:31.045 "bdevs": [ 00:14:31.045 { 00:14:31.045 "name": "Malloc_STAT", 00:14:31.045 "bytes_read": 991990272, 00:14:31.045 "num_read_ops": 242179, 00:14:31.045 "bytes_written": 0, 00:14:31.045 "num_write_ops": 0, 00:14:31.045 "bytes_unmapped": 0, 00:14:31.045 "num_unmap_ops": 0, 00:14:31.045 "bytes_copied": 0, 00:14:31.045 "num_copy_ops": 0, 00:14:31.045 "read_latency_ticks": 2238524149010, 00:14:31.045 "max_read_latency_ticks": 12992492, 00:14:31.045 "min_read_latency_ticks": 297098, 00:14:31.045 "write_latency_ticks": 0, 00:14:31.045 "max_write_latency_ticks": 0, 00:14:31.045 "min_write_latency_ticks": 0, 00:14:31.045 "unmap_latency_ticks": 0, 00:14:31.045 "max_unmap_latency_ticks": 0, 00:14:31.045 "min_unmap_latency_ticks": 0, 00:14:31.045 "copy_latency_ticks": 0, 00:14:31.045 "max_copy_latency_ticks": 0, 00:14:31.045 
"min_copy_latency_ticks": 0, 00:14:31.045 "io_error": {} 00:14:31.045 } 00:14:31.045 ] 00:14:31.045 }' 00:14:31.045 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # jq -r '.bdevs[0].num_read_ops' 00:14:31.045 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # io_count2=242179 00:14:31.045 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 228352 -lt 220675 ']' 00:14:31.045 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 228352 -gt 242179 ']' 00:14:31.045 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@607 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:14:31.045 18:41:31 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.045 18:41:31 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:31.045 00:14:31.045 Latency(us) 00:14:31.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.045 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:31.045 Malloc_STAT : 2.15 57611.84 225.05 0.00 0.00 4432.77 1201.49 6647.22 00:14:31.045 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:31.045 Malloc_STAT : 2.15 58433.69 228.26 0.00 0.00 4371.09 869.91 5617.37 00:14:31.045 =================================================================================================================== 00:14:31.045 Total : 116045.53 453.30 0.00 0.00 4401.71 869.91 6647.22 00:14:31.303 0 00:14:31.303 18:41:31 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.304 18:41:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # killprocess 119095 00:14:31.304 18:41:31 blockdev_general.bdev_stat -- common/autotest_common.sh@950 -- # '[' -z 119095 ']' 00:14:31.304 18:41:31 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # kill -0 119095 00:14:31.304 18:41:31 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # uname 00:14:31.304 18:41:31 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:31.304 18:41:31 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 119095 00:14:31.304 18:41:31 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:31.304 18:41:31 blockdev_general.bdev_stat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:31.304 18:41:31 blockdev_general.bdev_stat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 119095' 00:14:31.304 killing process with pid 119095 00:14:31.304 18:41:31 blockdev_general.bdev_stat -- common/autotest_common.sh@969 -- # kill 119095 00:14:31.304 Received shutdown signal, test time was about 2.333594 seconds 00:14:31.304 00:14:31.304 Latency(us) 00:14:31.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.304 =================================================================================================================== 00:14:31.304 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:31.304 18:41:31 blockdev_general.bdev_stat -- common/autotest_common.sh@974 -- # wait 119095 00:14:33.206 ************************************ 00:14:33.206 END TEST bdev_stat 00:14:33.206 ************************************ 00:14:33.206 18:41:33 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # trap - SIGINT SIGTERM EXIT 00:14:33.206 00:14:33.206 real 0m5.423s 00:14:33.206 user 0m9.886s 00:14:33.206 sys 0m0.626s 00:14:33.206 
18:41:33 blockdev_general.bdev_stat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:33.206 18:41:33 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:33.206 18:41:33 blockdev_general -- bdev/blockdev.sh@793 -- # [[ bdev == gpt ]] 00:14:33.206 18:41:33 blockdev_general -- bdev/blockdev.sh@797 -- # [[ bdev == crypto_sw ]] 00:14:33.206 18:41:33 blockdev_general -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:33.206 18:41:33 blockdev_general -- bdev/blockdev.sh@810 -- # cleanup 00:14:33.207 18:41:33 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:33.207 18:41:33 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:33.207 18:41:33 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:14:33.207 18:41:33 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:14:33.207 18:41:33 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:14:33.207 18:41:33 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:14:33.207 00:14:33.207 real 2m35.729s 00:14:33.207 user 6m1.182s 00:14:33.207 sys 0m26.160s 00:14:33.207 ************************************ 00:14:33.207 END TEST blockdev_general 00:14:33.207 ************************************ 00:14:33.207 18:41:33 blockdev_general -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:33.207 18:41:33 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:33.207 18:41:33 -- spdk/autotest.sh@194 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:33.207 18:41:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:33.207 18:41:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:33.207 18:41:33 -- common/autotest_common.sh@10 -- # set +x 00:14:33.207 ************************************ 00:14:33.207 START TEST bdev_raid 00:14:33.207 ************************************ 00:14:33.207 18:41:33 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:33.207 * Looking for test storage... 
00:14:33.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:33.207 18:41:33 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:33.207 18:41:33 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:14:33.207 18:41:33 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:14:33.207 18:41:33 bdev_raid -- bdev/bdev_raid.sh@927 -- # mkdir -p /raidtest 00:14:33.207 18:41:33 bdev_raid -- bdev/bdev_raid.sh@928 -- # trap 'cleanup; exit 1' EXIT 00:14:33.207 18:41:33 bdev_raid -- bdev/bdev_raid.sh@930 -- # base_blocklen=512 00:14:33.207 18:41:33 bdev_raid -- bdev/bdev_raid.sh@932 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:14:33.207 18:41:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:33.207 18:41:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:33.207 18:41:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:33.207 ************************************ 00:14:33.207 START TEST raid0_resize_superblock_test 00:14:33.207 ************************************ 00:14:33.207 18:41:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:14:33.207 18:41:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@868 -- # local raid_level=0 00:14:33.207 18:41:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # raid_pid=119254 00:14:33.207 18:41:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:33.207 18:41:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@872 -- # echo 'Process raid pid: 119254' 00:14:33.207 Process raid pid: 119254 00:14:33.207 18:41:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@873 -- # waitforlisten 119254 /var/tmp/spdk-raid.sock 00:14:33.207 18:41:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 119254 ']' 00:14:33.207 18:41:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:33.207 18:41:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:33.207 18:41:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:33.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:33.207 18:41:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:33.207 18:41:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.207 [2024-07-25 18:41:33.734713] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:33.207 [2024-07-25 18:41:33.735184] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.464 [2024-07-25 18:41:33.918022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.723 [2024-07-25 18:41:34.116556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.981 [2024-07-25 18:41:34.307092] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.239 18:41:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:34.239 18:41:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:34.239 18:41:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b malloc0 512 512 00:14:34.806 malloc0 00:14:34.806 18:41:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@877 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:14:35.064 [2024-07-25 18:41:35.579863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:14:35.064 [2024-07-25 18:41:35.580191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.064 [2024-07-25 18:41:35.580275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:14:35.064 [2024-07-25 18:41:35.580386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.064 [2024-07-25 18:41:35.583075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.064 [2024-07-25 18:41:35.583251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:14:35.064 pt0 00:14:35.064 18:41:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@878 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create_lvstore pt0 lvs0 00:14:35.632 a7dc69af-5e7f-4f4c-a3d0-5bc735ac39f8 00:14:35.632 18:41:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol0 64 00:14:35.891 39c75af1-076e-4333-bfd0-fd9043993c8b 00:14:35.891 18:41:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol1 64 00:14:35.891 8162e879-c535-464e-b662-077a21c49891 00:14:35.891 18:41:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@883 -- # case $raid_level in 00:14:35.891 18:41:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@884 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 0 -z 64 -b 'lvs0/lvol0 lvs0/lvol1' -s 00:14:36.150 [2024-07-25 18:41:36.570680] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 39c75af1-076e-4333-bfd0-fd9043993c8b is claimed 00:14:36.150 [2024-07-25 18:41:36.571071] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8162e879-c535-464e-b662-077a21c49891 is claimed 00:14:36.150 [2024-07-25 18:41:36.571258] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:14:36.150 [2024-07-25 18:41:36.571302] 
bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:14:36.150 [2024-07-25 18:41:36.571554] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:14:36.150 [2024-07-25 18:41:36.572023] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:14:36.150 [2024-07-25 18:41:36.572132] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000012a00 00:14:36.150 [2024-07-25 18:41:36.572407] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.150 18:41:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:14:36.150 18:41:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:14:36.408 18:41:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 64 == 64 )) 00:14:36.408 18:41:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:14:36.408 18:41:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:14:36.666 18:41:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 64 == 64 )) 00:14:36.666 18:41:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:14:36.666 18:41:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:36.667 18:41:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:14:36.667 18:41:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:14:36.667 [2024-07-25 18:41:37.230941] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:36.925 18:41:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:14:36.925 18:41:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:14:36.925 18:41:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 245760 == 245760 )) 00:14:36.925 18:41:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol0 100 00:14:36.925 [2024-07-25 18:41:37.483136] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:36.925 [2024-07-25 18:41:37.483333] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '39c75af1-076e-4333-bfd0-fd9043993c8b' was resized: old size 131072, new size 204800 00:14:37.184 18:41:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol1 100 00:14:37.184 [2024-07-25 18:41:37.727024] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:37.184 [2024-07-25 18:41:37.727221] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '8162e879-c535-464e-b662-077a21c49891' was resized: old size 131072, new size 204800 00:14:37.184 [2024-07-25 18:41:37.727451] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 
245760 to 393216 00:14:37.184 18:41:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:14:37.184 18:41:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # jq '.[].num_blocks' 00:14:37.441 18:41:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # (( 100 == 100 )) 00:14:37.441 18:41:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:14:37.441 18:41:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # jq '.[].num_blocks' 00:14:37.699 18:41:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # (( 100 == 100 )) 00:14:37.699 18:41:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:14:37.699 18:41:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # jq '.[].num_blocks' 00:14:37.699 18:41:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:14:37.699 18:41:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:37.956 [2024-07-25 18:41:38.283084] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.956 18:41:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:14:37.956 18:41:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:14:37.956 18:41:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # (( 393216 == 393216 )) 00:14:37.956 18:41:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@912 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt0 00:14:37.956 [2024-07-25 18:41:38.462982] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:14:37.956 [2024-07-25 18:41:38.463278] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:14:37.956 [2024-07-25 18:41:38.463321] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:37.956 [2024-07-25 18:41:38.463390] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:14:37.956 [2024-07-25 18:41:38.463547] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.956 [2024-07-25 18:41:38.463666] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:37.956 [2024-07-25 18:41:38.463701] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Raid, state offline 00:14:37.956 18:41:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:14:38.214 [2024-07-25 18:41:38.638963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:14:38.214 [2024-07-25 18:41:38.639195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.214 [2024-07-25 18:41:38.639268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:14:38.214 [2024-07-25 18:41:38.639387] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.214 [2024-07-25 18:41:38.642105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.214 [2024-07-25 18:41:38.642265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:14:38.214 pt0 00:14:38.214 [2024-07-25 18:41:38.644223] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 39c75af1-076e-4333-bfd0-fd9043993c8b 00:14:38.214 [2024-07-25 18:41:38.644393] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 39c75af1-076e-4333-bfd0-fd9043993c8b is claimed 00:14:38.214 [2024-07-25 18:41:38.644549] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 8162e879-c535-464e-b662-077a21c49891 00:14:38.214 [2024-07-25 18:41:38.644652] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8162e879-c535-464e-b662-077a21c49891 is claimed 00:14:38.214 [2024-07-25 18:41:38.644806] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 8162e879-c535-464e-b662-077a21c49891 (2) smaller than existing raid bdev Raid (3) 00:14:38.214 [2024-07-25 18:41:38.644958] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012d80 00:14:38.214 [2024-07-25 18:41:38.644992] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:14:38.214 [2024-07-25 18:41:38.645089] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:38.214 [2024-07-25 18:41:38.645514] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012d80 00:14:38.214 [2024-07-25 18:41:38.645615] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000012d80 00:14:38.214 [2024-07-25 18:41:38.645818] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.214 18:41:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:14:38.214 18:41:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:38.214 18:41:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:14:38.214 18:41:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # jq '.[].num_blocks' 00:14:38.481 [2024-07-25 18:41:38.819258] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:38.481 18:41:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:14:38.481 18:41:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:14:38.481 18:41:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # (( 393216 == 393216 )) 00:14:38.481 18:41:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@922 -- # killprocess 119254 00:14:38.481 18:41:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 119254 ']' 00:14:38.481 18:41:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 119254 00:14:38.481 18:41:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:38.481 18:41:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:38.481 18:41:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 119254 00:14:38.481 18:41:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:38.481 18:41:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:38.481 18:41:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 119254' 00:14:38.481 killing process with pid 119254 00:14:38.481 18:41:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 119254 00:14:38.481 [2024-07-25 18:41:38.867112] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:38.481 18:41:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 119254 00:14:38.481 [2024-07-25 18:41:38.867266] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:38.481 [2024-07-25 18:41:38.867316] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:38.481 [2024-07-25 18:41:38.867325] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Raid, state offline 00:14:39.862 [2024-07-25 18:41:40.134757] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:40.799 18:41:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@924 -- # return 0 00:14:40.799 00:14:40.799 real 0m7.673s 00:14:40.799 user 0m10.451s 00:14:40.799 sys 0m1.259s 00:14:40.799 18:41:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:40.799 ************************************ 00:14:40.799 END TEST raid0_resize_superblock_test 00:14:40.799 ************************************ 00:14:40.799 18:41:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.058 18:41:41 bdev_raid -- bdev/bdev_raid.sh@933 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:14:41.058 18:41:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:41.058 18:41:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:41.058 18:41:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:41.058 ************************************ 00:14:41.058 START TEST raid1_resize_superblock_test 00:14:41.058 ************************************ 00:14:41.058 18:41:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:14:41.058 18:41:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@868 -- # local raid_level=1 00:14:41.058 18:41:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # raid_pid=119404 00:14:41.058 18:41:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@872 -- # echo 'Process raid pid: 119404' 00:14:41.058 Process raid pid: 119404 00:14:41.058 18:41:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@873 -- # waitforlisten 119404 /var/tmp/spdk-raid.sock 00:14:41.058 18:41:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:41.058 18:41:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 119404 ']' 00:14:41.058 18:41:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:41.058 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:41.058 18:41:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:41.058 18:41:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:41.058 18:41:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:41.058 18:41:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.058 [2024-07-25 18:41:41.485743] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:41.058 [2024-07-25 18:41:41.486248] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.316 [2024-07-25 18:41:41.673204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.599 [2024-07-25 18:41:41.916485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.599 [2024-07-25 18:41:42.109907] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.858 18:41:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:41.858 18:41:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:41.858 18:41:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b malloc0 512 512 00:14:42.833 malloc0 00:14:42.833 18:41:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@877 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:14:42.833 [2024-07-25 18:41:43.242986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:14:42.833 [2024-07-25 18:41:43.243266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.833 [2024-07-25 18:41:43.243348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:14:42.833 [2024-07-25 18:41:43.243452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.833 [2024-07-25 18:41:43.246298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.833 [2024-07-25 18:41:43.246483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:14:42.833 pt0 00:14:42.833 18:41:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@878 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create_lvstore pt0 lvs0 00:14:43.091 2c508dcf-04c3-4c06-ae69-bba617e8f53c 00:14:43.349 18:41:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol0 64 00:14:43.349 90013cc5-1825-4aa9-a8a5-172694aab259 00:14:43.606 18:41:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol1 64 00:14:43.606 5fa6cc00-520b-47dd-8dda-22b9f2e31258 00:14:43.606 18:41:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@883 -- # case $raid_level in 
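Condensed for reference, the bdev stack the superblock-resize tests assemble on /var/tmp/spdk-raid.sock is malloc0 -> pt0 (passthru) -> lvs0 (lvstore) -> lvol0/lvol1 -> Raid. The sketch below is a hand-run approximation, not the test script: it assumes bdev_svc is already listening on that socket, and the rpc() wrapper is an illustrative helper. The bdev_raid_create call with -s (on-disk superblock) is the same step the raid1 test issues next in the trace.

  #!/usr/bin/env bash
  # Sketch of the stack built by raid_resize_superblock_test above.
  # Assumes bdev_svc is already listening on /var/tmp/spdk-raid.sock.
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

  rpc bdev_malloc_create -b malloc0 512 512          # 512 MiB backing bdev, 512 B blocks
  rpc bdev_passthru_create -b malloc0 -p pt0         # passthru layer the lvstore sits on
  rpc bdev_lvol_create_lvstore pt0 lvs0
  rpc bdev_lvol_create -l lvs0 lvol0 64              # two 64 MiB logical volumes (131072 blocks each)
  rpc bdev_lvol_create -l lvs0 lvol1 64

  # -s writes an on-disk raid superblock on the base bdevs; the test later
  # relies on it when the bdevs are re-examined ("raid superblock found on bdev").
  rpc bdev_raid_create -n Raid -r 1 -b 'lvs0/lvol0 lvs0/lvol1' -s

  # Growing the lvols propagates to the raid bdev's block count.
  rpc bdev_lvol_resize lvs0/lvol0 100
  rpc bdev_lvol_resize lvs0/lvol1 100
  rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'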
00:14:43.606 18:41:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 1 -b 'lvs0/lvol0 lvs0/lvol1' -s 00:14:43.864 [2024-07-25 18:41:44.275174] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 90013cc5-1825-4aa9-a8a5-172694aab259 is claimed 00:14:43.864 [2024-07-25 18:41:44.275522] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5fa6cc00-520b-47dd-8dda-22b9f2e31258 is claimed 00:14:43.864 [2024-07-25 18:41:44.275728] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:14:43.864 [2024-07-25 18:41:44.275768] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:14:43.864 [2024-07-25 18:41:44.276040] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:14:43.864 [2024-07-25 18:41:44.276520] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:14:43.864 [2024-07-25 18:41:44.276632] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000012a00 00:14:43.864 [2024-07-25 18:41:44.276917] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.864 18:41:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:14:43.864 18:41:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:14:44.123 18:41:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 64 == 64 )) 00:14:44.123 18:41:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:14:44.123 18:41:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:14:44.381 18:41:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 64 == 64 )) 00:14:44.381 18:41:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:14:44.381 18:41:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:14:44.381 18:41:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:14:44.381 18:41:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:44.639 [2024-07-25 18:41:44.975445] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:44.639 18:41:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:14:44.639 18:41:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:14:44.639 18:41:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 122880 == 122880 )) 00:14:44.639 18:41:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol0 100 00:14:44.639 [2024-07-25 18:41:45.163571] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:44.639 [2024-07-25 18:41:45.163762] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '90013cc5-1825-4aa9-a8a5-172694aab259' was 
resized: old size 131072, new size 204800 00:14:44.639 18:41:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol1 100 00:14:44.898 [2024-07-25 18:41:45.347556] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:44.898 [2024-07-25 18:41:45.347733] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '5fa6cc00-520b-47dd-8dda-22b9f2e31258' was resized: old size 131072, new size 204800 00:14:44.898 [2024-07-25 18:41:45.347942] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:14:44.898 18:41:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:14:44.898 18:41:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # jq '.[].num_blocks' 00:14:45.157 18:41:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # (( 100 == 100 )) 00:14:45.157 18:41:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:14:45.157 18:41:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # jq '.[].num_blocks' 00:14:45.416 18:41:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # (( 100 == 100 )) 00:14:45.416 18:41:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:14:45.416 18:41:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:45.416 18:41:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:14:45.416 18:41:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # jq '.[].num_blocks' 00:14:45.416 [2024-07-25 18:41:45.939681] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:45.416 18:41:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:14:45.416 18:41:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:14:45.416 18:41:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # (( 196608 == 196608 )) 00:14:45.416 18:41:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@912 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt0 00:14:45.675 [2024-07-25 18:41:46.171501] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:14:45.675 [2024-07-25 18:41:46.171785] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:14:45.675 [2024-07-25 18:41:46.171845] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:14:45.675 [2024-07-25 18:41:46.172095] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:45.675 [2024-07-25 18:41:46.172427] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.675 [2024-07-25 18:41:46.172599] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:45.675 [2024-07-25 18:41:46.172691] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000012a00 name Raid, state offline 00:14:45.675 18:41:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:14:45.934 [2024-07-25 18:41:46.439482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:14:45.934 [2024-07-25 18:41:46.439738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.934 [2024-07-25 18:41:46.439815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:14:45.934 [2024-07-25 18:41:46.439931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.934 [2024-07-25 18:41:46.442692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.934 [2024-07-25 18:41:46.442873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:14:45.934 pt0 00:14:45.934 [2024-07-25 18:41:46.444751] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 90013cc5-1825-4aa9-a8a5-172694aab259 00:14:45.934 [2024-07-25 18:41:46.444926] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 90013cc5-1825-4aa9-a8a5-172694aab259 is claimed 00:14:45.934 [2024-07-25 18:41:46.445160] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 5fa6cc00-520b-47dd-8dda-22b9f2e31258 00:14:45.934 [2024-07-25 18:41:46.445284] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5fa6cc00-520b-47dd-8dda-22b9f2e31258 is claimed 00:14:45.934 [2024-07-25 18:41:46.445463] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 5fa6cc00-520b-47dd-8dda-22b9f2e31258 (2) smaller than existing raid bdev Raid (3) 00:14:45.934 [2024-07-25 18:41:46.445584] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012d80 00:14:45.934 [2024-07-25 18:41:46.445620] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:45.934 [2024-07-25 18:41:46.445720] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:45.934 [2024-07-25 18:41:46.446149] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012d80 00:14:45.934 [2024-07-25 18:41:46.446248] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000012d80 00:14:45.934 [2024-07-25 18:41:46.446477] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.934 18:41:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:14:45.934 18:41:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:45.934 18:41:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:14:45.934 18:41:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # jq '.[].num_blocks' 00:14:46.193 [2024-07-25 18:41:46.615753] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:46.193 18:41:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:14:46.193 18:41:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:14:46.193 18:41:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # 
(( 196608 == 196608 )) 00:14:46.193 18:41:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@922 -- # killprocess 119404 00:14:46.193 18:41:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 119404 ']' 00:14:46.193 18:41:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 119404 00:14:46.193 18:41:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:46.193 18:41:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:46.193 18:41:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 119404 00:14:46.193 killing process with pid 119404 00:14:46.193 18:41:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:46.193 18:41:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:46.193 18:41:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 119404' 00:14:46.193 18:41:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 119404 00:14:46.193 [2024-07-25 18:41:46.660578] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:46.193 18:41:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 119404 00:14:46.193 [2024-07-25 18:41:46.660651] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:46.193 [2024-07-25 18:41:46.660711] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:46.193 [2024-07-25 18:41:46.660720] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Raid, state offline 00:14:47.569 [2024-07-25 18:41:47.929737] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:48.947 ************************************ 00:14:48.947 END TEST raid1_resize_superblock_test 00:14:48.947 ************************************ 00:14:48.947 18:41:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@924 -- # return 0 00:14:48.947 00:14:48.947 real 0m7.721s 00:14:48.947 user 0m10.525s 00:14:48.947 sys 0m1.337s 00:14:48.947 18:41:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:48.947 18:41:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.947 18:41:49 bdev_raid -- bdev/bdev_raid.sh@935 -- # uname -s 00:14:48.947 18:41:49 bdev_raid -- bdev/bdev_raid.sh@935 -- # '[' Linux = Linux ']' 00:14:48.947 18:41:49 bdev_raid -- bdev/bdev_raid.sh@935 -- # modprobe -n nbd 00:14:48.947 18:41:49 bdev_raid -- bdev/bdev_raid.sh@936 -- # has_nbd=true 00:14:48.947 18:41:49 bdev_raid -- bdev/bdev_raid.sh@937 -- # modprobe nbd 00:14:48.947 18:41:49 bdev_raid -- bdev/bdev_raid.sh@938 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:14:48.947 18:41:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:48.947 18:41:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:48.947 18:41:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:48.947 ************************************ 00:14:48.947 START TEST raid_function_test_raid0 00:14:48.947 ************************************ 00:14:48.947 18:41:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 
-- # raid_function_test raid0 00:14:48.947 18:41:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:14:48.947 18:41:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:14:48.947 18:41:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:14:48.947 18:41:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=119559 00:14:48.947 18:41:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 119559' 00:14:48.947 Process raid pid: 119559 00:14:48.947 18:41:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 119559 /var/tmp/spdk-raid.sock 00:14:48.947 18:41:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:48.947 18:41:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 119559 ']' 00:14:48.947 18:41:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:48.947 18:41:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:48.947 18:41:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:48.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:48.947 18:41:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:48.947 18:41:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:14:48.947 [2024-07-25 18:41:49.303198] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:48.947 [2024-07-25 18:41:49.303677] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.947 [2024-07-25 18:41:49.496474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.206 [2024-07-25 18:41:49.752990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.464 [2024-07-25 18:41:49.944422] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:49.723 18:41:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:49.723 18:41:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:14:49.723 18:41:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:14:49.723 18:41:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:14:49.723 18:41:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:49.723 18:41:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:14:49.723 18:41:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:49.981 [2024-07-25 18:41:50.522232] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:49.982 [2024-07-25 18:41:50.524666] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:49.982 [2024-07-25 18:41:50.524853] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:14:49.982 [2024-07-25 18:41:50.524939] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:49.982 [2024-07-25 18:41:50.525124] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:14:49.982 [2024-07-25 18:41:50.525683] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:14:49.982 [2024-07-25 18:41:50.525809] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000012a00 00:14:49.982 [2024-07-25 18:41:50.526107] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.982 Base_1 00:14:49.982 Base_2 00:14:49.982 18:41:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:49.982 18:41:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:14:49.982 18:41:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:50.240 18:41:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:14:50.240 18:41:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:14:50.240 18:41:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:50.240 18:41:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:50.240 18:41:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:50.240 18:41:50 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:50.240 18:41:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:50.240 18:41:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:50.240 18:41:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:14:50.240 18:41:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:50.240 18:41:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:50.240 18:41:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:50.499 [2024-07-25 18:41:50.966345] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:14:50.499 /dev/nbd0 00:14:50.499 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:50.499 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:50.499 18:41:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:50.499 18:41:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:14:50.499 18:41:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:50.499 18:41:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:50.499 18:41:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:50.499 18:41:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:14:50.499 18:41:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:50.499 18:41:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:50.499 18:41:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:50.499 1+0 records in 00:14:50.499 1+0 records out 00:14:50.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464799 s, 8.8 MB/s 00:14:50.499 18:41:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.499 18:41:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:14:50.500 18:41:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.500 18:41:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:50.500 18:41:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:14:50.500 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:50.500 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:50.500 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:50.500 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:50.500 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_get_disks 00:14:50.758 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:50.758 { 00:14:50.758 "nbd_device": "/dev/nbd0", 00:14:50.758 "bdev_name": "raid" 00:14:50.758 } 00:14:50.758 ]' 00:14:50.758 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:50.758 { 00:14:50.758 "nbd_device": "/dev/nbd0", 00:14:50.758 "bdev_name": "raid" 00:14:50.758 } 00:14:50.758 ]' 00:14:50.758 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:14:51.017 4096+0 records in 00:14:51.017 4096+0 records out 00:14:51.017 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0309485 s, 67.8 MB/s 00:14:51.017 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd 
if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:51.276 4096+0 records in 00:14:51.276 4096+0 records out 00:14:51.276 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.252478 s, 8.3 MB/s 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:51.276 128+0 records in 00:14:51.276 128+0 records out 00:14:51.276 65536 bytes (66 kB, 64 KiB) copied, 0.00123357 s, 53.1 MB/s 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:51.276 2035+0 records in 00:14:51.276 2035+0 records out 00:14:51.276 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.012396 s, 84.1 MB/s 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:51.276 456+0 records in 00:14:51.276 456+0 records out 00:14:51.276 233472 bytes (233 kB, 228 KiB) copied, 0.00269846 s, 86.5 MB/s 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev 
--flushbufs /dev/nbd0 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:51.276 18:41:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:51.535 [2024-07-25 18:41:52.088643] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.535 18:41:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:51.535 18:41:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:51.535 18:41:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:51.535 18:41:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:51.535 18:41:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:51.535 18:41:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:51.535 18:41:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:14:51.535 18:41:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:14:51.535 18:41:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:51.535 18:41:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:51.535 18:41:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:52.102 18:41:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:52.102 18:41:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:52.102 18:41:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:52.102 18:41:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:52.102 18:41:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:52.102 18:41:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:52.102 18:41:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:14:52.102 18:41:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:14:52.102 18:41:52 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@66 -- # echo 0 00:14:52.102 18:41:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:14:52.102 18:41:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:14:52.102 18:41:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 119559 00:14:52.102 18:41:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 119559 ']' 00:14:52.102 18:41:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 119559 00:14:52.102 18:41:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:14:52.102 18:41:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:52.102 18:41:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 119559 00:14:52.102 killing process with pid 119559 00:14:52.102 18:41:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:52.102 18:41:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:52.102 18:41:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 119559' 00:14:52.102 18:41:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 119559 00:14:52.102 [2024-07-25 18:41:52.473353] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:52.102 18:41:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 119559 00:14:52.102 [2024-07-25 18:41:52.473486] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.102 [2024-07-25 18:41:52.473547] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:52.103 [2024-07-25 18:41:52.473556] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid, state offline 00:14:52.103 [2024-07-25 18:41:52.644649] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:53.480 ************************************ 00:14:53.480 END TEST raid_function_test_raid0 00:14:53.480 ************************************ 00:14:53.480 18:41:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:14:53.480 00:14:53.480 real 0m4.638s 00:14:53.480 user 0m5.606s 00:14:53.480 sys 0m1.291s 00:14:53.480 18:41:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:53.480 18:41:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:14:53.480 18:41:53 bdev_raid -- bdev/bdev_raid.sh@939 -- # run_test raid_function_test_concat raid_function_test concat 00:14:53.480 18:41:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:53.480 18:41:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:53.480 18:41:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:53.480 ************************************ 00:14:53.480 START TEST raid_function_test_concat 00:14:53.480 ************************************ 00:14:53.480 18:41:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:14:53.480 18:41:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:14:53.480 18:41:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # 
local nbd=/dev/nbd0 00:14:53.480 18:41:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:14:53.480 18:41:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=119724 00:14:53.480 18:41:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:53.480 Process raid pid: 119724 00:14:53.481 18:41:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 119724' 00:14:53.481 18:41:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 119724 /var/tmp/spdk-raid.sock 00:14:53.481 18:41:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 119724 ']' 00:14:53.481 18:41:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:53.481 18:41:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:53.481 18:41:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:53.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:53.481 18:41:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:53.481 18:41:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:14:53.481 [2024-07-25 18:41:54.019403] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:53.481 [2024-07-25 18:41:54.019874] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.750 [2024-07-25 18:41:54.206139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.048 [2024-07-25 18:41:54.428010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.306 [2024-07-25 18:41:54.624820] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.564 18:41:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:54.564 18:41:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:14:54.564 18:41:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:14:54.564 18:41:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:14:54.564 18:41:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:54.564 18:41:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:14:54.564 18:41:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:54.823 [2024-07-25 18:41:55.300354] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:54.823 [2024-07-25 18:41:55.302760] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:54.823 [2024-07-25 18:41:55.302941] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:14:54.823 [2024-07-25 
18:41:55.303023] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:54.823 [2024-07-25 18:41:55.303237] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:14:54.823 [2024-07-25 18:41:55.303740] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:14:54.823 [2024-07-25 18:41:55.303848] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000012a00 00:14:54.823 [2024-07-25 18:41:55.304096] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.823 Base_1 00:14:54.823 Base_2 00:14:54.823 18:41:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:54.823 18:41:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:54.823 18:41:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:14:55.081 18:41:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:14:55.081 18:41:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:14:55.081 18:41:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:55.081 18:41:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:55.081 18:41:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:55.081 18:41:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:55.081 18:41:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:55.081 18:41:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:55.081 18:41:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:14:55.081 18:41:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:55.081 18:41:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:55.081 18:41:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:55.340 [2024-07-25 18:41:55.836531] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:14:55.340 /dev/nbd0 00:14:55.340 18:41:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:55.340 18:41:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:55.340 18:41:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:55.340 18:41:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:14:55.340 18:41:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:55.340 18:41:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:55.340 18:41:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:55.340 18:41:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:14:55.340 18:41:55 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:55.340 18:41:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:55.340 18:41:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:55.340 1+0 records in 00:14:55.340 1+0 records out 00:14:55.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104829 s, 3.9 MB/s 00:14:55.340 18:41:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.340 18:41:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:14:55.340 18:41:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.340 18:41:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:55.340 18:41:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:14:55.340 18:41:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:55.340 18:41:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:55.340 18:41:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:55.340 18:41:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:55.599 18:41:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:55.599 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:55.599 { 00:14:55.599 "nbd_device": "/dev/nbd0", 00:14:55.599 "bdev_name": "raid" 00:14:55.599 } 00:14:55.599 ]' 00:14:55.599 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:55.599 { 00:14:55.599 "nbd_device": "/dev/nbd0", 00:14:55.599 "bdev_name": "raid" 00:14:55.599 } 00:14:55.599 ]' 00:14:55.599 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:55.858 18:41:56 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:14:55.858 4096+0 records in 00:14:55.858 4096+0 records out 00:14:55.858 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0332821 s, 63.0 MB/s 00:14:55.858 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:56.116 4096+0 records in 00:14:56.116 4096+0 records out 00:14:56.116 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.236974 s, 8.8 MB/s 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:56.116 128+0 records in 00:14:56.116 128+0 records out 00:14:56.116 65536 bytes (66 kB, 64 KiB) copied, 0.000938237 s, 69.9 MB/s 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:56.116 18:41:56 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:56.116 2035+0 records in 00:14:56.116 2035+0 records out 00:14:56.116 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00970327 s, 107 MB/s 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:56.116 456+0 records in 00:14:56.116 456+0 records out 00:14:56.116 233472 bytes (233 kB, 228 KiB) copied, 0.00241938 s, 96.5 MB/s 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:56.116 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:56.374 [2024-07-25 18:41:56.900453] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.374 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:56.374 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:56.374 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd0 00:14:56.374 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:56.374 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:56.374 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:56.374 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:14:56.374 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:14:56.374 18:41:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:56.374 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:56.374 18:41:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:56.631 18:41:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:56.631 18:41:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:56.631 18:41:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:56.631 18:41:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:56.631 18:41:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:14:56.631 18:41:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:56.889 18:41:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:14:56.889 18:41:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:14:56.889 18:41:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:14:56.889 18:41:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:14:56.889 18:41:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:14:56.889 18:41:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 119724 00:14:56.889 18:41:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 119724 ']' 00:14:56.889 18:41:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 119724 00:14:56.889 18:41:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:14:56.889 18:41:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:56.889 18:41:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 119724 00:14:56.889 18:41:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:56.889 18:41:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:56.889 18:41:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 119724' 00:14:56.889 killing process with pid 119724 00:14:56.889 18:41:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 119724 00:14:56.889 [2024-07-25 18:41:57.247433] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:56.889 18:41:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 119724 00:14:56.889 [2024-07-25 18:41:57.247652] bdev_raid.c: 
487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.889 [2024-07-25 18:41:57.247715] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:56.889 [2024-07-25 18:41:57.247725] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid, state offline 00:14:56.889 [2024-07-25 18:41:57.419565] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:58.260 18:41:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:14:58.260 00:14:58.260 real 0m4.675s 00:14:58.260 user 0m5.710s 00:14:58.260 sys 0m1.302s 00:14:58.260 18:41:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:58.260 18:41:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:14:58.260 ************************************ 00:14:58.260 END TEST raid_function_test_concat 00:14:58.260 ************************************ 00:14:58.260 18:41:58 bdev_raid -- bdev/bdev_raid.sh@942 -- # run_test raid0_resize_test raid_resize_test 0 00:14:58.260 18:41:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:58.260 18:41:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:58.260 18:41:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:58.260 ************************************ 00:14:58.260 START TEST raid0_resize_test 00:14:58.260 ************************************ 00:14:58.260 18:41:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:14:58.260 18:41:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local raid_level=0 00:14:58.260 18:41:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:14:58.260 18:41:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:14:58.260 18:41:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:14:58.260 18:41:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:14:58.260 18:41:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:14:58.260 18:41:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:14:58.260 18:41:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # local expected_size 00:14:58.260 18:41:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # raid_pid=119885 00:14:58.260 18:41:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@358 -- # echo 'Process raid pid: 119885' 00:14:58.260 18:41:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:58.260 Process raid pid: 119885 00:14:58.260 18:41:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # waitforlisten 119885 /var/tmp/spdk-raid.sock 00:14:58.260 18:41:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 119885 ']' 00:14:58.260 18:41:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:58.260 18:41:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:58.260 18:41:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
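The unmap/verify pass traced above for raid_function_test_raid0 (and repeated below for concat) reduces to roughly the following shell sketch; the paths, block counts and unmap offsets are the ones the log shows, while the helper functions and error handling of the real bdev_raid.sh are omitted:

  # Seed a reference file and mirror it onto the exported raid device.
  dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
  dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
  blockdev --flushbufs /dev/nbd0
  cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0

  # For each (offset, length) pair: zero the range in the file, discard the
  # same range on the device, flush, and confirm file and device still match.
  unmap_blk_offs=(0 1028 321)
  unmap_blk_nums=(128 2035 456)
  for i in 0 1 2; do
      dd if=/dev/zero of=/raidtest/raidrandtest bs=512 conv=notrunc \
         seek="${unmap_blk_offs[i]}" count="${unmap_blk_nums[i]}"
      blkdiscard -o $(( unmap_blk_offs[i] * 512 )) -l $(( unmap_blk_nums[i] * 512 )) /dev/nbd0
      blockdev --flushbufs /dev/nbd0
      cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
  done

The byte offsets seen in the trace (0, 526336, 164352) and lengths (65536, 1041920, 233472) are exactly these block offsets and counts multiplied by the 512-byte logical sector size reported by lsblk.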
00:14:58.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:58.260 18:41:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:58.260 18:41:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.260 [2024-07-25 18:41:58.770843] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:58.260 [2024-07-25 18:41:58.771348] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.519 [2024-07-25 18:41:58.957515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.777 [2024-07-25 18:41:59.154963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.777 [2024-07-25 18:41:59.346231] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.341 18:41:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:59.341 18:41:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:14:59.342 18:41:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:14:59.342 Base_1 00:14:59.600 18:41:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:14:59.858 Base_2 00:14:59.858 18:42:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@364 -- # '[' 0 -eq 0 ']' 00:14:59.858 18:42:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:14:59.858 [2024-07-25 18:42:00.364234] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:59.858 [2024-07-25 18:42:00.366686] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:59.858 [2024-07-25 18:42:00.366902] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:14:59.858 [2024-07-25 18:42:00.366998] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:59.858 [2024-07-25 18:42:00.367206] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:14:59.858 [2024-07-25 18:42:00.367588] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:14:59.858 [2024-07-25 18:42:00.367623] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000012a00 00:14:59.858 [2024-07-25 18:42:00.367866] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.858 18:42:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:15:00.116 [2024-07-25 18:42:00.612231] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:15:00.116 [2024-07-25 18:42:00.612413] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:15:00.116 true 00:15:00.116 18:42:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:00.116 18:42:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # jq '.[].num_blocks' 00:15:00.374 [2024-07-25 18:42:00.848401] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.374 18:42:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # blkcnt=131072 00:15:00.374 18:42:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # raid_size_mb=64 00:15:00.374 18:42:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # '[' 0 -eq 0 ']' 00:15:00.374 18:42:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # expected_size=64 00:15:00.374 18:42:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 64 '!=' 64 ']' 00:15:00.374 18:42:00 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:15:00.632 [2024-07-25 18:42:01.032310] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:15:00.632 [2024-07-25 18:42:01.032519] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:15:00.632 [2024-07-25 18:42:01.032695] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:15:00.632 true 00:15:00.632 18:42:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # jq '.[].num_blocks' 00:15:00.632 18:42:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:00.890 [2024-07-25 18:42:01.284489] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.890 18:42:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # blkcnt=262144 00:15:00.890 18:42:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@391 -- # raid_size_mb=128 00:15:00.890 18:42:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@392 -- # '[' 0 -eq 0 ']' 00:15:00.890 18:42:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@393 -- # expected_size=128 00:15:00.890 18:42:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@397 -- # '[' 128 '!=' 128 ']' 00:15:00.890 18:42:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@402 -- # killprocess 119885 00:15:00.890 18:42:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 119885 ']' 00:15:00.890 18:42:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 119885 00:15:00.890 18:42:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:15:00.890 18:42:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:00.890 18:42:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 119885 00:15:00.890 18:42:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:00.890 18:42:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:00.890 18:42:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 119885' 00:15:00.890 killing process with pid 119885 00:15:00.890 18:42:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 119885 00:15:00.890 [2024-07-25 18:42:01.334150] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:15:00.890 18:42:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 119885 00:15:00.890 [2024-07-25 18:42:01.334331] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.890 [2024-07-25 18:42:01.334398] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:00.890 [2024-07-25 18:42:01.334407] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Raid, state offline 00:15:00.890 [2024-07-25 18:42:01.335057] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:02.264 ************************************ 00:15:02.264 END TEST raid0_resize_test 00:15:02.264 ************************************ 00:15:02.264 18:42:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@404 -- # return 0 00:15:02.264 00:15:02.264 real 0m3.829s 00:15:02.264 user 0m5.157s 00:15:02.264 sys 0m0.731s 00:15:02.264 18:42:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:02.264 18:42:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.264 18:42:02 bdev_raid -- bdev/bdev_raid.sh@943 -- # run_test raid1_resize_test raid_resize_test 1 00:15:02.264 18:42:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:02.264 18:42:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:02.264 18:42:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:02.264 ************************************ 00:15:02.264 START TEST raid1_resize_test 00:15:02.264 ************************************ 00:15:02.264 18:42:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:15:02.264 18:42:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # local raid_level=1 00:15:02.264 18:42:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:15:02.264 18:42:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:15:02.264 18:42:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:15:02.264 18:42:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:15:02.264 18:42:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:15:02.264 18:42:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:15:02.264 18:42:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@354 -- # local expected_size 00:15:02.264 18:42:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@357 -- # raid_pid=119971 00:15:02.264 18:42:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@358 -- # echo 'Process raid pid: 119971' 00:15:02.264 18:42:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:02.264 Process raid pid: 119971 00:15:02.264 18:42:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # waitforlisten 119971 /var/tmp/spdk-raid.sock 00:15:02.264 18:42:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 119971 ']' 00:15:02.264 18:42:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:02.264 18:42:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:02.264 18:42:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 
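Stripped of its checks, the raid0_resize_test that just finished boils down to the RPC sequence below; this is a condensed sketch taken from the trace (same socket path, bdev names and sizes), not the script itself:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Two 32 MiB null bdevs with 512-byte blocks, combined into a raid0 array.
  $RPC bdev_null_create Base_1 32 512
  $RPC bdev_null_create Base_2 32 512
  $RPC bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid

  # The array stays at 131072 blocks (64 MiB) while the members are unequal,
  # and doubles to 262144 blocks once both are 64 MiB, matching the
  # expected_size checks in the trace.
  $RPC bdev_null_resize Base_1 64
  $RPC bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # 131072
  $RPC bdev_null_resize Base_2 64
  $RPC bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # 262144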
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:02.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:02.264 18:42:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:02.264 18:42:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.264 [2024-07-25 18:42:02.670947] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:02.264 [2024-07-25 18:42:02.671419] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.522 [2024-07-25 18:42:02.857359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.522 [2024-07-25 18:42:03.073623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.779 [2024-07-25 18:42:03.266633] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.343 18:42:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:03.343 18:42:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:15:03.343 18:42:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:15:03.343 Base_1 00:15:03.343 18:42:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:15:03.600 Base_2 00:15:03.600 18:42:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # '[' 1 -eq 0 ']' 00:15:03.600 18:42:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@367 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r 1 -b 'Base_1 Base_2' -n Raid 00:15:03.857 [2024-07-25 18:42:04.320917] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:15:03.857 [2024-07-25 18:42:04.323438] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:15:03.857 [2024-07-25 18:42:04.323648] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:15:03.857 [2024-07-25 18:42:04.323735] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:03.857 [2024-07-25 18:42:04.323985] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:03.857 [2024-07-25 18:42:04.324444] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:15:03.857 [2024-07-25 18:42:04.324547] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000012a00 00:15:03.857 [2024-07-25 18:42:04.324863] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.857 18:42:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:15:04.115 [2024-07-25 18:42:04.505001] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:15:04.115 [2024-07-25 18:42:04.505165] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:15:04.115 true 00:15:04.115 18:42:04 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:04.115 18:42:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # jq '.[].num_blocks' 00:15:04.373 [2024-07-25 18:42:04.693151] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.373 18:42:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # blkcnt=65536 00:15:04.373 18:42:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # raid_size_mb=32 00:15:04.373 18:42:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # '[' 1 -eq 0 ']' 00:15:04.373 18:42:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@379 -- # expected_size=32 00:15:04.373 18:42:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 32 '!=' 32 ']' 00:15:04.373 18:42:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:15:04.373 [2024-07-25 18:42:04.873070] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:15:04.373 [2024-07-25 18:42:04.873243] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:15:04.373 [2024-07-25 18:42:04.873420] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:15:04.373 true 00:15:04.373 18:42:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:04.373 18:42:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # jq '.[].num_blocks' 00:15:04.631 [2024-07-25 18:42:05.137227] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.631 18:42:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # blkcnt=131072 00:15:04.631 18:42:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@391 -- # raid_size_mb=64 00:15:04.631 18:42:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@392 -- # '[' 1 -eq 0 ']' 00:15:04.631 18:42:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@395 -- # expected_size=64 00:15:04.631 18:42:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@397 -- # '[' 64 '!=' 64 ']' 00:15:04.631 18:42:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@402 -- # killprocess 119971 00:15:04.631 18:42:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 119971 ']' 00:15:04.631 18:42:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 119971 00:15:04.631 18:42:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:15:04.631 18:42:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:04.631 18:42:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 119971 00:15:04.631 18:42:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:04.631 18:42:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:04.631 18:42:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 119971' 00:15:04.631 killing process with pid 119971 00:15:04.631 18:42:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 119971 00:15:04.631 [2024-07-25 18:42:05.190119] 
bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:04.631 18:42:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 119971 00:15:04.631 [2024-07-25 18:42:05.190330] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:04.631 [2024-07-25 18:42:05.191003] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:04.631 [2024-07-25 18:42:05.191096] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Raid, state offline 00:15:04.631 [2024-07-25 18:42:05.191322] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:06.006 18:42:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@404 -- # return 0 00:15:06.006 00:15:06.006 real 0m3.795s 00:15:06.006 user 0m5.091s 00:15:06.006 sys 0m0.740s 00:15:06.006 18:42:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:06.006 18:42:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.006 ************************************ 00:15:06.006 END TEST raid1_resize_test 00:15:06.006 ************************************ 00:15:06.006 18:42:06 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:15:06.006 18:42:06 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:15:06.006 18:42:06 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:15:06.006 18:42:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:06.006 18:42:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:06.006 18:42:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:06.006 ************************************ 00:15:06.006 START TEST raid_state_function_test 00:15:06.006 ************************************ 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:06.006 
18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=120063 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 120063' 00:15:06.006 Process raid pid: 120063 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 120063 /var/tmp/spdk-raid.sock 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 120063 ']' 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:06.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:06.006 18:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.006 [2024-07-25 18:42:06.546759] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
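The raid1_resize_test that finished just above differs from the raid0 case only in the create call and the expected block counts; a minimal sketch under the same assumptions (same rpc.py socket, names and sizes as in its trace):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  $RPC bdev_null_create Base_1 32 512
  $RPC bdev_null_create Base_2 32 512
  $RPC bdev_raid_create -r 1 -b 'Base_1 Base_2' -n Raid    # mirror, no strip size

  # The mirror still reports 65536 blocks (32 MiB) while only Base_1 has grown,
  # and 131072 blocks once both members are 64 MiB, as the expected_size checks
  # in the trace confirm.
  $RPC bdev_null_resize Base_1 64
  $RPC bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # 65536
  $RPC bdev_null_resize Base_2 64
  $RPC bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # 131072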
00:15:06.006 [2024-07-25 18:42:06.547265] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.271 [2024-07-25 18:42:06.736273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.529 [2024-07-25 18:42:06.956600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.788 [2024-07-25 18:42:07.147372] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.046 18:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:07.046 18:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:15:07.046 18:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:07.304 [2024-07-25 18:42:07.677989] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:07.304 [2024-07-25 18:42:07.678315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:07.304 [2024-07-25 18:42:07.678454] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:07.304 [2024-07-25 18:42:07.678523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:07.304 18:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:07.304 18:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:07.304 18:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:07.304 18:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:07.304 18:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:07.304 18:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:07.304 18:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:07.304 18:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:07.304 18:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:07.304 18:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:07.304 18:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.304 18:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.304 18:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:07.304 "name": "Existed_Raid", 00:15:07.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.304 "strip_size_kb": 64, 00:15:07.304 "state": "configuring", 00:15:07.304 "raid_level": "raid0", 00:15:07.304 "superblock": false, 00:15:07.304 "num_base_bdevs": 2, 00:15:07.304 "num_base_bdevs_discovered": 0, 00:15:07.304 "num_base_bdevs_operational": 2, 00:15:07.304 
"base_bdevs_list": [ 00:15:07.304 { 00:15:07.304 "name": "BaseBdev1", 00:15:07.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.304 "is_configured": false, 00:15:07.304 "data_offset": 0, 00:15:07.304 "data_size": 0 00:15:07.304 }, 00:15:07.304 { 00:15:07.304 "name": "BaseBdev2", 00:15:07.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.304 "is_configured": false, 00:15:07.304 "data_offset": 0, 00:15:07.304 "data_size": 0 00:15:07.304 } 00:15:07.304 ] 00:15:07.304 }' 00:15:07.304 18:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:07.304 18:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.870 18:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:08.128 [2024-07-25 18:42:08.674463] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:08.128 [2024-07-25 18:42:08.674717] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:15:08.128 18:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:08.386 [2024-07-25 18:42:08.926570] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:08.386 [2024-07-25 18:42:08.926773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:08.386 [2024-07-25 18:42:08.926854] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:08.386 [2024-07-25 18:42:08.926911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:08.386 18:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:08.951 [2024-07-25 18:42:09.226754] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:08.951 BaseBdev1 00:15:08.951 18:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:08.951 18:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:08.951 18:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:08.951 18:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:08.951 18:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:08.951 18:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:08.951 18:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:08.951 18:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:09.210 [ 00:15:09.210 { 00:15:09.210 "name": "BaseBdev1", 00:15:09.210 "aliases": [ 00:15:09.210 "2e3706d6-27e3-4345-8610-b711cfeddca9" 00:15:09.210 ], 00:15:09.210 "product_name": "Malloc disk", 00:15:09.210 "block_size": 512, 
00:15:09.210 "num_blocks": 65536, 00:15:09.210 "uuid": "2e3706d6-27e3-4345-8610-b711cfeddca9", 00:15:09.210 "assigned_rate_limits": { 00:15:09.210 "rw_ios_per_sec": 0, 00:15:09.210 "rw_mbytes_per_sec": 0, 00:15:09.210 "r_mbytes_per_sec": 0, 00:15:09.210 "w_mbytes_per_sec": 0 00:15:09.210 }, 00:15:09.210 "claimed": true, 00:15:09.210 "claim_type": "exclusive_write", 00:15:09.210 "zoned": false, 00:15:09.210 "supported_io_types": { 00:15:09.210 "read": true, 00:15:09.210 "write": true, 00:15:09.210 "unmap": true, 00:15:09.210 "flush": true, 00:15:09.210 "reset": true, 00:15:09.210 "nvme_admin": false, 00:15:09.210 "nvme_io": false, 00:15:09.210 "nvme_io_md": false, 00:15:09.210 "write_zeroes": true, 00:15:09.210 "zcopy": true, 00:15:09.210 "get_zone_info": false, 00:15:09.210 "zone_management": false, 00:15:09.210 "zone_append": false, 00:15:09.210 "compare": false, 00:15:09.210 "compare_and_write": false, 00:15:09.210 "abort": true, 00:15:09.210 "seek_hole": false, 00:15:09.210 "seek_data": false, 00:15:09.210 "copy": true, 00:15:09.210 "nvme_iov_md": false 00:15:09.210 }, 00:15:09.210 "memory_domains": [ 00:15:09.210 { 00:15:09.210 "dma_device_id": "system", 00:15:09.210 "dma_device_type": 1 00:15:09.210 }, 00:15:09.210 { 00:15:09.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.210 "dma_device_type": 2 00:15:09.210 } 00:15:09.210 ], 00:15:09.210 "driver_specific": {} 00:15:09.210 } 00:15:09.210 ] 00:15:09.210 18:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:09.210 18:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:09.210 18:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:09.210 18:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:09.210 18:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:09.210 18:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:09.210 18:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:09.210 18:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:09.210 18:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:09.210 18:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:09.210 18:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:09.210 18:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.210 18:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.468 18:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:09.468 "name": "Existed_Raid", 00:15:09.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.468 "strip_size_kb": 64, 00:15:09.468 "state": "configuring", 00:15:09.468 "raid_level": "raid0", 00:15:09.468 "superblock": false, 00:15:09.468 "num_base_bdevs": 2, 00:15:09.468 "num_base_bdevs_discovered": 1, 00:15:09.468 "num_base_bdevs_operational": 2, 00:15:09.468 "base_bdevs_list": [ 00:15:09.468 { 00:15:09.468 "name": 
"BaseBdev1", 00:15:09.468 "uuid": "2e3706d6-27e3-4345-8610-b711cfeddca9", 00:15:09.468 "is_configured": true, 00:15:09.468 "data_offset": 0, 00:15:09.468 "data_size": 65536 00:15:09.468 }, 00:15:09.468 { 00:15:09.468 "name": "BaseBdev2", 00:15:09.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.468 "is_configured": false, 00:15:09.468 "data_offset": 0, 00:15:09.468 "data_size": 0 00:15:09.468 } 00:15:09.468 ] 00:15:09.468 }' 00:15:09.468 18:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:09.468 18:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.036 18:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:10.036 [2024-07-25 18:42:10.591143] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:10.036 [2024-07-25 18:42:10.591383] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:15:10.294 18:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:10.294 [2024-07-25 18:42:10.843216] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.294 [2024-07-25 18:42:10.845656] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:10.294 [2024-07-25 18:42:10.845876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:10.294 18:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:10.294 18:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:10.294 18:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:10.294 18:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:10.294 18:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:10.294 18:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:10.294 18:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:10.294 18:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:10.294 18:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:10.294 18:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:10.294 18:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:10.294 18:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:10.294 18:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.294 18:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.552 18:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:10.552 "name": "Existed_Raid", 
00:15:10.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.552 "strip_size_kb": 64, 00:15:10.552 "state": "configuring", 00:15:10.552 "raid_level": "raid0", 00:15:10.552 "superblock": false, 00:15:10.552 "num_base_bdevs": 2, 00:15:10.552 "num_base_bdevs_discovered": 1, 00:15:10.552 "num_base_bdevs_operational": 2, 00:15:10.552 "base_bdevs_list": [ 00:15:10.553 { 00:15:10.553 "name": "BaseBdev1", 00:15:10.553 "uuid": "2e3706d6-27e3-4345-8610-b711cfeddca9", 00:15:10.553 "is_configured": true, 00:15:10.553 "data_offset": 0, 00:15:10.553 "data_size": 65536 00:15:10.553 }, 00:15:10.553 { 00:15:10.553 "name": "BaseBdev2", 00:15:10.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.553 "is_configured": false, 00:15:10.553 "data_offset": 0, 00:15:10.553 "data_size": 0 00:15:10.553 } 00:15:10.553 ] 00:15:10.553 }' 00:15:10.553 18:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:10.553 18:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.120 18:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:11.378 [2024-07-25 18:42:11.815739] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:11.378 [2024-07-25 18:42:11.816083] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:15:11.378 [2024-07-25 18:42:11.816124] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:11.378 [2024-07-25 18:42:11.816366] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:15:11.378 [2024-07-25 18:42:11.816799] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:15:11.378 [2024-07-25 18:42:11.816911] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:15:11.378 [2024-07-25 18:42:11.817278] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.378 BaseBdev2 00:15:11.378 18:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:11.378 18:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:11.378 18:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:11.378 18:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:11.378 18:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:11.378 18:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:11.378 18:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:11.636 18:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:11.894 [ 00:15:11.894 { 00:15:11.894 "name": "BaseBdev2", 00:15:11.894 "aliases": [ 00:15:11.894 "fb1cc29b-3c59-4126-99ca-04f76caac631" 00:15:11.894 ], 00:15:11.894 "product_name": "Malloc disk", 00:15:11.894 "block_size": 512, 00:15:11.894 "num_blocks": 65536, 00:15:11.894 "uuid": "fb1cc29b-3c59-4126-99ca-04f76caac631", 
00:15:11.894 "assigned_rate_limits": { 00:15:11.894 "rw_ios_per_sec": 0, 00:15:11.894 "rw_mbytes_per_sec": 0, 00:15:11.894 "r_mbytes_per_sec": 0, 00:15:11.894 "w_mbytes_per_sec": 0 00:15:11.894 }, 00:15:11.894 "claimed": true, 00:15:11.894 "claim_type": "exclusive_write", 00:15:11.894 "zoned": false, 00:15:11.894 "supported_io_types": { 00:15:11.894 "read": true, 00:15:11.894 "write": true, 00:15:11.894 "unmap": true, 00:15:11.894 "flush": true, 00:15:11.894 "reset": true, 00:15:11.894 "nvme_admin": false, 00:15:11.894 "nvme_io": false, 00:15:11.894 "nvme_io_md": false, 00:15:11.894 "write_zeroes": true, 00:15:11.894 "zcopy": true, 00:15:11.894 "get_zone_info": false, 00:15:11.894 "zone_management": false, 00:15:11.894 "zone_append": false, 00:15:11.894 "compare": false, 00:15:11.894 "compare_and_write": false, 00:15:11.894 "abort": true, 00:15:11.894 "seek_hole": false, 00:15:11.894 "seek_data": false, 00:15:11.894 "copy": true, 00:15:11.894 "nvme_iov_md": false 00:15:11.894 }, 00:15:11.894 "memory_domains": [ 00:15:11.894 { 00:15:11.894 "dma_device_id": "system", 00:15:11.894 "dma_device_type": 1 00:15:11.894 }, 00:15:11.894 { 00:15:11.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.894 "dma_device_type": 2 00:15:11.894 } 00:15:11.894 ], 00:15:11.894 "driver_specific": {} 00:15:11.894 } 00:15:11.894 ] 00:15:11.894 18:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:11.894 18:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:11.894 18:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:11.894 18:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:11.894 18:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:11.894 18:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:11.894 18:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:11.894 18:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:11.894 18:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:11.894 18:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:11.894 18:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:11.894 18:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:11.894 18:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:11.894 18:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.894 18:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.153 18:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:12.153 "name": "Existed_Raid", 00:15:12.153 "uuid": "5bd86c1a-2463-416d-a242-4e9bad967531", 00:15:12.153 "strip_size_kb": 64, 00:15:12.153 "state": "online", 00:15:12.153 "raid_level": "raid0", 00:15:12.153 "superblock": false, 00:15:12.153 "num_base_bdevs": 2, 00:15:12.153 "num_base_bdevs_discovered": 2, 00:15:12.153 
"num_base_bdevs_operational": 2, 00:15:12.153 "base_bdevs_list": [ 00:15:12.153 { 00:15:12.153 "name": "BaseBdev1", 00:15:12.153 "uuid": "2e3706d6-27e3-4345-8610-b711cfeddca9", 00:15:12.153 "is_configured": true, 00:15:12.153 "data_offset": 0, 00:15:12.153 "data_size": 65536 00:15:12.153 }, 00:15:12.153 { 00:15:12.153 "name": "BaseBdev2", 00:15:12.153 "uuid": "fb1cc29b-3c59-4126-99ca-04f76caac631", 00:15:12.153 "is_configured": true, 00:15:12.153 "data_offset": 0, 00:15:12.153 "data_size": 65536 00:15:12.153 } 00:15:12.153 ] 00:15:12.153 }' 00:15:12.153 18:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:12.153 18:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.721 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:12.721 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:12.721 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:12.721 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:12.721 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:12.721 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:12.721 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:12.721 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:12.987 [2024-07-25 18:42:13.392368] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.987 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:12.987 "name": "Existed_Raid", 00:15:12.987 "aliases": [ 00:15:12.987 "5bd86c1a-2463-416d-a242-4e9bad967531" 00:15:12.987 ], 00:15:12.987 "product_name": "Raid Volume", 00:15:12.987 "block_size": 512, 00:15:12.987 "num_blocks": 131072, 00:15:12.987 "uuid": "5bd86c1a-2463-416d-a242-4e9bad967531", 00:15:12.987 "assigned_rate_limits": { 00:15:12.987 "rw_ios_per_sec": 0, 00:15:12.987 "rw_mbytes_per_sec": 0, 00:15:12.987 "r_mbytes_per_sec": 0, 00:15:12.987 "w_mbytes_per_sec": 0 00:15:12.987 }, 00:15:12.987 "claimed": false, 00:15:12.987 "zoned": false, 00:15:12.987 "supported_io_types": { 00:15:12.987 "read": true, 00:15:12.987 "write": true, 00:15:12.987 "unmap": true, 00:15:12.987 "flush": true, 00:15:12.987 "reset": true, 00:15:12.987 "nvme_admin": false, 00:15:12.987 "nvme_io": false, 00:15:12.987 "nvme_io_md": false, 00:15:12.987 "write_zeroes": true, 00:15:12.987 "zcopy": false, 00:15:12.987 "get_zone_info": false, 00:15:12.987 "zone_management": false, 00:15:12.987 "zone_append": false, 00:15:12.987 "compare": false, 00:15:12.987 "compare_and_write": false, 00:15:12.987 "abort": false, 00:15:12.987 "seek_hole": false, 00:15:12.987 "seek_data": false, 00:15:12.987 "copy": false, 00:15:12.987 "nvme_iov_md": false 00:15:12.987 }, 00:15:12.987 "memory_domains": [ 00:15:12.987 { 00:15:12.987 "dma_device_id": "system", 00:15:12.987 "dma_device_type": 1 00:15:12.988 }, 00:15:12.988 { 00:15:12.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.988 "dma_device_type": 2 00:15:12.988 }, 00:15:12.988 { 00:15:12.988 "dma_device_id": "system", 00:15:12.988 "dma_device_type": 1 00:15:12.988 }, 
00:15:12.988 { 00:15:12.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.988 "dma_device_type": 2 00:15:12.988 } 00:15:12.988 ], 00:15:12.988 "driver_specific": { 00:15:12.988 "raid": { 00:15:12.988 "uuid": "5bd86c1a-2463-416d-a242-4e9bad967531", 00:15:12.988 "strip_size_kb": 64, 00:15:12.988 "state": "online", 00:15:12.988 "raid_level": "raid0", 00:15:12.988 "superblock": false, 00:15:12.988 "num_base_bdevs": 2, 00:15:12.988 "num_base_bdevs_discovered": 2, 00:15:12.988 "num_base_bdevs_operational": 2, 00:15:12.988 "base_bdevs_list": [ 00:15:12.988 { 00:15:12.988 "name": "BaseBdev1", 00:15:12.988 "uuid": "2e3706d6-27e3-4345-8610-b711cfeddca9", 00:15:12.988 "is_configured": true, 00:15:12.988 "data_offset": 0, 00:15:12.988 "data_size": 65536 00:15:12.988 }, 00:15:12.988 { 00:15:12.988 "name": "BaseBdev2", 00:15:12.988 "uuid": "fb1cc29b-3c59-4126-99ca-04f76caac631", 00:15:12.988 "is_configured": true, 00:15:12.989 "data_offset": 0, 00:15:12.989 "data_size": 65536 00:15:12.989 } 00:15:12.989 ] 00:15:12.989 } 00:15:12.989 } 00:15:12.989 }' 00:15:12.989 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:12.989 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:12.989 BaseBdev2' 00:15:12.989 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:12.989 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:12.989 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:13.256 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:13.256 "name": "BaseBdev1", 00:15:13.256 "aliases": [ 00:15:13.256 "2e3706d6-27e3-4345-8610-b711cfeddca9" 00:15:13.256 ], 00:15:13.256 "product_name": "Malloc disk", 00:15:13.256 "block_size": 512, 00:15:13.256 "num_blocks": 65536, 00:15:13.256 "uuid": "2e3706d6-27e3-4345-8610-b711cfeddca9", 00:15:13.256 "assigned_rate_limits": { 00:15:13.256 "rw_ios_per_sec": 0, 00:15:13.256 "rw_mbytes_per_sec": 0, 00:15:13.256 "r_mbytes_per_sec": 0, 00:15:13.256 "w_mbytes_per_sec": 0 00:15:13.256 }, 00:15:13.256 "claimed": true, 00:15:13.256 "claim_type": "exclusive_write", 00:15:13.256 "zoned": false, 00:15:13.256 "supported_io_types": { 00:15:13.256 "read": true, 00:15:13.256 "write": true, 00:15:13.256 "unmap": true, 00:15:13.256 "flush": true, 00:15:13.256 "reset": true, 00:15:13.256 "nvme_admin": false, 00:15:13.256 "nvme_io": false, 00:15:13.256 "nvme_io_md": false, 00:15:13.256 "write_zeroes": true, 00:15:13.256 "zcopy": true, 00:15:13.256 "get_zone_info": false, 00:15:13.256 "zone_management": false, 00:15:13.256 "zone_append": false, 00:15:13.256 "compare": false, 00:15:13.256 "compare_and_write": false, 00:15:13.256 "abort": true, 00:15:13.256 "seek_hole": false, 00:15:13.256 "seek_data": false, 00:15:13.256 "copy": true, 00:15:13.256 "nvme_iov_md": false 00:15:13.256 }, 00:15:13.256 "memory_domains": [ 00:15:13.256 { 00:15:13.256 "dma_device_id": "system", 00:15:13.256 "dma_device_type": 1 00:15:13.256 }, 00:15:13.256 { 00:15:13.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.256 "dma_device_type": 2 00:15:13.256 } 00:15:13.256 ], 00:15:13.256 "driver_specific": {} 00:15:13.256 }' 00:15:13.256 18:42:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:13.256 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:13.256 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:13.256 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:13.515 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:13.515 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:13.515 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:13.515 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:13.515 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:13.516 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:13.516 18:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:13.516 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:13.516 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:13.516 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:13.516 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:13.775 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:13.775 "name": "BaseBdev2", 00:15:13.775 "aliases": [ 00:15:13.775 "fb1cc29b-3c59-4126-99ca-04f76caac631" 00:15:13.775 ], 00:15:13.775 "product_name": "Malloc disk", 00:15:13.775 "block_size": 512, 00:15:13.775 "num_blocks": 65536, 00:15:13.775 "uuid": "fb1cc29b-3c59-4126-99ca-04f76caac631", 00:15:13.775 "assigned_rate_limits": { 00:15:13.775 "rw_ios_per_sec": 0, 00:15:13.775 "rw_mbytes_per_sec": 0, 00:15:13.775 "r_mbytes_per_sec": 0, 00:15:13.775 "w_mbytes_per_sec": 0 00:15:13.775 }, 00:15:13.775 "claimed": true, 00:15:13.775 "claim_type": "exclusive_write", 00:15:13.775 "zoned": false, 00:15:13.775 "supported_io_types": { 00:15:13.775 "read": true, 00:15:13.775 "write": true, 00:15:13.775 "unmap": true, 00:15:13.775 "flush": true, 00:15:13.775 "reset": true, 00:15:13.775 "nvme_admin": false, 00:15:13.775 "nvme_io": false, 00:15:13.775 "nvme_io_md": false, 00:15:13.775 "write_zeroes": true, 00:15:13.775 "zcopy": true, 00:15:13.775 "get_zone_info": false, 00:15:13.775 "zone_management": false, 00:15:13.775 "zone_append": false, 00:15:13.775 "compare": false, 00:15:13.775 "compare_and_write": false, 00:15:13.775 "abort": true, 00:15:13.775 "seek_hole": false, 00:15:13.775 "seek_data": false, 00:15:13.775 "copy": true, 00:15:13.775 "nvme_iov_md": false 00:15:13.775 }, 00:15:13.775 "memory_domains": [ 00:15:13.775 { 00:15:13.775 "dma_device_id": "system", 00:15:13.775 "dma_device_type": 1 00:15:13.775 }, 00:15:13.775 { 00:15:13.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.775 "dma_device_type": 2 00:15:13.775 } 00:15:13.775 ], 00:15:13.775 "driver_specific": {} 00:15:13.775 }' 00:15:13.775 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:13.775 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:14.035 18:42:14 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:14.035 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:14.035 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:14.035 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:14.035 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:14.035 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:14.035 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:14.035 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:14.035 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:14.035 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:14.035 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:14.294 [2024-07-25 18:42:14.844478] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:14.294 [2024-07-25 18:42:14.844790] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:14.294 [2024-07-25 18:42:14.844989] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.554 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:14.554 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:15:14.554 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:14.554 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:14.554 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:14.554 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:14.554 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:14.554 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:14.554 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:14.554 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:14.554 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:14.554 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:14.554 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:14.554 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:14.554 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:14.554 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.554 18:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.814 18:42:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:14.814 "name": "Existed_Raid", 00:15:14.814 "uuid": "5bd86c1a-2463-416d-a242-4e9bad967531", 00:15:14.814 "strip_size_kb": 64, 00:15:14.814 "state": "offline", 00:15:14.814 "raid_level": "raid0", 00:15:14.814 "superblock": false, 00:15:14.814 "num_base_bdevs": 2, 00:15:14.814 "num_base_bdevs_discovered": 1, 00:15:14.814 "num_base_bdevs_operational": 1, 00:15:14.814 "base_bdevs_list": [ 00:15:14.814 { 00:15:14.814 "name": null, 00:15:14.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.814 "is_configured": false, 00:15:14.814 "data_offset": 0, 00:15:14.814 "data_size": 65536 00:15:14.814 }, 00:15:14.814 { 00:15:14.814 "name": "BaseBdev2", 00:15:14.814 "uuid": "fb1cc29b-3c59-4126-99ca-04f76caac631", 00:15:14.814 "is_configured": true, 00:15:14.814 "data_offset": 0, 00:15:14.814 "data_size": 65536 00:15:14.814 } 00:15:14.814 ] 00:15:14.814 }' 00:15:14.814 18:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:14.814 18:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.384 18:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:15.384 18:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:15.384 18:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.384 18:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:15.643 18:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:15.643 18:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:15.644 18:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:15.903 [2024-07-25 18:42:16.361380] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:15.903 [2024-07-25 18:42:16.361719] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:15:15.903 18:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:15.903 18:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:15.903 18:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.903 18:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:16.162 18:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:16.162 18:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:16.162 18:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:16.162 18:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 120063 00:15:16.162 18:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 120063 ']' 00:15:16.162 18:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 120063 00:15:16.162 18:42:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@955 -- # uname 00:15:16.162 18:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:16.421 18:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 120063 00:15:16.421 18:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:16.421 18:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:16.421 18:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 120063' 00:15:16.421 killing process with pid 120063 00:15:16.421 18:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 120063 00:15:16.422 [2024-07-25 18:42:16.755366] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:16.422 18:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 120063 00:15:16.422 [2024-07-25 18:42:16.755654] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:17.799 ************************************ 00:15:17.799 END TEST raid_state_function_test 00:15:17.799 ************************************ 00:15:17.799 18:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:17.799 00:15:17.799 real 0m11.516s 00:15:17.799 user 0m19.479s 00:15:17.799 sys 0m1.941s 00:15:17.799 18:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:17.799 18:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.799 18:42:18 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:15:17.799 18:42:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:17.799 18:42:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:17.799 18:42:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:17.799 ************************************ 00:15:17.799 START TEST raid_state_function_test_sb 00:15:17.799 ************************************ 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:17.799 
18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=120439 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 120439' 00:15:17.799 Process raid pid: 120439 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 120439 /var/tmp/spdk-raid.sock 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 120439 ']' 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:17.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:17.799 18:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.799 [2024-07-25 18:42:18.141613] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
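The raid_state_function_test_sb variant below exercises the same state machine with superblock=true, so the create call carries -s (visible at bdev_raid.sh@250). A minimal sketch of that call and the state query it is checked with, both copied from this run:
  # raid0 create with the on-bdev superblock enabled
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # the JSON dumps that follow report "superblock": true and a non-zero data_offset (2048) for configured bases
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all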
00:15:17.799 [2024-07-25 18:42:18.142160] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.799 [2024-07-25 18:42:18.329079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.058 [2024-07-25 18:42:18.559685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.316 [2024-07-25 18:42:18.754356] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.575 18:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:18.575 18:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:18.575 18:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:18.834 [2024-07-25 18:42:19.301623] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:18.834 [2024-07-25 18:42:19.301977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:18.834 [2024-07-25 18:42:19.302070] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:18.834 [2024-07-25 18:42:19.302181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:18.834 18:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:18.834 18:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:18.834 18:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:18.834 18:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:18.834 18:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:18.834 18:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:18.834 18:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:18.834 18:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:18.834 18:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:18.834 18:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:18.834 18:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.834 18:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.143 18:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:19.143 "name": "Existed_Raid", 00:15:19.143 "uuid": "be8250a6-2477-4845-86f8-a5c3d483737d", 00:15:19.143 "strip_size_kb": 64, 00:15:19.143 "state": "configuring", 00:15:19.143 "raid_level": "raid0", 00:15:19.143 "superblock": true, 00:15:19.143 "num_base_bdevs": 2, 00:15:19.143 "num_base_bdevs_discovered": 0, 00:15:19.143 
"num_base_bdevs_operational": 2, 00:15:19.143 "base_bdevs_list": [ 00:15:19.143 { 00:15:19.143 "name": "BaseBdev1", 00:15:19.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.143 "is_configured": false, 00:15:19.143 "data_offset": 0, 00:15:19.143 "data_size": 0 00:15:19.143 }, 00:15:19.143 { 00:15:19.143 "name": "BaseBdev2", 00:15:19.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.143 "is_configured": false, 00:15:19.143 "data_offset": 0, 00:15:19.143 "data_size": 0 00:15:19.143 } 00:15:19.143 ] 00:15:19.143 }' 00:15:19.143 18:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:19.143 18:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.733 18:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:19.992 [2024-07-25 18:42:20.401668] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:19.992 [2024-07-25 18:42:20.401989] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:15:19.992 18:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:20.251 [2024-07-25 18:42:20.657767] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:20.251 [2024-07-25 18:42:20.658115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:20.251 [2024-07-25 18:42:20.658199] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:20.251 [2024-07-25 18:42:20.658259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:20.251 18:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:20.510 [2024-07-25 18:42:20.890307] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:20.510 BaseBdev1 00:15:20.510 18:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:20.510 18:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:20.510 18:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:20.510 18:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:20.510 18:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:20.510 18:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:20.510 18:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:20.769 18:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:21.028 [ 00:15:21.028 { 00:15:21.028 "name": "BaseBdev1", 00:15:21.028 "aliases": [ 00:15:21.028 "3d16decd-572a-4251-8558-3c0f84f9bd77" 
00:15:21.028 ], 00:15:21.028 "product_name": "Malloc disk", 00:15:21.028 "block_size": 512, 00:15:21.028 "num_blocks": 65536, 00:15:21.028 "uuid": "3d16decd-572a-4251-8558-3c0f84f9bd77", 00:15:21.028 "assigned_rate_limits": { 00:15:21.028 "rw_ios_per_sec": 0, 00:15:21.028 "rw_mbytes_per_sec": 0, 00:15:21.028 "r_mbytes_per_sec": 0, 00:15:21.028 "w_mbytes_per_sec": 0 00:15:21.028 }, 00:15:21.028 "claimed": true, 00:15:21.028 "claim_type": "exclusive_write", 00:15:21.028 "zoned": false, 00:15:21.028 "supported_io_types": { 00:15:21.028 "read": true, 00:15:21.028 "write": true, 00:15:21.028 "unmap": true, 00:15:21.028 "flush": true, 00:15:21.028 "reset": true, 00:15:21.028 "nvme_admin": false, 00:15:21.028 "nvme_io": false, 00:15:21.028 "nvme_io_md": false, 00:15:21.028 "write_zeroes": true, 00:15:21.028 "zcopy": true, 00:15:21.028 "get_zone_info": false, 00:15:21.028 "zone_management": false, 00:15:21.028 "zone_append": false, 00:15:21.028 "compare": false, 00:15:21.028 "compare_and_write": false, 00:15:21.028 "abort": true, 00:15:21.028 "seek_hole": false, 00:15:21.028 "seek_data": false, 00:15:21.028 "copy": true, 00:15:21.028 "nvme_iov_md": false 00:15:21.028 }, 00:15:21.028 "memory_domains": [ 00:15:21.028 { 00:15:21.028 "dma_device_id": "system", 00:15:21.028 "dma_device_type": 1 00:15:21.028 }, 00:15:21.028 { 00:15:21.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.028 "dma_device_type": 2 00:15:21.028 } 00:15:21.028 ], 00:15:21.028 "driver_specific": {} 00:15:21.028 } 00:15:21.028 ] 00:15:21.028 18:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:21.028 18:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:21.028 18:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:21.028 18:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:21.028 18:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:21.028 18:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:21.028 18:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:21.028 18:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:21.029 18:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:21.029 18:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:21.029 18:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:21.029 18:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.029 18:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.288 18:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:21.288 "name": "Existed_Raid", 00:15:21.288 "uuid": "e1674dc1-ddef-42e1-9bfa-9e3aca4ac483", 00:15:21.288 "strip_size_kb": 64, 00:15:21.288 "state": "configuring", 00:15:21.288 "raid_level": "raid0", 00:15:21.288 "superblock": true, 00:15:21.288 "num_base_bdevs": 2, 00:15:21.288 
"num_base_bdevs_discovered": 1, 00:15:21.288 "num_base_bdevs_operational": 2, 00:15:21.288 "base_bdevs_list": [ 00:15:21.288 { 00:15:21.288 "name": "BaseBdev1", 00:15:21.288 "uuid": "3d16decd-572a-4251-8558-3c0f84f9bd77", 00:15:21.288 "is_configured": true, 00:15:21.288 "data_offset": 2048, 00:15:21.288 "data_size": 63488 00:15:21.288 }, 00:15:21.288 { 00:15:21.288 "name": "BaseBdev2", 00:15:21.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.288 "is_configured": false, 00:15:21.288 "data_offset": 0, 00:15:21.288 "data_size": 0 00:15:21.288 } 00:15:21.288 ] 00:15:21.288 }' 00:15:21.288 18:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:21.288 18:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.857 18:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:22.116 [2024-07-25 18:42:22.446672] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:22.116 [2024-07-25 18:42:22.447015] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:15:22.116 18:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:22.374 [2024-07-25 18:42:22.710772] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.374 [2024-07-25 18:42:22.713305] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.374 [2024-07-25 18:42:22.713501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.374 18:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:22.374 18:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:22.374 18:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:22.374 18:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:22.374 18:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:22.374 18:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:22.374 18:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:22.375 18:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:22.375 18:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:22.375 18:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:22.375 18:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:22.375 18:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:22.375 18:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.375 18:42:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.633 18:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:22.633 "name": "Existed_Raid", 00:15:22.633 "uuid": "ea4a0a14-ef02-44d4-bf28-1bb17eba1b16", 00:15:22.633 "strip_size_kb": 64, 00:15:22.633 "state": "configuring", 00:15:22.633 "raid_level": "raid0", 00:15:22.633 "superblock": true, 00:15:22.633 "num_base_bdevs": 2, 00:15:22.633 "num_base_bdevs_discovered": 1, 00:15:22.633 "num_base_bdevs_operational": 2, 00:15:22.633 "base_bdevs_list": [ 00:15:22.633 { 00:15:22.633 "name": "BaseBdev1", 00:15:22.633 "uuid": "3d16decd-572a-4251-8558-3c0f84f9bd77", 00:15:22.633 "is_configured": true, 00:15:22.633 "data_offset": 2048, 00:15:22.633 "data_size": 63488 00:15:22.633 }, 00:15:22.633 { 00:15:22.633 "name": "BaseBdev2", 00:15:22.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.633 "is_configured": false, 00:15:22.633 "data_offset": 0, 00:15:22.633 "data_size": 0 00:15:22.633 } 00:15:22.633 ] 00:15:22.633 }' 00:15:22.633 18:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:22.633 18:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.198 18:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:23.453 [2024-07-25 18:42:23.856917] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.453 [2024-07-25 18:42:23.857503] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:15:23.453 [2024-07-25 18:42:23.857626] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:23.453 [2024-07-25 18:42:23.857824] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:15:23.453 [2024-07-25 18:42:23.858181] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:15:23.453 [2024-07-25 18:42:23.858221] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:15:23.453 [2024-07-25 18:42:23.858451] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.453 BaseBdev2 00:15:23.453 18:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:23.453 18:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:23.453 18:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:23.453 18:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:23.453 18:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:23.453 18:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:23.453 18:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:23.710 18:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:23.967 [ 00:15:23.967 { 00:15:23.967 "name": "BaseBdev2", 00:15:23.967 
"aliases": [ 00:15:23.967 "59e09621-af3b-4336-9d0e-4843bbb9666a" 00:15:23.967 ], 00:15:23.967 "product_name": "Malloc disk", 00:15:23.967 "block_size": 512, 00:15:23.967 "num_blocks": 65536, 00:15:23.967 "uuid": "59e09621-af3b-4336-9d0e-4843bbb9666a", 00:15:23.967 "assigned_rate_limits": { 00:15:23.967 "rw_ios_per_sec": 0, 00:15:23.967 "rw_mbytes_per_sec": 0, 00:15:23.967 "r_mbytes_per_sec": 0, 00:15:23.967 "w_mbytes_per_sec": 0 00:15:23.967 }, 00:15:23.967 "claimed": true, 00:15:23.967 "claim_type": "exclusive_write", 00:15:23.967 "zoned": false, 00:15:23.967 "supported_io_types": { 00:15:23.967 "read": true, 00:15:23.967 "write": true, 00:15:23.967 "unmap": true, 00:15:23.967 "flush": true, 00:15:23.967 "reset": true, 00:15:23.967 "nvme_admin": false, 00:15:23.967 "nvme_io": false, 00:15:23.967 "nvme_io_md": false, 00:15:23.967 "write_zeroes": true, 00:15:23.967 "zcopy": true, 00:15:23.967 "get_zone_info": false, 00:15:23.967 "zone_management": false, 00:15:23.967 "zone_append": false, 00:15:23.967 "compare": false, 00:15:23.967 "compare_and_write": false, 00:15:23.967 "abort": true, 00:15:23.967 "seek_hole": false, 00:15:23.967 "seek_data": false, 00:15:23.967 "copy": true, 00:15:23.967 "nvme_iov_md": false 00:15:23.967 }, 00:15:23.967 "memory_domains": [ 00:15:23.967 { 00:15:23.967 "dma_device_id": "system", 00:15:23.967 "dma_device_type": 1 00:15:23.967 }, 00:15:23.967 { 00:15:23.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.967 "dma_device_type": 2 00:15:23.967 } 00:15:23.967 ], 00:15:23.967 "driver_specific": {} 00:15:23.967 } 00:15:23.967 ] 00:15:23.967 18:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:23.968 18:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:23.968 18:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:23.968 18:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:23.968 18:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:23.968 18:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:23.968 18:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:23.968 18:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:23.968 18:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:23.968 18:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:23.968 18:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:23.968 18:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:23.968 18:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:23.968 18:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.968 18:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.968 18:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:23.968 "name": "Existed_Raid", 
00:15:23.968 "uuid": "ea4a0a14-ef02-44d4-bf28-1bb17eba1b16", 00:15:23.968 "strip_size_kb": 64, 00:15:23.968 "state": "online", 00:15:23.968 "raid_level": "raid0", 00:15:23.968 "superblock": true, 00:15:23.968 "num_base_bdevs": 2, 00:15:23.968 "num_base_bdevs_discovered": 2, 00:15:23.968 "num_base_bdevs_operational": 2, 00:15:23.968 "base_bdevs_list": [ 00:15:23.968 { 00:15:23.968 "name": "BaseBdev1", 00:15:23.968 "uuid": "3d16decd-572a-4251-8558-3c0f84f9bd77", 00:15:23.968 "is_configured": true, 00:15:23.968 "data_offset": 2048, 00:15:23.968 "data_size": 63488 00:15:23.968 }, 00:15:23.968 { 00:15:23.968 "name": "BaseBdev2", 00:15:23.968 "uuid": "59e09621-af3b-4336-9d0e-4843bbb9666a", 00:15:23.968 "is_configured": true, 00:15:23.968 "data_offset": 2048, 00:15:23.968 "data_size": 63488 00:15:23.968 } 00:15:23.968 ] 00:15:23.968 }' 00:15:23.968 18:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:23.968 18:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.532 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:24.532 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:24.532 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:24.532 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:24.532 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:24.532 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:24.532 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:24.532 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:24.790 [2024-07-25 18:42:25.261448] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:24.790 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:24.790 "name": "Existed_Raid", 00:15:24.790 "aliases": [ 00:15:24.790 "ea4a0a14-ef02-44d4-bf28-1bb17eba1b16" 00:15:24.790 ], 00:15:24.790 "product_name": "Raid Volume", 00:15:24.790 "block_size": 512, 00:15:24.790 "num_blocks": 126976, 00:15:24.790 "uuid": "ea4a0a14-ef02-44d4-bf28-1bb17eba1b16", 00:15:24.790 "assigned_rate_limits": { 00:15:24.790 "rw_ios_per_sec": 0, 00:15:24.790 "rw_mbytes_per_sec": 0, 00:15:24.790 "r_mbytes_per_sec": 0, 00:15:24.790 "w_mbytes_per_sec": 0 00:15:24.790 }, 00:15:24.790 "claimed": false, 00:15:24.790 "zoned": false, 00:15:24.790 "supported_io_types": { 00:15:24.790 "read": true, 00:15:24.790 "write": true, 00:15:24.790 "unmap": true, 00:15:24.790 "flush": true, 00:15:24.790 "reset": true, 00:15:24.790 "nvme_admin": false, 00:15:24.790 "nvme_io": false, 00:15:24.790 "nvme_io_md": false, 00:15:24.790 "write_zeroes": true, 00:15:24.790 "zcopy": false, 00:15:24.790 "get_zone_info": false, 00:15:24.790 "zone_management": false, 00:15:24.790 "zone_append": false, 00:15:24.790 "compare": false, 00:15:24.790 "compare_and_write": false, 00:15:24.790 "abort": false, 00:15:24.790 "seek_hole": false, 00:15:24.790 "seek_data": false, 00:15:24.790 "copy": false, 00:15:24.790 "nvme_iov_md": false 00:15:24.790 }, 00:15:24.790 "memory_domains": [ 
00:15:24.790 { 00:15:24.790 "dma_device_id": "system", 00:15:24.790 "dma_device_type": 1 00:15:24.790 }, 00:15:24.790 { 00:15:24.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.790 "dma_device_type": 2 00:15:24.790 }, 00:15:24.790 { 00:15:24.790 "dma_device_id": "system", 00:15:24.790 "dma_device_type": 1 00:15:24.790 }, 00:15:24.790 { 00:15:24.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.790 "dma_device_type": 2 00:15:24.790 } 00:15:24.790 ], 00:15:24.790 "driver_specific": { 00:15:24.790 "raid": { 00:15:24.790 "uuid": "ea4a0a14-ef02-44d4-bf28-1bb17eba1b16", 00:15:24.790 "strip_size_kb": 64, 00:15:24.791 "state": "online", 00:15:24.791 "raid_level": "raid0", 00:15:24.791 "superblock": true, 00:15:24.791 "num_base_bdevs": 2, 00:15:24.791 "num_base_bdevs_discovered": 2, 00:15:24.791 "num_base_bdevs_operational": 2, 00:15:24.791 "base_bdevs_list": [ 00:15:24.791 { 00:15:24.791 "name": "BaseBdev1", 00:15:24.791 "uuid": "3d16decd-572a-4251-8558-3c0f84f9bd77", 00:15:24.791 "is_configured": true, 00:15:24.791 "data_offset": 2048, 00:15:24.791 "data_size": 63488 00:15:24.791 }, 00:15:24.791 { 00:15:24.791 "name": "BaseBdev2", 00:15:24.791 "uuid": "59e09621-af3b-4336-9d0e-4843bbb9666a", 00:15:24.791 "is_configured": true, 00:15:24.791 "data_offset": 2048, 00:15:24.791 "data_size": 63488 00:15:24.791 } 00:15:24.791 ] 00:15:24.791 } 00:15:24.791 } 00:15:24.791 }' 00:15:24.791 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:24.791 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:24.791 BaseBdev2' 00:15:24.791 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:24.791 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:24.791 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:25.049 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:25.049 "name": "BaseBdev1", 00:15:25.049 "aliases": [ 00:15:25.049 "3d16decd-572a-4251-8558-3c0f84f9bd77" 00:15:25.049 ], 00:15:25.049 "product_name": "Malloc disk", 00:15:25.049 "block_size": 512, 00:15:25.049 "num_blocks": 65536, 00:15:25.049 "uuid": "3d16decd-572a-4251-8558-3c0f84f9bd77", 00:15:25.049 "assigned_rate_limits": { 00:15:25.049 "rw_ios_per_sec": 0, 00:15:25.049 "rw_mbytes_per_sec": 0, 00:15:25.049 "r_mbytes_per_sec": 0, 00:15:25.049 "w_mbytes_per_sec": 0 00:15:25.049 }, 00:15:25.049 "claimed": true, 00:15:25.049 "claim_type": "exclusive_write", 00:15:25.049 "zoned": false, 00:15:25.049 "supported_io_types": { 00:15:25.049 "read": true, 00:15:25.049 "write": true, 00:15:25.049 "unmap": true, 00:15:25.049 "flush": true, 00:15:25.049 "reset": true, 00:15:25.049 "nvme_admin": false, 00:15:25.049 "nvme_io": false, 00:15:25.049 "nvme_io_md": false, 00:15:25.049 "write_zeroes": true, 00:15:25.049 "zcopy": true, 00:15:25.049 "get_zone_info": false, 00:15:25.049 "zone_management": false, 00:15:25.049 "zone_append": false, 00:15:25.049 "compare": false, 00:15:25.049 "compare_and_write": false, 00:15:25.049 "abort": true, 00:15:25.049 "seek_hole": false, 00:15:25.049 "seek_data": false, 00:15:25.049 "copy": true, 00:15:25.049 "nvme_iov_md": false 00:15:25.049 }, 00:15:25.049 "memory_domains": [ 
00:15:25.049 { 00:15:25.049 "dma_device_id": "system", 00:15:25.049 "dma_device_type": 1 00:15:25.049 }, 00:15:25.049 { 00:15:25.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.049 "dma_device_type": 2 00:15:25.049 } 00:15:25.049 ], 00:15:25.049 "driver_specific": {} 00:15:25.049 }' 00:15:25.049 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:25.049 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:25.049 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:25.049 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:25.308 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:25.308 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:25.308 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:25.308 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:25.308 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:25.308 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:25.308 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:25.308 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:25.308 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:25.308 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:25.308 18:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:25.875 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:25.875 "name": "BaseBdev2", 00:15:25.875 "aliases": [ 00:15:25.875 "59e09621-af3b-4336-9d0e-4843bbb9666a" 00:15:25.875 ], 00:15:25.875 "product_name": "Malloc disk", 00:15:25.875 "block_size": 512, 00:15:25.875 "num_blocks": 65536, 00:15:25.875 "uuid": "59e09621-af3b-4336-9d0e-4843bbb9666a", 00:15:25.875 "assigned_rate_limits": { 00:15:25.875 "rw_ios_per_sec": 0, 00:15:25.875 "rw_mbytes_per_sec": 0, 00:15:25.875 "r_mbytes_per_sec": 0, 00:15:25.875 "w_mbytes_per_sec": 0 00:15:25.875 }, 00:15:25.875 "claimed": true, 00:15:25.875 "claim_type": "exclusive_write", 00:15:25.875 "zoned": false, 00:15:25.875 "supported_io_types": { 00:15:25.875 "read": true, 00:15:25.875 "write": true, 00:15:25.875 "unmap": true, 00:15:25.875 "flush": true, 00:15:25.875 "reset": true, 00:15:25.875 "nvme_admin": false, 00:15:25.875 "nvme_io": false, 00:15:25.875 "nvme_io_md": false, 00:15:25.875 "write_zeroes": true, 00:15:25.875 "zcopy": true, 00:15:25.875 "get_zone_info": false, 00:15:25.875 "zone_management": false, 00:15:25.875 "zone_append": false, 00:15:25.875 "compare": false, 00:15:25.875 "compare_and_write": false, 00:15:25.875 "abort": true, 00:15:25.875 "seek_hole": false, 00:15:25.875 "seek_data": false, 00:15:25.875 "copy": true, 00:15:25.875 "nvme_iov_md": false 00:15:25.875 }, 00:15:25.875 "memory_domains": [ 00:15:25.875 { 00:15:25.875 "dma_device_id": "system", 00:15:25.875 "dma_device_type": 1 00:15:25.875 }, 00:15:25.875 { 00:15:25.875 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:25.875 "dma_device_type": 2 00:15:25.875 } 00:15:25.875 ], 00:15:25.875 "driver_specific": {} 00:15:25.875 }' 00:15:25.875 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:25.875 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:25.875 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:25.875 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:25.875 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:25.875 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:25.875 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:25.875 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:25.875 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:25.875 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:26.133 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:26.133 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:26.133 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:26.391 [2024-07-25 18:42:26.761657] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:26.391 [2024-07-25 18:42:26.762003] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:26.391 [2024-07-25 18:42:26.762148] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.391 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:26.391 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:15:26.391 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:26.391 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:26.391 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:26.391 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:26.391 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:26.391 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:26.391 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:26.391 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:26.391 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:26.391 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:26.391 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:26.392 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:15:26.392 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:26.392 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.392 18:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.650 18:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:26.650 "name": "Existed_Raid", 00:15:26.650 "uuid": "ea4a0a14-ef02-44d4-bf28-1bb17eba1b16", 00:15:26.650 "strip_size_kb": 64, 00:15:26.650 "state": "offline", 00:15:26.650 "raid_level": "raid0", 00:15:26.650 "superblock": true, 00:15:26.650 "num_base_bdevs": 2, 00:15:26.650 "num_base_bdevs_discovered": 1, 00:15:26.650 "num_base_bdevs_operational": 1, 00:15:26.650 "base_bdevs_list": [ 00:15:26.650 { 00:15:26.650 "name": null, 00:15:26.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.650 "is_configured": false, 00:15:26.650 "data_offset": 2048, 00:15:26.650 "data_size": 63488 00:15:26.650 }, 00:15:26.650 { 00:15:26.650 "name": "BaseBdev2", 00:15:26.650 "uuid": "59e09621-af3b-4336-9d0e-4843bbb9666a", 00:15:26.650 "is_configured": true, 00:15:26.650 "data_offset": 2048, 00:15:26.650 "data_size": 63488 00:15:26.650 } 00:15:26.650 ] 00:15:26.650 }' 00:15:26.650 18:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:26.650 18:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.217 18:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:27.217 18:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:27.217 18:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.217 18:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:27.475 18:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:27.475 18:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:27.475 18:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:27.733 [2024-07-25 18:42:28.255636] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:27.733 [2024-07-25 18:42:28.255960] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:15:27.992 18:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:27.992 18:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:27.993 18:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.993 18:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:27.993 18:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:27.993 18:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- 
# '[' -n '' ']' 00:15:27.993 18:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:27.993 18:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 120439 00:15:27.993 18:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 120439 ']' 00:15:27.993 18:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 120439 00:15:27.993 18:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:27.993 18:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:28.251 18:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 120439 00:15:28.251 18:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:28.251 18:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:28.251 18:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 120439' 00:15:28.251 killing process with pid 120439 00:15:28.251 18:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 120439 00:15:28.251 [2024-07-25 18:42:28.591050] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:28.251 18:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 120439 00:15:28.251 [2024-07-25 18:42:28.591311] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:29.627 ************************************ 00:15:29.627 END TEST raid_state_function_test_sb 00:15:29.627 ************************************ 00:15:29.627 18:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:29.627 00:15:29.627 real 0m11.748s 00:15:29.627 user 0m19.945s 00:15:29.627 sys 0m1.969s 00:15:29.627 18:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:29.627 18:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.627 18:42:29 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:15:29.627 18:42:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:29.627 18:42:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:29.627 18:42:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:29.627 ************************************ 00:15:29.627 START TEST raid_superblock_test 00:15:29.627 ************************************ 00:15:29.627 18:42:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:15:29.627 18:42:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:15:29.627 18:42:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:15:29.627 18:42:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:15:29.627 18:42:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:15:29.627 18:42:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:15:29.627 18:42:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:15:29.627 18:42:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:15:29.627 18:42:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:15:29.627 18:42:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:15:29.627 18:42:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:15:29.627 18:42:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:15:29.627 18:42:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:15:29.627 18:42:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:15:29.628 18:42:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:15:29.628 18:42:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:15:29.628 18:42:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:15:29.628 18:42:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=120821 00:15:29.628 18:42:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 120821 /var/tmp/spdk-raid.sock 00:15:29.628 18:42:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 120821 ']' 00:15:29.628 18:42:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:29.628 18:42:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:29.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:29.628 18:42:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:29.628 18:42:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:29.628 18:42:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:29.628 18:42:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.628 [2024-07-25 18:42:29.955770] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:15:29.628 [2024-07-25 18:42:29.956239] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120821 ] 00:15:29.628 [2024-07-25 18:42:30.136155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.887 [2024-07-25 18:42:30.333281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.146 [2024-07-25 18:42:30.527029] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:30.406 18:42:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:30.406 18:42:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:30.406 18:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:15:30.406 18:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:30.406 18:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:15:30.406 18:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:15:30.406 18:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:30.406 18:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:30.406 18:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:15:30.406 18:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:30.406 18:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:30.665 malloc1 00:15:30.666 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:30.925 [2024-07-25 18:42:31.254855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:30.925 [2024-07-25 18:42:31.255146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:30.925 [2024-07-25 18:42:31.255228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:30.925 [2024-07-25 18:42:31.255326] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:30.925 [2024-07-25 18:42:31.258154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:30.925 [2024-07-25 18:42:31.258314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:30.925 pt1 00:15:30.925 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:15:30.925 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:30.925 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:15:30.925 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:15:30.925 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:30.925 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:15:30.925 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:15:30.925 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:30.925 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:31.185 malloc2 00:15:31.185 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:31.185 [2024-07-25 18:42:31.687703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:31.185 [2024-07-25 18:42:31.688049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.185 [2024-07-25 18:42:31.688123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:31.185 [2024-07-25 18:42:31.688227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.185 [2024-07-25 18:42:31.690941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.185 [2024-07-25 18:42:31.691091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:31.185 pt2 00:15:31.185 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:15:31.185 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:31.185 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:31.444 [2024-07-25 18:42:31.867797] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:31.444 [2024-07-25 18:42:31.870255] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:31.444 [2024-07-25 18:42:31.870562] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:15:31.444 [2024-07-25 18:42:31.870662] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:31.444 [2024-07-25 18:42:31.870858] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:15:31.444 [2024-07-25 18:42:31.871340] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:15:31.444 [2024-07-25 18:42:31.871434] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:15:31.444 [2024-07-25 18:42:31.871739] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.444 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:31.444 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:31.444 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:31.444 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:31.444 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:31.444 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:15:31.444 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:31.444 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:31.444 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:31.444 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:31.444 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.445 18:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.704 18:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:31.704 "name": "raid_bdev1", 00:15:31.704 "uuid": "4c52082e-9a4b-4a3c-8111-81f441bb228f", 00:15:31.704 "strip_size_kb": 64, 00:15:31.704 "state": "online", 00:15:31.704 "raid_level": "raid0", 00:15:31.704 "superblock": true, 00:15:31.704 "num_base_bdevs": 2, 00:15:31.704 "num_base_bdevs_discovered": 2, 00:15:31.704 "num_base_bdevs_operational": 2, 00:15:31.704 "base_bdevs_list": [ 00:15:31.704 { 00:15:31.704 "name": "pt1", 00:15:31.704 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:31.704 "is_configured": true, 00:15:31.704 "data_offset": 2048, 00:15:31.704 "data_size": 63488 00:15:31.704 }, 00:15:31.704 { 00:15:31.704 "name": "pt2", 00:15:31.704 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:31.704 "is_configured": true, 00:15:31.704 "data_offset": 2048, 00:15:31.704 "data_size": 63488 00:15:31.704 } 00:15:31.704 ] 00:15:31.704 }' 00:15:31.704 18:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:31.704 18:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.273 18:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:15:32.273 18:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:32.273 18:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:32.273 18:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:32.273 18:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:32.273 18:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:32.273 18:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:32.273 18:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:32.273 [2024-07-25 18:42:32.780259] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.273 18:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:32.273 "name": "raid_bdev1", 00:15:32.273 "aliases": [ 00:15:32.273 "4c52082e-9a4b-4a3c-8111-81f441bb228f" 00:15:32.273 ], 00:15:32.273 "product_name": "Raid Volume", 00:15:32.273 "block_size": 512, 00:15:32.273 "num_blocks": 126976, 00:15:32.273 "uuid": "4c52082e-9a4b-4a3c-8111-81f441bb228f", 00:15:32.273 "assigned_rate_limits": { 00:15:32.273 "rw_ios_per_sec": 0, 00:15:32.273 "rw_mbytes_per_sec": 0, 00:15:32.273 "r_mbytes_per_sec": 0, 00:15:32.273 "w_mbytes_per_sec": 0 00:15:32.273 }, 
00:15:32.273 "claimed": false, 00:15:32.273 "zoned": false, 00:15:32.273 "supported_io_types": { 00:15:32.273 "read": true, 00:15:32.273 "write": true, 00:15:32.273 "unmap": true, 00:15:32.273 "flush": true, 00:15:32.273 "reset": true, 00:15:32.273 "nvme_admin": false, 00:15:32.273 "nvme_io": false, 00:15:32.273 "nvme_io_md": false, 00:15:32.273 "write_zeroes": true, 00:15:32.273 "zcopy": false, 00:15:32.273 "get_zone_info": false, 00:15:32.273 "zone_management": false, 00:15:32.273 "zone_append": false, 00:15:32.273 "compare": false, 00:15:32.273 "compare_and_write": false, 00:15:32.273 "abort": false, 00:15:32.273 "seek_hole": false, 00:15:32.273 "seek_data": false, 00:15:32.273 "copy": false, 00:15:32.273 "nvme_iov_md": false 00:15:32.273 }, 00:15:32.273 "memory_domains": [ 00:15:32.273 { 00:15:32.273 "dma_device_id": "system", 00:15:32.273 "dma_device_type": 1 00:15:32.273 }, 00:15:32.273 { 00:15:32.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.273 "dma_device_type": 2 00:15:32.273 }, 00:15:32.273 { 00:15:32.273 "dma_device_id": "system", 00:15:32.273 "dma_device_type": 1 00:15:32.273 }, 00:15:32.273 { 00:15:32.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.273 "dma_device_type": 2 00:15:32.273 } 00:15:32.273 ], 00:15:32.273 "driver_specific": { 00:15:32.273 "raid": { 00:15:32.273 "uuid": "4c52082e-9a4b-4a3c-8111-81f441bb228f", 00:15:32.273 "strip_size_kb": 64, 00:15:32.273 "state": "online", 00:15:32.273 "raid_level": "raid0", 00:15:32.273 "superblock": true, 00:15:32.273 "num_base_bdevs": 2, 00:15:32.273 "num_base_bdevs_discovered": 2, 00:15:32.273 "num_base_bdevs_operational": 2, 00:15:32.273 "base_bdevs_list": [ 00:15:32.273 { 00:15:32.273 "name": "pt1", 00:15:32.273 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:32.273 "is_configured": true, 00:15:32.273 "data_offset": 2048, 00:15:32.273 "data_size": 63488 00:15:32.273 }, 00:15:32.273 { 00:15:32.273 "name": "pt2", 00:15:32.273 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:32.273 "is_configured": true, 00:15:32.273 "data_offset": 2048, 00:15:32.273 "data_size": 63488 00:15:32.273 } 00:15:32.273 ] 00:15:32.273 } 00:15:32.273 } 00:15:32.273 }' 00:15:32.273 18:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:32.273 18:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:32.273 pt2' 00:15:32.273 18:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:32.273 18:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:32.273 18:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:32.533 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:32.533 "name": "pt1", 00:15:32.533 "aliases": [ 00:15:32.533 "00000000-0000-0000-0000-000000000001" 00:15:32.533 ], 00:15:32.533 "product_name": "passthru", 00:15:32.533 "block_size": 512, 00:15:32.533 "num_blocks": 65536, 00:15:32.533 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:32.533 "assigned_rate_limits": { 00:15:32.533 "rw_ios_per_sec": 0, 00:15:32.533 "rw_mbytes_per_sec": 0, 00:15:32.533 "r_mbytes_per_sec": 0, 00:15:32.533 "w_mbytes_per_sec": 0 00:15:32.533 }, 00:15:32.533 "claimed": true, 00:15:32.533 "claim_type": "exclusive_write", 00:15:32.533 "zoned": false, 00:15:32.533 
"supported_io_types": { 00:15:32.533 "read": true, 00:15:32.533 "write": true, 00:15:32.533 "unmap": true, 00:15:32.533 "flush": true, 00:15:32.533 "reset": true, 00:15:32.533 "nvme_admin": false, 00:15:32.533 "nvme_io": false, 00:15:32.533 "nvme_io_md": false, 00:15:32.533 "write_zeroes": true, 00:15:32.533 "zcopy": true, 00:15:32.533 "get_zone_info": false, 00:15:32.533 "zone_management": false, 00:15:32.533 "zone_append": false, 00:15:32.533 "compare": false, 00:15:32.533 "compare_and_write": false, 00:15:32.533 "abort": true, 00:15:32.533 "seek_hole": false, 00:15:32.533 "seek_data": false, 00:15:32.533 "copy": true, 00:15:32.533 "nvme_iov_md": false 00:15:32.533 }, 00:15:32.533 "memory_domains": [ 00:15:32.533 { 00:15:32.533 "dma_device_id": "system", 00:15:32.533 "dma_device_type": 1 00:15:32.533 }, 00:15:32.533 { 00:15:32.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.533 "dma_device_type": 2 00:15:32.533 } 00:15:32.533 ], 00:15:32.533 "driver_specific": { 00:15:32.533 "passthru": { 00:15:32.533 "name": "pt1", 00:15:32.533 "base_bdev_name": "malloc1" 00:15:32.533 } 00:15:32.533 } 00:15:32.533 }' 00:15:32.533 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.533 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.533 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:32.533 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.792 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.792 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:32.792 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.792 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.792 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:32.792 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.792 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.792 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:32.792 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:32.792 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:32.792 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:33.049 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:33.049 "name": "pt2", 00:15:33.049 "aliases": [ 00:15:33.049 "00000000-0000-0000-0000-000000000002" 00:15:33.049 ], 00:15:33.049 "product_name": "passthru", 00:15:33.049 "block_size": 512, 00:15:33.049 "num_blocks": 65536, 00:15:33.049 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:33.049 "assigned_rate_limits": { 00:15:33.049 "rw_ios_per_sec": 0, 00:15:33.049 "rw_mbytes_per_sec": 0, 00:15:33.049 "r_mbytes_per_sec": 0, 00:15:33.049 "w_mbytes_per_sec": 0 00:15:33.049 }, 00:15:33.049 "claimed": true, 00:15:33.049 "claim_type": "exclusive_write", 00:15:33.049 "zoned": false, 00:15:33.049 "supported_io_types": { 00:15:33.049 "read": true, 00:15:33.049 "write": true, 00:15:33.049 "unmap": true, 00:15:33.049 "flush": true, 00:15:33.049 
"reset": true, 00:15:33.049 "nvme_admin": false, 00:15:33.049 "nvme_io": false, 00:15:33.049 "nvme_io_md": false, 00:15:33.049 "write_zeroes": true, 00:15:33.049 "zcopy": true, 00:15:33.049 "get_zone_info": false, 00:15:33.049 "zone_management": false, 00:15:33.049 "zone_append": false, 00:15:33.049 "compare": false, 00:15:33.049 "compare_and_write": false, 00:15:33.049 "abort": true, 00:15:33.049 "seek_hole": false, 00:15:33.049 "seek_data": false, 00:15:33.049 "copy": true, 00:15:33.049 "nvme_iov_md": false 00:15:33.049 }, 00:15:33.049 "memory_domains": [ 00:15:33.049 { 00:15:33.049 "dma_device_id": "system", 00:15:33.049 "dma_device_type": 1 00:15:33.049 }, 00:15:33.049 { 00:15:33.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.050 "dma_device_type": 2 00:15:33.050 } 00:15:33.050 ], 00:15:33.050 "driver_specific": { 00:15:33.050 "passthru": { 00:15:33.050 "name": "pt2", 00:15:33.050 "base_bdev_name": "malloc2" 00:15:33.050 } 00:15:33.050 } 00:15:33.050 }' 00:15:33.050 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:33.050 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:33.050 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:33.050 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:33.320 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:33.320 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:33.320 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:33.320 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:33.320 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:33.320 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:33.320 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:33.320 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:33.320 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:15:33.320 18:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:33.615 [2024-07-25 18:42:34.136417] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.615 18:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=4c52082e-9a4b-4a3c-8111-81f441bb228f 00:15:33.615 18:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 4c52082e-9a4b-4a3c-8111-81f441bb228f ']' 00:15:33.615 18:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:33.874 [2024-07-25 18:42:34.372221] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:33.874 [2024-07-25 18:42:34.372415] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.874 [2024-07-25 18:42:34.372695] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.874 [2024-07-25 18:42:34.372857] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:15:33.874 [2024-07-25 18:42:34.372940] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:15:33.874 18:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.874 18:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:15:34.132 18:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:15:34.132 18:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:15:34.132 18:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:15:34.132 18:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:34.390 18:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:15:34.390 18:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:34.648 18:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:34.648 18:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:34.907 18:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:15:34.907 18:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:34.907 18:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:34.907 18:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:34.907 18:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:34.907 18:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:34.907 18:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:34.907 18:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:34.907 18:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:34.907 18:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:34.907 18:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:34.907 18:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:34.907 18:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:34.907 [2024-07-25 18:42:35.468405] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:34.907 [2024-07-25 18:42:35.470948] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:34.907 [2024-07-25 18:42:35.471176] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:34.907 [2024-07-25 18:42:35.471380] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:34.907 [2024-07-25 18:42:35.471490] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:34.907 [2024-07-25 18:42:35.471526] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state configuring 00:15:34.907 request: 00:15:34.907 { 00:15:34.907 "name": "raid_bdev1", 00:15:34.907 "raid_level": "raid0", 00:15:34.907 "base_bdevs": [ 00:15:34.907 "malloc1", 00:15:34.907 "malloc2" 00:15:34.907 ], 00:15:34.907 "strip_size_kb": 64, 00:15:34.907 "superblock": false, 00:15:34.907 "method": "bdev_raid_create", 00:15:34.907 "req_id": 1 00:15:34.907 } 00:15:34.907 Got JSON-RPC error response 00:15:34.907 response: 00:15:34.907 { 00:15:34.907 "code": -17, 00:15:34.907 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:34.907 } 00:15:35.166 18:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:35.166 18:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:35.166 18:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:35.166 18:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:35.166 18:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.166 18:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:15:35.166 18:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:15:35.166 18:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:15:35.166 18:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:35.425 [2024-07-25 18:42:35.876453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:35.425 [2024-07-25 18:42:35.876709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.425 [2024-07-25 18:42:35.876785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:35.425 [2024-07-25 18:42:35.876898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.425 [2024-07-25 18:42:35.879705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.425 [2024-07-25 18:42:35.879896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:35.425 [2024-07-25 18:42:35.880115] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:35.425 [2024-07-25 18:42:35.880286] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:35.425 pt1 00:15:35.425 18:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:15:35.425 18:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:35.425 18:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:35.425 18:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:35.425 18:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:35.425 18:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:35.425 18:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:35.425 18:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:35.425 18:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:35.425 18:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:35.425 18:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.425 18:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.683 18:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:35.683 "name": "raid_bdev1", 00:15:35.683 "uuid": "4c52082e-9a4b-4a3c-8111-81f441bb228f", 00:15:35.683 "strip_size_kb": 64, 00:15:35.683 "state": "configuring", 00:15:35.683 "raid_level": "raid0", 00:15:35.683 "superblock": true, 00:15:35.683 "num_base_bdevs": 2, 00:15:35.683 "num_base_bdevs_discovered": 1, 00:15:35.683 "num_base_bdevs_operational": 2, 00:15:35.683 "base_bdevs_list": [ 00:15:35.683 { 00:15:35.683 "name": "pt1", 00:15:35.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:35.683 "is_configured": true, 00:15:35.683 "data_offset": 2048, 00:15:35.683 "data_size": 63488 00:15:35.683 }, 00:15:35.683 { 00:15:35.683 "name": null, 00:15:35.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.683 "is_configured": false, 00:15:35.683 "data_offset": 2048, 00:15:35.683 "data_size": 63488 00:15:35.683 } 00:15:35.683 ] 00:15:35.683 }' 00:15:35.683 18:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:35.683 18:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.250 18:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:15:36.250 18:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:15:36.250 18:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:15:36.250 18:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:36.508 [2024-07-25 18:42:36.884425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:36.508 [2024-07-25 18:42:36.884699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.508 [2024-07-25 18:42:36.884768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:36.508 [2024-07-25 18:42:36.884876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.508 [2024-07-25 
18:42:36.885437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.508 [2024-07-25 18:42:36.885605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:36.508 [2024-07-25 18:42:36.885869] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:36.508 [2024-07-25 18:42:36.885986] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:36.508 [2024-07-25 18:42:36.886178] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:15:36.508 [2024-07-25 18:42:36.886300] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:36.508 [2024-07-25 18:42:36.886433] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:36.508 [2024-07-25 18:42:36.886932] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:15:36.508 [2024-07-25 18:42:36.887042] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:15:36.508 [2024-07-25 18:42:36.887262] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.508 pt2 00:15:36.508 18:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:15:36.508 18:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:15:36.508 18:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:36.508 18:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:36.508 18:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:36.508 18:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:36.508 18:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:36.508 18:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:36.508 18:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:36.508 18:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:36.508 18:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:36.508 18:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:36.508 18:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.508 18:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.766 18:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:36.766 "name": "raid_bdev1", 00:15:36.766 "uuid": "4c52082e-9a4b-4a3c-8111-81f441bb228f", 00:15:36.766 "strip_size_kb": 64, 00:15:36.766 "state": "online", 00:15:36.766 "raid_level": "raid0", 00:15:36.766 "superblock": true, 00:15:36.766 "num_base_bdevs": 2, 00:15:36.766 "num_base_bdevs_discovered": 2, 00:15:36.766 "num_base_bdevs_operational": 2, 00:15:36.766 "base_bdevs_list": [ 00:15:36.766 { 00:15:36.766 "name": "pt1", 00:15:36.766 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:36.766 "is_configured": true, 00:15:36.766 "data_offset": 2048, 00:15:36.766 
"data_size": 63488 00:15:36.766 }, 00:15:36.766 { 00:15:36.766 "name": "pt2", 00:15:36.766 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.766 "is_configured": true, 00:15:36.766 "data_offset": 2048, 00:15:36.766 "data_size": 63488 00:15:36.766 } 00:15:36.766 ] 00:15:36.766 }' 00:15:36.766 18:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:36.766 18:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.332 18:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:15:37.332 18:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:37.332 18:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:37.332 18:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:37.332 18:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:37.332 18:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:37.332 18:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:37.332 18:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:37.591 [2024-07-25 18:42:38.032846] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.591 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:37.591 "name": "raid_bdev1", 00:15:37.591 "aliases": [ 00:15:37.591 "4c52082e-9a4b-4a3c-8111-81f441bb228f" 00:15:37.591 ], 00:15:37.591 "product_name": "Raid Volume", 00:15:37.591 "block_size": 512, 00:15:37.591 "num_blocks": 126976, 00:15:37.591 "uuid": "4c52082e-9a4b-4a3c-8111-81f441bb228f", 00:15:37.591 "assigned_rate_limits": { 00:15:37.591 "rw_ios_per_sec": 0, 00:15:37.591 "rw_mbytes_per_sec": 0, 00:15:37.591 "r_mbytes_per_sec": 0, 00:15:37.591 "w_mbytes_per_sec": 0 00:15:37.591 }, 00:15:37.591 "claimed": false, 00:15:37.591 "zoned": false, 00:15:37.591 "supported_io_types": { 00:15:37.591 "read": true, 00:15:37.591 "write": true, 00:15:37.591 "unmap": true, 00:15:37.591 "flush": true, 00:15:37.591 "reset": true, 00:15:37.591 "nvme_admin": false, 00:15:37.591 "nvme_io": false, 00:15:37.591 "nvme_io_md": false, 00:15:37.591 "write_zeroes": true, 00:15:37.591 "zcopy": false, 00:15:37.591 "get_zone_info": false, 00:15:37.591 "zone_management": false, 00:15:37.591 "zone_append": false, 00:15:37.591 "compare": false, 00:15:37.591 "compare_and_write": false, 00:15:37.591 "abort": false, 00:15:37.591 "seek_hole": false, 00:15:37.591 "seek_data": false, 00:15:37.591 "copy": false, 00:15:37.591 "nvme_iov_md": false 00:15:37.591 }, 00:15:37.591 "memory_domains": [ 00:15:37.591 { 00:15:37.591 "dma_device_id": "system", 00:15:37.591 "dma_device_type": 1 00:15:37.591 }, 00:15:37.591 { 00:15:37.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.591 "dma_device_type": 2 00:15:37.591 }, 00:15:37.591 { 00:15:37.591 "dma_device_id": "system", 00:15:37.591 "dma_device_type": 1 00:15:37.591 }, 00:15:37.591 { 00:15:37.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.591 "dma_device_type": 2 00:15:37.591 } 00:15:37.591 ], 00:15:37.591 "driver_specific": { 00:15:37.591 "raid": { 00:15:37.591 "uuid": "4c52082e-9a4b-4a3c-8111-81f441bb228f", 00:15:37.591 "strip_size_kb": 64, 00:15:37.591 "state": 
"online", 00:15:37.591 "raid_level": "raid0", 00:15:37.591 "superblock": true, 00:15:37.591 "num_base_bdevs": 2, 00:15:37.591 "num_base_bdevs_discovered": 2, 00:15:37.591 "num_base_bdevs_operational": 2, 00:15:37.591 "base_bdevs_list": [ 00:15:37.591 { 00:15:37.591 "name": "pt1", 00:15:37.591 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:37.591 "is_configured": true, 00:15:37.591 "data_offset": 2048, 00:15:37.591 "data_size": 63488 00:15:37.591 }, 00:15:37.591 { 00:15:37.591 "name": "pt2", 00:15:37.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.591 "is_configured": true, 00:15:37.591 "data_offset": 2048, 00:15:37.591 "data_size": 63488 00:15:37.591 } 00:15:37.591 ] 00:15:37.591 } 00:15:37.591 } 00:15:37.591 }' 00:15:37.591 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:37.591 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:37.591 pt2' 00:15:37.591 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:37.591 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:37.591 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:37.850 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:37.850 "name": "pt1", 00:15:37.850 "aliases": [ 00:15:37.850 "00000000-0000-0000-0000-000000000001" 00:15:37.850 ], 00:15:37.850 "product_name": "passthru", 00:15:37.850 "block_size": 512, 00:15:37.850 "num_blocks": 65536, 00:15:37.850 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:37.850 "assigned_rate_limits": { 00:15:37.850 "rw_ios_per_sec": 0, 00:15:37.850 "rw_mbytes_per_sec": 0, 00:15:37.850 "r_mbytes_per_sec": 0, 00:15:37.850 "w_mbytes_per_sec": 0 00:15:37.850 }, 00:15:37.850 "claimed": true, 00:15:37.850 "claim_type": "exclusive_write", 00:15:37.850 "zoned": false, 00:15:37.850 "supported_io_types": { 00:15:37.850 "read": true, 00:15:37.850 "write": true, 00:15:37.850 "unmap": true, 00:15:37.850 "flush": true, 00:15:37.850 "reset": true, 00:15:37.850 "nvme_admin": false, 00:15:37.850 "nvme_io": false, 00:15:37.850 "nvme_io_md": false, 00:15:37.850 "write_zeroes": true, 00:15:37.851 "zcopy": true, 00:15:37.851 "get_zone_info": false, 00:15:37.851 "zone_management": false, 00:15:37.851 "zone_append": false, 00:15:37.851 "compare": false, 00:15:37.851 "compare_and_write": false, 00:15:37.851 "abort": true, 00:15:37.851 "seek_hole": false, 00:15:37.851 "seek_data": false, 00:15:37.851 "copy": true, 00:15:37.851 "nvme_iov_md": false 00:15:37.851 }, 00:15:37.851 "memory_domains": [ 00:15:37.851 { 00:15:37.851 "dma_device_id": "system", 00:15:37.851 "dma_device_type": 1 00:15:37.851 }, 00:15:37.851 { 00:15:37.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.851 "dma_device_type": 2 00:15:37.851 } 00:15:37.851 ], 00:15:37.851 "driver_specific": { 00:15:37.851 "passthru": { 00:15:37.851 "name": "pt1", 00:15:37.851 "base_bdev_name": "malloc1" 00:15:37.851 } 00:15:37.851 } 00:15:37.851 }' 00:15:37.851 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:37.851 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.108 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:15:38.108 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:38.108 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:38.108 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:38.108 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.108 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.108 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:38.108 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.108 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.366 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:38.366 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:38.366 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:38.366 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:38.625 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:38.625 "name": "pt2", 00:15:38.625 "aliases": [ 00:15:38.625 "00000000-0000-0000-0000-000000000002" 00:15:38.625 ], 00:15:38.625 "product_name": "passthru", 00:15:38.625 "block_size": 512, 00:15:38.625 "num_blocks": 65536, 00:15:38.625 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:38.625 "assigned_rate_limits": { 00:15:38.625 "rw_ios_per_sec": 0, 00:15:38.625 "rw_mbytes_per_sec": 0, 00:15:38.625 "r_mbytes_per_sec": 0, 00:15:38.625 "w_mbytes_per_sec": 0 00:15:38.625 }, 00:15:38.625 "claimed": true, 00:15:38.625 "claim_type": "exclusive_write", 00:15:38.625 "zoned": false, 00:15:38.625 "supported_io_types": { 00:15:38.625 "read": true, 00:15:38.625 "write": true, 00:15:38.625 "unmap": true, 00:15:38.625 "flush": true, 00:15:38.625 "reset": true, 00:15:38.625 "nvme_admin": false, 00:15:38.625 "nvme_io": false, 00:15:38.625 "nvme_io_md": false, 00:15:38.625 "write_zeroes": true, 00:15:38.625 "zcopy": true, 00:15:38.625 "get_zone_info": false, 00:15:38.625 "zone_management": false, 00:15:38.625 "zone_append": false, 00:15:38.625 "compare": false, 00:15:38.625 "compare_and_write": false, 00:15:38.625 "abort": true, 00:15:38.625 "seek_hole": false, 00:15:38.625 "seek_data": false, 00:15:38.625 "copy": true, 00:15:38.625 "nvme_iov_md": false 00:15:38.625 }, 00:15:38.625 "memory_domains": [ 00:15:38.625 { 00:15:38.625 "dma_device_id": "system", 00:15:38.625 "dma_device_type": 1 00:15:38.625 }, 00:15:38.625 { 00:15:38.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.625 "dma_device_type": 2 00:15:38.625 } 00:15:38.625 ], 00:15:38.625 "driver_specific": { 00:15:38.625 "passthru": { 00:15:38.625 "name": "pt2", 00:15:38.625 "base_bdev_name": "malloc2" 00:15:38.625 } 00:15:38.625 } 00:15:38.625 }' 00:15:38.625 18:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.625 18:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.625 18:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:38.625 18:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:38.625 18:42:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:38.625 18:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:38.625 18:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.625 18:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.884 18:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:38.884 18:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.884 18:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.884 18:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:38.884 18:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:38.884 18:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:15:39.143 [2024-07-25 18:42:39.589013] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.143 18:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 4c52082e-9a4b-4a3c-8111-81f441bb228f '!=' 4c52082e-9a4b-4a3c-8111-81f441bb228f ']' 00:15:39.143 18:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:15:39.143 18:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:39.143 18:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:39.143 18:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 120821 00:15:39.143 18:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 120821 ']' 00:15:39.143 18:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 120821 00:15:39.143 18:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:39.144 18:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:39.144 18:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 120821 00:15:39.144 18:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:39.144 18:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:39.144 18:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 120821' 00:15:39.144 killing process with pid 120821 00:15:39.144 18:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 120821 00:15:39.144 [2024-07-25 18:42:39.645523] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:39.144 18:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 120821 00:15:39.144 [2024-07-25 18:42:39.645707] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.144 [2024-07-25 18:42:39.645764] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.144 [2024-07-25 18:42:39.645787] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:15:39.402 [2024-07-25 18:42:39.817114] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:40.781 ************************************ 
00:15:40.781 END TEST raid_superblock_test 00:15:40.781 ************************************ 00:15:40.781 18:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:15:40.781 00:15:40.781 real 0m11.128s 00:15:40.781 user 0m18.756s 00:15:40.781 sys 0m1.887s 00:15:40.781 18:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:40.781 18:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.781 18:42:41 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:15:40.781 18:42:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:40.781 18:42:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:40.781 18:42:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:40.781 ************************************ 00:15:40.781 START TEST raid_read_error_test 00:15:40.781 ************************************ 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev1 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev2 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.RsV0saQOFQ 00:15:40.781 18:42:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=121191 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 121191 /var/tmp/spdk-raid.sock 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 121191 ']' 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:40.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:40.781 18:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.781 [2024-07-25 18:42:41.166054] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:40.781 [2024-07-25 18:42:41.166514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121191 ] 00:15:40.781 [2024-07-25 18:42:41.346141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.040 [2024-07-25 18:42:41.596549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.299 [2024-07-25 18:42:41.870514] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.557 18:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:41.557 18:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:15:41.557 18:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:41.557 18:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:41.816 BaseBdev1_malloc 00:15:41.816 18:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:42.074 true 00:15:42.074 18:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:42.333 [2024-07-25 18:42:42.722150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:42.333 [2024-07-25 18:42:42.722435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.333 [2024-07-25 18:42:42.722527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:15:42.333 [2024-07-25 18:42:42.722634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.333 [2024-07-25 18:42:42.725234] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.333 [2024-07-25 18:42:42.725386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:42.333 BaseBdev1 00:15:42.333 18:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:42.333 18:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:42.593 BaseBdev2_malloc 00:15:42.593 18:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:42.593 true 00:15:42.593 18:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:42.852 [2024-07-25 18:42:43.322833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:42.852 [2024-07-25 18:42:43.323169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.852 [2024-07-25 18:42:43.323248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:42.852 [2024-07-25 18:42:43.323496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.852 [2024-07-25 18:42:43.326145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.852 [2024-07-25 18:42:43.326334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:42.852 BaseBdev2 00:15:42.852 18:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:43.133 [2024-07-25 18:42:43.499053] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:43.133 [2024-07-25 18:42:43.501443] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:43.133 [2024-07-25 18:42:43.501826] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:15:43.133 [2024-07-25 18:42:43.501945] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:43.133 [2024-07-25 18:42:43.502099] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:15:43.133 [2024-07-25 18:42:43.502558] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:15:43.133 [2024-07-25 18:42:43.502698] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:15:43.133 [2024-07-25 18:42:43.502999] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.133 18:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:43.133 18:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:43.133 18:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:43.133 18:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:43.133 18:42:43 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:43.133 18:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:43.133 18:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:43.133 18:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:43.133 18:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:43.133 18:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:43.133 18:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.133 18:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.133 18:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:43.133 "name": "raid_bdev1", 00:15:43.133 "uuid": "b6e32dbb-d5b8-4977-baf4-1067680593d7", 00:15:43.133 "strip_size_kb": 64, 00:15:43.133 "state": "online", 00:15:43.133 "raid_level": "raid0", 00:15:43.133 "superblock": true, 00:15:43.133 "num_base_bdevs": 2, 00:15:43.133 "num_base_bdevs_discovered": 2, 00:15:43.133 "num_base_bdevs_operational": 2, 00:15:43.133 "base_bdevs_list": [ 00:15:43.133 { 00:15:43.133 "name": "BaseBdev1", 00:15:43.133 "uuid": "6bb54aeb-455e-5013-bfa8-f5241a43a1d9", 00:15:43.133 "is_configured": true, 00:15:43.133 "data_offset": 2048, 00:15:43.133 "data_size": 63488 00:15:43.133 }, 00:15:43.133 { 00:15:43.133 "name": "BaseBdev2", 00:15:43.133 "uuid": "473bfbe9-b7ef-5d2c-b9ac-939117651ede", 00:15:43.133 "is_configured": true, 00:15:43.133 "data_offset": 2048, 00:15:43.133 "data_size": 63488 00:15:43.133 } 00:15:43.133 ] 00:15:43.133 }' 00:15:43.133 18:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:43.133 18:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.700 18:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:15:43.700 18:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:43.959 [2024-07-25 18:42:44.336860] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:44.894 18:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:45.151 18:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:15:45.151 18:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:45.151 18:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:15:45.151 18:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:45.151 18:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:45.151 18:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:45.151 18:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:45.151 18:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:15:45.151 18:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:45.151 18:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:45.151 18:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:45.151 18:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:45.151 18:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:45.151 18:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.151 18:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.409 18:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:45.409 "name": "raid_bdev1", 00:15:45.409 "uuid": "b6e32dbb-d5b8-4977-baf4-1067680593d7", 00:15:45.409 "strip_size_kb": 64, 00:15:45.409 "state": "online", 00:15:45.409 "raid_level": "raid0", 00:15:45.409 "superblock": true, 00:15:45.409 "num_base_bdevs": 2, 00:15:45.409 "num_base_bdevs_discovered": 2, 00:15:45.409 "num_base_bdevs_operational": 2, 00:15:45.409 "base_bdevs_list": [ 00:15:45.409 { 00:15:45.409 "name": "BaseBdev1", 00:15:45.409 "uuid": "6bb54aeb-455e-5013-bfa8-f5241a43a1d9", 00:15:45.409 "is_configured": true, 00:15:45.409 "data_offset": 2048, 00:15:45.409 "data_size": 63488 00:15:45.409 }, 00:15:45.409 { 00:15:45.409 "name": "BaseBdev2", 00:15:45.409 "uuid": "473bfbe9-b7ef-5d2c-b9ac-939117651ede", 00:15:45.409 "is_configured": true, 00:15:45.409 "data_offset": 2048, 00:15:45.409 "data_size": 63488 00:15:45.409 } 00:15:45.409 ] 00:15:45.409 }' 00:15:45.409 18:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:45.409 18:42:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.976 18:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:45.976 [2024-07-25 18:42:46.532471] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:45.976 [2024-07-25 18:42:46.532761] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:45.976 [2024-07-25 18:42:46.535438] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:45.976 [2024-07-25 18:42:46.535600] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.976 [2024-07-25 18:42:46.535669] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:45.976 [2024-07-25 18:42:46.535745] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:15:45.976 0 00:15:46.235 18:42:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 121191 00:15:46.235 18:42:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 121191 ']' 00:15:46.235 18:42:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 121191 00:15:46.235 18:42:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:15:46.235 18:42:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:46.235 
18:42:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 121191 00:15:46.235 18:42:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:46.235 18:42:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:46.235 18:42:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 121191' 00:15:46.235 killing process with pid 121191 00:15:46.235 18:42:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 121191 00:15:46.235 [2024-07-25 18:42:46.586479] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:46.235 18:42:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 121191 00:15:46.235 [2024-07-25 18:42:46.727169] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:48.180 18:42:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.RsV0saQOFQ 00:15:48.180 18:42:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:15:48.180 18:42:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:15:48.180 ************************************ 00:15:48.180 END TEST raid_read_error_test 00:15:48.180 ************************************ 00:15:48.180 18:42:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.46 00:15:48.180 18:42:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:15:48.180 18:42:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:48.180 18:42:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:48.180 18:42:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.46 != \0\.\0\0 ]] 00:15:48.180 00:15:48.180 real 0m7.179s 00:15:48.180 user 0m10.133s 00:15:48.180 sys 0m0.993s 00:15:48.180 18:42:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:48.180 18:42:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.180 18:42:48 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:15:48.180 18:42:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:48.180 18:42:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:48.180 18:42:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:48.180 ************************************ 00:15:48.180 START TEST raid_write_error_test 00:15:48.180 ************************************ 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev1 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ 
)) 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev2 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.VSChJPp0cx 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=121389 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 121389 /var/tmp/spdk-raid.sock 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 121389 ']' 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:48.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:48.180 18:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.180 [2024-07-25 18:42:48.429385] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:15:48.180 [2024-07-25 18:42:48.429868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121389 ] 00:15:48.180 [2024-07-25 18:42:48.617494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.441 [2024-07-25 18:42:48.857410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.700 [2024-07-25 18:42:49.124433] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.959 18:42:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:48.959 18:42:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:15:48.959 18:42:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:48.959 18:42:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:49.217 BaseBdev1_malloc 00:15:49.217 18:42:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:49.476 true 00:15:49.476 18:42:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:49.735 [2024-07-25 18:42:50.087486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:49.735 [2024-07-25 18:42:50.087820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.735 [2024-07-25 18:42:50.087900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:15:49.735 [2024-07-25 18:42:50.088044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.735 [2024-07-25 18:42:50.090796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.735 [2024-07-25 18:42:50.090964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:49.735 BaseBdev1 00:15:49.735 18:42:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:49.735 18:42:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:49.994 BaseBdev2_malloc 00:15:49.994 18:42:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:50.253 true 00:15:50.253 18:42:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:50.511 [2024-07-25 18:42:50.892458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:50.511 [2024-07-25 18:42:50.892805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.511 [2024-07-25 18:42:50.892886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:50.511 [2024-07-25 
18:42:50.892996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.511 [2024-07-25 18:42:50.895675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.511 [2024-07-25 18:42:50.895854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:50.511 BaseBdev2 00:15:50.511 18:42:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:50.770 [2024-07-25 18:42:51.132580] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.770 [2024-07-25 18:42:51.135107] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:50.770 [2024-07-25 18:42:51.135458] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:15:50.770 [2024-07-25 18:42:51.135595] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:50.770 [2024-07-25 18:42:51.135785] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:15:50.770 [2024-07-25 18:42:51.136230] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:15:50.770 [2024-07-25 18:42:51.136269] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:15:50.770 [2024-07-25 18:42:51.136690] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.770 18:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:50.770 18:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:50.770 18:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:50.770 18:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:50.770 18:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:50.770 18:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:50.770 18:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:50.770 18:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:50.770 18:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:50.770 18:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:50.770 18:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.770 18:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.770 18:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:50.770 "name": "raid_bdev1", 00:15:50.771 "uuid": "007e00b6-4186-450e-a0d3-d50fef7c84ae", 00:15:50.771 "strip_size_kb": 64, 00:15:50.771 "state": "online", 00:15:50.771 "raid_level": "raid0", 00:15:50.771 "superblock": true, 00:15:50.771 "num_base_bdevs": 2, 00:15:50.771 "num_base_bdevs_discovered": 2, 00:15:50.771 "num_base_bdevs_operational": 2, 00:15:50.771 "base_bdevs_list": [ 00:15:50.771 { 00:15:50.771 
"name": "BaseBdev1", 00:15:50.771 "uuid": "13ef1dc5-4e7c-5500-b6a0-88f676dee2af", 00:15:50.771 "is_configured": true, 00:15:50.771 "data_offset": 2048, 00:15:50.771 "data_size": 63488 00:15:50.771 }, 00:15:50.771 { 00:15:50.771 "name": "BaseBdev2", 00:15:50.771 "uuid": "e42e278b-3176-5173-af8d-bfc91947f491", 00:15:50.771 "is_configured": true, 00:15:50.771 "data_offset": 2048, 00:15:50.771 "data_size": 63488 00:15:50.771 } 00:15:50.771 ] 00:15:50.771 }' 00:15:50.771 18:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:50.771 18:42:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.337 18:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:15:51.337 18:42:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:51.337 [2024-07-25 18:42:51.862459] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:52.274 18:42:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:52.533 18:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:15:52.533 18:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:52.533 18:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:15:52.533 18:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:52.533 18:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:52.533 18:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:52.533 18:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:52.533 18:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:52.533 18:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:52.533 18:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:52.533 18:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:52.533 18:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:52.533 18:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:52.533 18:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.533 18:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.792 18:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:52.792 "name": "raid_bdev1", 00:15:52.792 "uuid": "007e00b6-4186-450e-a0d3-d50fef7c84ae", 00:15:52.792 "strip_size_kb": 64, 00:15:52.792 "state": "online", 00:15:52.792 "raid_level": "raid0", 00:15:52.792 "superblock": true, 00:15:52.792 "num_base_bdevs": 2, 00:15:52.792 "num_base_bdevs_discovered": 2, 00:15:52.792 "num_base_bdevs_operational": 2, 00:15:52.792 "base_bdevs_list": [ 00:15:52.792 { 00:15:52.792 
"name": "BaseBdev1", 00:15:52.792 "uuid": "13ef1dc5-4e7c-5500-b6a0-88f676dee2af", 00:15:52.792 "is_configured": true, 00:15:52.792 "data_offset": 2048, 00:15:52.792 "data_size": 63488 00:15:52.792 }, 00:15:52.792 { 00:15:52.792 "name": "BaseBdev2", 00:15:52.792 "uuid": "e42e278b-3176-5173-af8d-bfc91947f491", 00:15:52.792 "is_configured": true, 00:15:52.792 "data_offset": 2048, 00:15:52.792 "data_size": 63488 00:15:52.792 } 00:15:52.792 ] 00:15:52.792 }' 00:15:52.792 18:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:52.792 18:42:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.724 18:42:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:53.724 [2024-07-25 18:42:54.187955] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:53.724 [2024-07-25 18:42:54.188229] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.724 [2024-07-25 18:42:54.191024] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.724 [2024-07-25 18:42:54.191183] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.724 [2024-07-25 18:42:54.191256] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:53.724 [2024-07-25 18:42:54.191460] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:15:53.724 0 00:15:53.724 18:42:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 121389 00:15:53.724 18:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 121389 ']' 00:15:53.724 18:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 121389 00:15:53.724 18:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:15:53.724 18:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:53.724 18:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 121389 00:15:53.724 killing process with pid 121389 00:15:53.724 18:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:53.724 18:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:53.724 18:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 121389' 00:15:53.724 18:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 121389 00:15:53.724 18:42:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 121389 00:15:53.724 [2024-07-25 18:42:54.232554] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:53.982 [2024-07-25 18:42:54.374327] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:55.359 18:42:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:15:55.359 18:42:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.VSChJPp0cx 00:15:55.359 18:42:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:15:55.359 ************************************ 00:15:55.359 END TEST raid_write_error_test 00:15:55.359 
************************************ 00:15:55.359 18:42:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.43 00:15:55.359 18:42:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:15:55.359 18:42:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:55.359 18:42:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:55.359 18:42:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.43 != \0\.\0\0 ]] 00:15:55.359 00:15:55.359 real 0m7.580s 00:15:55.359 user 0m10.828s 00:15:55.359 sys 0m1.087s 00:15:55.359 18:42:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:55.359 18:42:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.619 18:42:55 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:15:55.619 18:42:55 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:15:55.619 18:42:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:55.619 18:42:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:55.619 18:42:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:55.619 ************************************ 00:15:55.619 START TEST raid_state_function_test 00:15:55.619 ************************************ 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 
00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=121578 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 121578' 00:15:55.619 Process raid pid: 121578 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 121578 /var/tmp/spdk-raid.sock 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 121578 ']' 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:55.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:55.619 18:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.619 [2024-07-25 18:42:56.080255] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
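Each verify_raid_bdev_state call below pulls the Existed_Raid entry out of bdev_raid_get_bdevs and compares the reported state, level, strip size and member counts against the expected values. A minimal sketch of an equivalent check (the jq filters here are added for illustration; the helper's own implementation in bdev_raid.sh is what actually runs):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
echo "$info" | jq -e '.state == "configuring"
                      and .raid_level == "concat"
                      and .strip_size_kb == 64
                      and .num_base_bdevs_operational == 2' > /dev/null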
00:15:55.619 [2024-07-25 18:42:56.080744] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.878 [2024-07-25 18:42:56.265390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.137 [2024-07-25 18:42:56.478350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.137 [2024-07-25 18:42:56.670004] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.396 18:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:56.396 18:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:15:56.396 18:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:56.655 [2024-07-25 18:42:57.152074] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:56.655 [2024-07-25 18:42:57.152381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:56.655 [2024-07-25 18:42:57.152487] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:56.655 [2024-07-25 18:42:57.152604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:56.655 18:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:56.655 18:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:56.655 18:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:56.655 18:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:56.656 18:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:56.656 18:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:56.656 18:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:56.656 18:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:56.656 18:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:56.656 18:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:56.656 18:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.656 18:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.914 18:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:56.914 "name": "Existed_Raid", 00:15:56.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.914 "strip_size_kb": 64, 00:15:56.914 "state": "configuring", 00:15:56.914 "raid_level": "concat", 00:15:56.914 "superblock": false, 00:15:56.914 "num_base_bdevs": 2, 00:15:56.914 "num_base_bdevs_discovered": 0, 00:15:56.914 "num_base_bdevs_operational": 2, 00:15:56.914 
"base_bdevs_list": [ 00:15:56.914 { 00:15:56.914 "name": "BaseBdev1", 00:15:56.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.914 "is_configured": false, 00:15:56.914 "data_offset": 0, 00:15:56.914 "data_size": 0 00:15:56.914 }, 00:15:56.915 { 00:15:56.915 "name": "BaseBdev2", 00:15:56.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.915 "is_configured": false, 00:15:56.915 "data_offset": 0, 00:15:56.915 "data_size": 0 00:15:56.915 } 00:15:56.915 ] 00:15:56.915 }' 00:15:56.915 18:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:56.915 18:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.480 18:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:57.740 [2024-07-25 18:42:58.168156] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:57.740 [2024-07-25 18:42:58.168384] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:15:57.740 18:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:57.998 [2024-07-25 18:42:58.420225] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:57.998 [2024-07-25 18:42:58.420458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:57.998 [2024-07-25 18:42:58.420532] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:57.998 [2024-07-25 18:42:58.420589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:57.998 18:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:58.257 [2024-07-25 18:42:58.638902] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.257 BaseBdev1 00:15:58.257 18:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:58.257 18:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:58.257 18:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:58.257 18:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:58.257 18:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:58.257 18:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:58.257 18:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:58.516 18:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:58.775 [ 00:15:58.775 { 00:15:58.775 "name": "BaseBdev1", 00:15:58.775 "aliases": [ 00:15:58.775 "8d891355-8a96-467d-8a45-99392e6bb1f0" 00:15:58.775 ], 00:15:58.775 "product_name": "Malloc disk", 00:15:58.775 "block_size": 512, 
00:15:58.775 "num_blocks": 65536, 00:15:58.775 "uuid": "8d891355-8a96-467d-8a45-99392e6bb1f0", 00:15:58.775 "assigned_rate_limits": { 00:15:58.775 "rw_ios_per_sec": 0, 00:15:58.775 "rw_mbytes_per_sec": 0, 00:15:58.775 "r_mbytes_per_sec": 0, 00:15:58.775 "w_mbytes_per_sec": 0 00:15:58.775 }, 00:15:58.775 "claimed": true, 00:15:58.775 "claim_type": "exclusive_write", 00:15:58.775 "zoned": false, 00:15:58.775 "supported_io_types": { 00:15:58.775 "read": true, 00:15:58.775 "write": true, 00:15:58.775 "unmap": true, 00:15:58.775 "flush": true, 00:15:58.775 "reset": true, 00:15:58.775 "nvme_admin": false, 00:15:58.775 "nvme_io": false, 00:15:58.775 "nvme_io_md": false, 00:15:58.775 "write_zeroes": true, 00:15:58.775 "zcopy": true, 00:15:58.775 "get_zone_info": false, 00:15:58.775 "zone_management": false, 00:15:58.775 "zone_append": false, 00:15:58.775 "compare": false, 00:15:58.775 "compare_and_write": false, 00:15:58.775 "abort": true, 00:15:58.775 "seek_hole": false, 00:15:58.775 "seek_data": false, 00:15:58.775 "copy": true, 00:15:58.775 "nvme_iov_md": false 00:15:58.775 }, 00:15:58.775 "memory_domains": [ 00:15:58.775 { 00:15:58.775 "dma_device_id": "system", 00:15:58.775 "dma_device_type": 1 00:15:58.775 }, 00:15:58.775 { 00:15:58.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.775 "dma_device_type": 2 00:15:58.775 } 00:15:58.775 ], 00:15:58.775 "driver_specific": {} 00:15:58.775 } 00:15:58.775 ] 00:15:58.775 18:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:58.775 18:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:58.775 18:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:58.775 18:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:58.775 18:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:58.775 18:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:58.775 18:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:58.775 18:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:58.775 18:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:58.775 18:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:58.775 18:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:58.775 18:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.775 18:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.034 18:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:59.034 "name": "Existed_Raid", 00:15:59.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.034 "strip_size_kb": 64, 00:15:59.034 "state": "configuring", 00:15:59.034 "raid_level": "concat", 00:15:59.034 "superblock": false, 00:15:59.034 "num_base_bdevs": 2, 00:15:59.034 "num_base_bdevs_discovered": 1, 00:15:59.034 "num_base_bdevs_operational": 2, 00:15:59.034 "base_bdevs_list": [ 00:15:59.034 { 00:15:59.034 "name": 
"BaseBdev1", 00:15:59.034 "uuid": "8d891355-8a96-467d-8a45-99392e6bb1f0", 00:15:59.034 "is_configured": true, 00:15:59.034 "data_offset": 0, 00:15:59.034 "data_size": 65536 00:15:59.034 }, 00:15:59.034 { 00:15:59.034 "name": "BaseBdev2", 00:15:59.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.034 "is_configured": false, 00:15:59.034 "data_offset": 0, 00:15:59.034 "data_size": 0 00:15:59.034 } 00:15:59.034 ] 00:15:59.034 }' 00:15:59.034 18:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:59.034 18:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.602 18:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:59.602 [2024-07-25 18:43:00.063210] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:59.602 [2024-07-25 18:43:00.063483] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:15:59.602 18:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:59.860 [2024-07-25 18:43:00.243311] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.860 [2024-07-25 18:43:00.245744] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:59.860 [2024-07-25 18:43:00.245937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:59.860 18:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:59.860 18:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:59.860 18:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:59.860 18:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:59.860 18:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:59.860 18:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:59.860 18:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:59.860 18:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:59.860 18:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:59.860 18:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:59.860 18:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:59.860 18:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:59.860 18:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.860 18:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.119 18:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:00.119 "name": "Existed_Raid", 
00:16:00.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.119 "strip_size_kb": 64, 00:16:00.119 "state": "configuring", 00:16:00.119 "raid_level": "concat", 00:16:00.119 "superblock": false, 00:16:00.119 "num_base_bdevs": 2, 00:16:00.119 "num_base_bdevs_discovered": 1, 00:16:00.119 "num_base_bdevs_operational": 2, 00:16:00.119 "base_bdevs_list": [ 00:16:00.119 { 00:16:00.119 "name": "BaseBdev1", 00:16:00.119 "uuid": "8d891355-8a96-467d-8a45-99392e6bb1f0", 00:16:00.119 "is_configured": true, 00:16:00.119 "data_offset": 0, 00:16:00.119 "data_size": 65536 00:16:00.119 }, 00:16:00.119 { 00:16:00.119 "name": "BaseBdev2", 00:16:00.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.119 "is_configured": false, 00:16:00.119 "data_offset": 0, 00:16:00.119 "data_size": 0 00:16:00.119 } 00:16:00.119 ] 00:16:00.119 }' 00:16:00.119 18:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:00.119 18:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.686 18:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:00.686 [2024-07-25 18:43:01.195509] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:00.686 [2024-07-25 18:43:01.195781] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:16:00.686 [2024-07-25 18:43:01.195822] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:00.686 [2024-07-25 18:43:01.196061] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:16:00.686 [2024-07-25 18:43:01.196514] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:16:00.686 [2024-07-25 18:43:01.196623] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:16:00.686 [2024-07-25 18:43:01.197024] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.686 BaseBdev2 00:16:00.686 18:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:00.686 18:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:00.686 18:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:00.686 18:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:00.686 18:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:00.686 18:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:00.686 18:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:00.945 18:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:01.204 [ 00:16:01.204 { 00:16:01.204 "name": "BaseBdev2", 00:16:01.204 "aliases": [ 00:16:01.204 "e490eda9-bad9-4c73-ba9b-f6660660c4ae" 00:16:01.204 ], 00:16:01.204 "product_name": "Malloc disk", 00:16:01.204 "block_size": 512, 00:16:01.204 "num_blocks": 65536, 00:16:01.204 "uuid": "e490eda9-bad9-4c73-ba9b-f6660660c4ae", 
00:16:01.204 "assigned_rate_limits": { 00:16:01.204 "rw_ios_per_sec": 0, 00:16:01.204 "rw_mbytes_per_sec": 0, 00:16:01.204 "r_mbytes_per_sec": 0, 00:16:01.204 "w_mbytes_per_sec": 0 00:16:01.204 }, 00:16:01.204 "claimed": true, 00:16:01.204 "claim_type": "exclusive_write", 00:16:01.204 "zoned": false, 00:16:01.204 "supported_io_types": { 00:16:01.204 "read": true, 00:16:01.204 "write": true, 00:16:01.204 "unmap": true, 00:16:01.204 "flush": true, 00:16:01.204 "reset": true, 00:16:01.204 "nvme_admin": false, 00:16:01.204 "nvme_io": false, 00:16:01.204 "nvme_io_md": false, 00:16:01.204 "write_zeroes": true, 00:16:01.204 "zcopy": true, 00:16:01.204 "get_zone_info": false, 00:16:01.204 "zone_management": false, 00:16:01.204 "zone_append": false, 00:16:01.204 "compare": false, 00:16:01.204 "compare_and_write": false, 00:16:01.204 "abort": true, 00:16:01.204 "seek_hole": false, 00:16:01.204 "seek_data": false, 00:16:01.204 "copy": true, 00:16:01.204 "nvme_iov_md": false 00:16:01.204 }, 00:16:01.204 "memory_domains": [ 00:16:01.204 { 00:16:01.204 "dma_device_id": "system", 00:16:01.204 "dma_device_type": 1 00:16:01.204 }, 00:16:01.204 { 00:16:01.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.204 "dma_device_type": 2 00:16:01.204 } 00:16:01.204 ], 00:16:01.204 "driver_specific": {} 00:16:01.204 } 00:16:01.204 ] 00:16:01.204 18:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:01.204 18:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:01.204 18:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:01.204 18:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:16:01.204 18:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:01.204 18:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:01.204 18:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:01.204 18:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:01.204 18:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:01.204 18:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:01.204 18:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:01.204 18:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:01.204 18:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:01.204 18:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.204 18:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.462 18:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:01.462 "name": "Existed_Raid", 00:16:01.462 "uuid": "a8b97d24-5af7-4061-adab-afc2cccbf175", 00:16:01.462 "strip_size_kb": 64, 00:16:01.462 "state": "online", 00:16:01.462 "raid_level": "concat", 00:16:01.462 "superblock": false, 00:16:01.462 "num_base_bdevs": 2, 00:16:01.462 "num_base_bdevs_discovered": 2, 00:16:01.462 
"num_base_bdevs_operational": 2, 00:16:01.462 "base_bdevs_list": [ 00:16:01.462 { 00:16:01.462 "name": "BaseBdev1", 00:16:01.462 "uuid": "8d891355-8a96-467d-8a45-99392e6bb1f0", 00:16:01.462 "is_configured": true, 00:16:01.462 "data_offset": 0, 00:16:01.462 "data_size": 65536 00:16:01.462 }, 00:16:01.462 { 00:16:01.462 "name": "BaseBdev2", 00:16:01.462 "uuid": "e490eda9-bad9-4c73-ba9b-f6660660c4ae", 00:16:01.462 "is_configured": true, 00:16:01.462 "data_offset": 0, 00:16:01.462 "data_size": 65536 00:16:01.462 } 00:16:01.462 ] 00:16:01.462 }' 00:16:01.462 18:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:01.462 18:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.030 18:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:02.030 18:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:02.030 18:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:02.030 18:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:02.030 18:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:02.030 18:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:02.030 18:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:02.030 18:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:02.030 [2024-07-25 18:43:02.592032] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:02.288 18:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:02.288 "name": "Existed_Raid", 00:16:02.288 "aliases": [ 00:16:02.288 "a8b97d24-5af7-4061-adab-afc2cccbf175" 00:16:02.288 ], 00:16:02.288 "product_name": "Raid Volume", 00:16:02.288 "block_size": 512, 00:16:02.288 "num_blocks": 131072, 00:16:02.288 "uuid": "a8b97d24-5af7-4061-adab-afc2cccbf175", 00:16:02.288 "assigned_rate_limits": { 00:16:02.288 "rw_ios_per_sec": 0, 00:16:02.288 "rw_mbytes_per_sec": 0, 00:16:02.288 "r_mbytes_per_sec": 0, 00:16:02.288 "w_mbytes_per_sec": 0 00:16:02.288 }, 00:16:02.288 "claimed": false, 00:16:02.288 "zoned": false, 00:16:02.288 "supported_io_types": { 00:16:02.288 "read": true, 00:16:02.288 "write": true, 00:16:02.288 "unmap": true, 00:16:02.288 "flush": true, 00:16:02.288 "reset": true, 00:16:02.288 "nvme_admin": false, 00:16:02.288 "nvme_io": false, 00:16:02.288 "nvme_io_md": false, 00:16:02.288 "write_zeroes": true, 00:16:02.288 "zcopy": false, 00:16:02.288 "get_zone_info": false, 00:16:02.288 "zone_management": false, 00:16:02.288 "zone_append": false, 00:16:02.288 "compare": false, 00:16:02.288 "compare_and_write": false, 00:16:02.288 "abort": false, 00:16:02.288 "seek_hole": false, 00:16:02.288 "seek_data": false, 00:16:02.288 "copy": false, 00:16:02.288 "nvme_iov_md": false 00:16:02.288 }, 00:16:02.288 "memory_domains": [ 00:16:02.288 { 00:16:02.288 "dma_device_id": "system", 00:16:02.288 "dma_device_type": 1 00:16:02.288 }, 00:16:02.288 { 00:16:02.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.288 "dma_device_type": 2 00:16:02.288 }, 00:16:02.288 { 00:16:02.288 "dma_device_id": "system", 00:16:02.288 "dma_device_type": 1 00:16:02.288 }, 
00:16:02.288 { 00:16:02.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.288 "dma_device_type": 2 00:16:02.288 } 00:16:02.288 ], 00:16:02.288 "driver_specific": { 00:16:02.288 "raid": { 00:16:02.288 "uuid": "a8b97d24-5af7-4061-adab-afc2cccbf175", 00:16:02.288 "strip_size_kb": 64, 00:16:02.288 "state": "online", 00:16:02.288 "raid_level": "concat", 00:16:02.288 "superblock": false, 00:16:02.288 "num_base_bdevs": 2, 00:16:02.288 "num_base_bdevs_discovered": 2, 00:16:02.288 "num_base_bdevs_operational": 2, 00:16:02.288 "base_bdevs_list": [ 00:16:02.288 { 00:16:02.288 "name": "BaseBdev1", 00:16:02.288 "uuid": "8d891355-8a96-467d-8a45-99392e6bb1f0", 00:16:02.288 "is_configured": true, 00:16:02.288 "data_offset": 0, 00:16:02.288 "data_size": 65536 00:16:02.288 }, 00:16:02.288 { 00:16:02.288 "name": "BaseBdev2", 00:16:02.288 "uuid": "e490eda9-bad9-4c73-ba9b-f6660660c4ae", 00:16:02.288 "is_configured": true, 00:16:02.288 "data_offset": 0, 00:16:02.288 "data_size": 65536 00:16:02.288 } 00:16:02.288 ] 00:16:02.288 } 00:16:02.288 } 00:16:02.288 }' 00:16:02.288 18:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:02.288 18:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:02.288 BaseBdev2' 00:16:02.288 18:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:02.288 18:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:02.288 18:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:02.288 18:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:02.288 "name": "BaseBdev1", 00:16:02.288 "aliases": [ 00:16:02.288 "8d891355-8a96-467d-8a45-99392e6bb1f0" 00:16:02.288 ], 00:16:02.288 "product_name": "Malloc disk", 00:16:02.288 "block_size": 512, 00:16:02.288 "num_blocks": 65536, 00:16:02.288 "uuid": "8d891355-8a96-467d-8a45-99392e6bb1f0", 00:16:02.288 "assigned_rate_limits": { 00:16:02.288 "rw_ios_per_sec": 0, 00:16:02.288 "rw_mbytes_per_sec": 0, 00:16:02.288 "r_mbytes_per_sec": 0, 00:16:02.288 "w_mbytes_per_sec": 0 00:16:02.288 }, 00:16:02.288 "claimed": true, 00:16:02.288 "claim_type": "exclusive_write", 00:16:02.288 "zoned": false, 00:16:02.288 "supported_io_types": { 00:16:02.288 "read": true, 00:16:02.288 "write": true, 00:16:02.288 "unmap": true, 00:16:02.288 "flush": true, 00:16:02.288 "reset": true, 00:16:02.288 "nvme_admin": false, 00:16:02.288 "nvme_io": false, 00:16:02.288 "nvme_io_md": false, 00:16:02.288 "write_zeroes": true, 00:16:02.288 "zcopy": true, 00:16:02.288 "get_zone_info": false, 00:16:02.288 "zone_management": false, 00:16:02.288 "zone_append": false, 00:16:02.288 "compare": false, 00:16:02.288 "compare_and_write": false, 00:16:02.288 "abort": true, 00:16:02.288 "seek_hole": false, 00:16:02.288 "seek_data": false, 00:16:02.288 "copy": true, 00:16:02.288 "nvme_iov_md": false 00:16:02.288 }, 00:16:02.288 "memory_domains": [ 00:16:02.288 { 00:16:02.288 "dma_device_id": "system", 00:16:02.288 "dma_device_type": 1 00:16:02.288 }, 00:16:02.288 { 00:16:02.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.288 "dma_device_type": 2 00:16:02.288 } 00:16:02.288 ], 00:16:02.288 "driver_specific": {} 00:16:02.288 }' 00:16:02.288 18:43:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:02.288 18:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:02.547 18:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:02.547 18:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:02.547 18:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:02.547 18:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:02.547 18:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:02.547 18:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:02.547 18:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:02.547 18:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:02.547 18:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:02.806 18:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:02.806 18:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:02.806 18:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:02.806 18:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:03.064 18:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:03.064 "name": "BaseBdev2", 00:16:03.064 "aliases": [ 00:16:03.064 "e490eda9-bad9-4c73-ba9b-f6660660c4ae" 00:16:03.064 ], 00:16:03.064 "product_name": "Malloc disk", 00:16:03.064 "block_size": 512, 00:16:03.064 "num_blocks": 65536, 00:16:03.064 "uuid": "e490eda9-bad9-4c73-ba9b-f6660660c4ae", 00:16:03.064 "assigned_rate_limits": { 00:16:03.064 "rw_ios_per_sec": 0, 00:16:03.064 "rw_mbytes_per_sec": 0, 00:16:03.064 "r_mbytes_per_sec": 0, 00:16:03.064 "w_mbytes_per_sec": 0 00:16:03.064 }, 00:16:03.064 "claimed": true, 00:16:03.064 "claim_type": "exclusive_write", 00:16:03.064 "zoned": false, 00:16:03.064 "supported_io_types": { 00:16:03.064 "read": true, 00:16:03.064 "write": true, 00:16:03.064 "unmap": true, 00:16:03.064 "flush": true, 00:16:03.064 "reset": true, 00:16:03.064 "nvme_admin": false, 00:16:03.064 "nvme_io": false, 00:16:03.064 "nvme_io_md": false, 00:16:03.064 "write_zeroes": true, 00:16:03.064 "zcopy": true, 00:16:03.064 "get_zone_info": false, 00:16:03.064 "zone_management": false, 00:16:03.064 "zone_append": false, 00:16:03.064 "compare": false, 00:16:03.064 "compare_and_write": false, 00:16:03.064 "abort": true, 00:16:03.064 "seek_hole": false, 00:16:03.064 "seek_data": false, 00:16:03.064 "copy": true, 00:16:03.064 "nvme_iov_md": false 00:16:03.064 }, 00:16:03.064 "memory_domains": [ 00:16:03.064 { 00:16:03.064 "dma_device_id": "system", 00:16:03.064 "dma_device_type": 1 00:16:03.064 }, 00:16:03.064 { 00:16:03.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.064 "dma_device_type": 2 00:16:03.064 } 00:16:03.064 ], 00:16:03.064 "driver_specific": {} 00:16:03.064 }' 00:16:03.064 18:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:03.064 18:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:03.064 18:43:03 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:03.064 18:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:03.064 18:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:03.064 18:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:03.064 18:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:03.322 18:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:03.322 18:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:03.322 18:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:03.322 18:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:03.322 18:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:03.322 18:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:03.580 [2024-07-25 18:43:04.068187] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:03.580 [2024-07-25 18:43:04.068411] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:03.580 [2024-07-25 18:43:04.068616] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:03.838 18:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:03.838 18:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:16:03.838 18:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:03.838 18:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:03.838 18:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:03.838 18:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:16:03.838 18:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:03.838 18:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:03.838 18:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:03.838 18:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:03.838 18:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:03.838 18:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:03.838 18:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:03.838 18:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:03.838 18:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.838 18:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.838 18:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.096 18:43:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:04.096 "name": "Existed_Raid", 00:16:04.096 "uuid": "a8b97d24-5af7-4061-adab-afc2cccbf175", 00:16:04.096 "strip_size_kb": 64, 00:16:04.096 "state": "offline", 00:16:04.096 "raid_level": "concat", 00:16:04.096 "superblock": false, 00:16:04.096 "num_base_bdevs": 2, 00:16:04.096 "num_base_bdevs_discovered": 1, 00:16:04.096 "num_base_bdevs_operational": 1, 00:16:04.096 "base_bdevs_list": [ 00:16:04.096 { 00:16:04.096 "name": null, 00:16:04.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.096 "is_configured": false, 00:16:04.096 "data_offset": 0, 00:16:04.096 "data_size": 65536 00:16:04.096 }, 00:16:04.096 { 00:16:04.096 "name": "BaseBdev2", 00:16:04.096 "uuid": "e490eda9-bad9-4c73-ba9b-f6660660c4ae", 00:16:04.096 "is_configured": true, 00:16:04.096 "data_offset": 0, 00:16:04.096 "data_size": 65536 00:16:04.096 } 00:16:04.096 ] 00:16:04.096 }' 00:16:04.096 18:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:04.096 18:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.665 18:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:04.665 18:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:04.665 18:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.665 18:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:04.665 18:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:04.665 18:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:04.665 18:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:04.948 [2024-07-25 18:43:05.368698] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:04.948 [2024-07-25 18:43:05.368977] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:16:04.948 18:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:04.948 18:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:04.948 18:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:04.948 18:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.228 18:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:05.228 18:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:05.228 18:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:05.228 18:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 121578 00:16:05.228 18:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 121578 ']' 00:16:05.228 18:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 121578 00:16:05.228 18:43:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@955 -- # uname 00:16:05.228 18:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:05.228 18:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 121578 00:16:05.228 killing process with pid 121578 00:16:05.228 18:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:05.228 18:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:05.228 18:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 121578' 00:16:05.228 18:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 121578 00:16:05.228 18:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 121578 00:16:05.228 [2024-07-25 18:43:05.702409] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:05.228 [2024-07-25 18:43:05.702544] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:06.600 ************************************ 00:16:06.600 END TEST raid_state_function_test 00:16:06.600 ************************************ 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:06.600 00:16:06.600 real 0m10.895s 00:16:06.600 user 0m18.316s 00:16:06.600 sys 0m1.897s 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.600 18:43:06 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:16:06.600 18:43:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:06.600 18:43:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:06.600 18:43:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:06.600 ************************************ 00:16:06.600 START TEST raid_state_function_test_sb 00:16:06.600 ************************************ 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:06.600 
18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:06.600 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:06.601 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:06.601 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=121954 00:16:06.601 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 121954' 00:16:06.601 Process raid pid: 121954 00:16:06.601 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 121954 /var/tmp/spdk-raid.sock 00:16:06.601 18:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:06.601 18:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 121954 ']' 00:16:06.601 18:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:06.601 18:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:06.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:06.601 18:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:06.601 18:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:06.601 18:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.601 [2024-07-25 18:43:07.053484] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
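The raid_state_function_test_sb run starting above repeats the same flow with superblock=true; the only change on the wire is the extra -s flag on bdev_raid_create, as in the command that appears later in this trace:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

With the on-disk superblock each 65536-block malloc member contributes data_offset 2048 and data_size 63488, so the assembled array reports 126976 blocks instead of the 131072 seen in the non-superblock run.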
00:16:06.601 [2024-07-25 18:43:07.054930] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.858 [2024-07-25 18:43:07.240691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.115 [2024-07-25 18:43:07.435793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.115 [2024-07-25 18:43:07.630105] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.373 18:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:07.373 18:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:07.373 18:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:07.631 [2024-07-25 18:43:08.126541] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:07.631 [2024-07-25 18:43:08.126870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:07.631 [2024-07-25 18:43:08.126960] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:07.631 [2024-07-25 18:43:08.127021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:07.631 18:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:07.631 18:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:07.631 18:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:07.631 18:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:07.631 18:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:07.631 18:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:07.631 18:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:07.631 18:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:07.631 18:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:07.631 18:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:07.631 18:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.631 18:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.890 18:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:07.890 "name": "Existed_Raid", 00:16:07.890 "uuid": "56b12b35-e205-41aa-ba41-f10753c94e31", 00:16:07.890 "strip_size_kb": 64, 00:16:07.890 "state": "configuring", 00:16:07.890 "raid_level": "concat", 00:16:07.890 "superblock": true, 00:16:07.890 "num_base_bdevs": 2, 00:16:07.890 "num_base_bdevs_discovered": 0, 00:16:07.890 
"num_base_bdevs_operational": 2, 00:16:07.890 "base_bdevs_list": [ 00:16:07.890 { 00:16:07.890 "name": "BaseBdev1", 00:16:07.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.890 "is_configured": false, 00:16:07.890 "data_offset": 0, 00:16:07.890 "data_size": 0 00:16:07.890 }, 00:16:07.890 { 00:16:07.890 "name": "BaseBdev2", 00:16:07.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.890 "is_configured": false, 00:16:07.890 "data_offset": 0, 00:16:07.890 "data_size": 0 00:16:07.890 } 00:16:07.890 ] 00:16:07.890 }' 00:16:07.890 18:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:07.890 18:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.457 18:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:08.716 [2024-07-25 18:43:09.154653] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:08.716 [2024-07-25 18:43:09.154907] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:16:08.716 18:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:08.975 [2024-07-25 18:43:09.338709] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:08.975 [2024-07-25 18:43:09.338938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:08.975 [2024-07-25 18:43:09.339039] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:08.975 [2024-07-25 18:43:09.339141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:08.975 18:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:09.233 [2024-07-25 18:43:09.634459] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.233 BaseBdev1 00:16:09.233 18:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:09.233 18:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:09.233 18:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:09.233 18:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:09.233 18:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:09.233 18:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:09.233 18:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:09.492 18:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:09.492 [ 00:16:09.492 { 00:16:09.492 "name": "BaseBdev1", 00:16:09.492 "aliases": [ 00:16:09.492 "57633996-b5f8-4c3f-938e-2e5faef6979b" 
00:16:09.492 ], 00:16:09.492 "product_name": "Malloc disk", 00:16:09.492 "block_size": 512, 00:16:09.492 "num_blocks": 65536, 00:16:09.492 "uuid": "57633996-b5f8-4c3f-938e-2e5faef6979b", 00:16:09.492 "assigned_rate_limits": { 00:16:09.492 "rw_ios_per_sec": 0, 00:16:09.492 "rw_mbytes_per_sec": 0, 00:16:09.492 "r_mbytes_per_sec": 0, 00:16:09.492 "w_mbytes_per_sec": 0 00:16:09.492 }, 00:16:09.492 "claimed": true, 00:16:09.492 "claim_type": "exclusive_write", 00:16:09.492 "zoned": false, 00:16:09.492 "supported_io_types": { 00:16:09.492 "read": true, 00:16:09.492 "write": true, 00:16:09.492 "unmap": true, 00:16:09.492 "flush": true, 00:16:09.492 "reset": true, 00:16:09.492 "nvme_admin": false, 00:16:09.492 "nvme_io": false, 00:16:09.492 "nvme_io_md": false, 00:16:09.492 "write_zeroes": true, 00:16:09.492 "zcopy": true, 00:16:09.492 "get_zone_info": false, 00:16:09.492 "zone_management": false, 00:16:09.492 "zone_append": false, 00:16:09.492 "compare": false, 00:16:09.492 "compare_and_write": false, 00:16:09.492 "abort": true, 00:16:09.492 "seek_hole": false, 00:16:09.492 "seek_data": false, 00:16:09.492 "copy": true, 00:16:09.492 "nvme_iov_md": false 00:16:09.492 }, 00:16:09.492 "memory_domains": [ 00:16:09.492 { 00:16:09.492 "dma_device_id": "system", 00:16:09.492 "dma_device_type": 1 00:16:09.492 }, 00:16:09.492 { 00:16:09.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.492 "dma_device_type": 2 00:16:09.492 } 00:16:09.492 ], 00:16:09.492 "driver_specific": {} 00:16:09.492 } 00:16:09.492 ] 00:16:09.492 18:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:09.492 18:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:09.492 18:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:09.492 18:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:09.492 18:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:09.492 18:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:09.492 18:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:09.492 18:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:09.492 18:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:09.492 18:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:09.492 18:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:09.492 18:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.492 18:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.767 18:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:09.767 "name": "Existed_Raid", 00:16:09.767 "uuid": "ec723d62-a5e0-406d-8fc2-c13dce58fb8b", 00:16:09.767 "strip_size_kb": 64, 00:16:09.767 "state": "configuring", 00:16:09.767 "raid_level": "concat", 00:16:09.767 "superblock": true, 00:16:09.767 "num_base_bdevs": 2, 00:16:09.767 
"num_base_bdevs_discovered": 1, 00:16:09.767 "num_base_bdevs_operational": 2, 00:16:09.767 "base_bdevs_list": [ 00:16:09.767 { 00:16:09.767 "name": "BaseBdev1", 00:16:09.767 "uuid": "57633996-b5f8-4c3f-938e-2e5faef6979b", 00:16:09.767 "is_configured": true, 00:16:09.767 "data_offset": 2048, 00:16:09.767 "data_size": 63488 00:16:09.767 }, 00:16:09.767 { 00:16:09.767 "name": "BaseBdev2", 00:16:09.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.767 "is_configured": false, 00:16:09.767 "data_offset": 0, 00:16:09.767 "data_size": 0 00:16:09.767 } 00:16:09.767 ] 00:16:09.767 }' 00:16:09.767 18:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:09.767 18:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.334 18:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:10.334 [2024-07-25 18:43:10.890744] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:10.334 [2024-07-25 18:43:10.890993] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:16:10.334 18:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:10.593 [2024-07-25 18:43:11.066835] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:10.593 [2024-07-25 18:43:11.069243] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:10.593 [2024-07-25 18:43:11.069420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:10.593 18:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:10.593 18:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:10.593 18:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:10.593 18:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:10.593 18:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:10.593 18:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:10.593 18:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:10.593 18:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:10.593 18:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:10.593 18:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:10.593 18:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:10.593 18:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:10.593 18:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.593 18:43:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.851 18:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:10.851 "name": "Existed_Raid", 00:16:10.851 "uuid": "d6218149-baf9-4ee6-98f8-15fff99f7f87", 00:16:10.851 "strip_size_kb": 64, 00:16:10.851 "state": "configuring", 00:16:10.851 "raid_level": "concat", 00:16:10.851 "superblock": true, 00:16:10.851 "num_base_bdevs": 2, 00:16:10.851 "num_base_bdevs_discovered": 1, 00:16:10.851 "num_base_bdevs_operational": 2, 00:16:10.851 "base_bdevs_list": [ 00:16:10.851 { 00:16:10.851 "name": "BaseBdev1", 00:16:10.851 "uuid": "57633996-b5f8-4c3f-938e-2e5faef6979b", 00:16:10.851 "is_configured": true, 00:16:10.851 "data_offset": 2048, 00:16:10.851 "data_size": 63488 00:16:10.851 }, 00:16:10.851 { 00:16:10.851 "name": "BaseBdev2", 00:16:10.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.851 "is_configured": false, 00:16:10.851 "data_offset": 0, 00:16:10.851 "data_size": 0 00:16:10.851 } 00:16:10.851 ] 00:16:10.851 }' 00:16:10.851 18:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:10.851 18:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.419 18:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:11.679 [2024-07-25 18:43:12.019744] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:11.679 [2024-07-25 18:43:12.020273] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:16:11.679 [2024-07-25 18:43:12.020383] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:11.679 [2024-07-25 18:43:12.020558] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:16:11.679 [2024-07-25 18:43:12.020998] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:16:11.679 [2024-07-25 18:43:12.021039] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:16:11.679 [2024-07-25 18:43:12.021275] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.679 BaseBdev2 00:16:11.679 18:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:11.679 18:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:11.679 18:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:11.679 18:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:11.679 18:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:11.679 18:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:11.679 18:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:11.941 18:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:11.941 [ 00:16:11.941 { 00:16:11.941 "name": "BaseBdev2", 00:16:11.941 
"aliases": [ 00:16:11.941 "5daad860-2efa-454b-b838-8cb35d0eb3ec" 00:16:11.941 ], 00:16:11.941 "product_name": "Malloc disk", 00:16:11.941 "block_size": 512, 00:16:11.941 "num_blocks": 65536, 00:16:11.941 "uuid": "5daad860-2efa-454b-b838-8cb35d0eb3ec", 00:16:11.941 "assigned_rate_limits": { 00:16:11.941 "rw_ios_per_sec": 0, 00:16:11.941 "rw_mbytes_per_sec": 0, 00:16:11.941 "r_mbytes_per_sec": 0, 00:16:11.941 "w_mbytes_per_sec": 0 00:16:11.941 }, 00:16:11.941 "claimed": true, 00:16:11.941 "claim_type": "exclusive_write", 00:16:11.941 "zoned": false, 00:16:11.941 "supported_io_types": { 00:16:11.941 "read": true, 00:16:11.941 "write": true, 00:16:11.941 "unmap": true, 00:16:11.941 "flush": true, 00:16:11.941 "reset": true, 00:16:11.941 "nvme_admin": false, 00:16:11.941 "nvme_io": false, 00:16:11.941 "nvme_io_md": false, 00:16:11.941 "write_zeroes": true, 00:16:11.941 "zcopy": true, 00:16:11.941 "get_zone_info": false, 00:16:11.941 "zone_management": false, 00:16:11.941 "zone_append": false, 00:16:11.941 "compare": false, 00:16:11.941 "compare_and_write": false, 00:16:11.941 "abort": true, 00:16:11.941 "seek_hole": false, 00:16:11.941 "seek_data": false, 00:16:11.941 "copy": true, 00:16:11.941 "nvme_iov_md": false 00:16:11.941 }, 00:16:11.941 "memory_domains": [ 00:16:11.941 { 00:16:11.941 "dma_device_id": "system", 00:16:11.941 "dma_device_type": 1 00:16:11.941 }, 00:16:11.941 { 00:16:11.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.941 "dma_device_type": 2 00:16:11.941 } 00:16:11.941 ], 00:16:11.941 "driver_specific": {} 00:16:11.941 } 00:16:11.941 ] 00:16:11.941 18:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:11.941 18:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:11.941 18:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:11.941 18:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:16:11.941 18:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:11.941 18:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:11.941 18:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:11.941 18:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:11.941 18:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:11.941 18:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:11.941 18:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:11.941 18:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:11.941 18:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:11.941 18:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.941 18:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.200 18:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:12.200 "name": "Existed_Raid", 
00:16:12.200 "uuid": "d6218149-baf9-4ee6-98f8-15fff99f7f87", 00:16:12.200 "strip_size_kb": 64, 00:16:12.200 "state": "online", 00:16:12.200 "raid_level": "concat", 00:16:12.200 "superblock": true, 00:16:12.200 "num_base_bdevs": 2, 00:16:12.200 "num_base_bdevs_discovered": 2, 00:16:12.200 "num_base_bdevs_operational": 2, 00:16:12.200 "base_bdevs_list": [ 00:16:12.200 { 00:16:12.200 "name": "BaseBdev1", 00:16:12.200 "uuid": "57633996-b5f8-4c3f-938e-2e5faef6979b", 00:16:12.200 "is_configured": true, 00:16:12.200 "data_offset": 2048, 00:16:12.200 "data_size": 63488 00:16:12.200 }, 00:16:12.200 { 00:16:12.200 "name": "BaseBdev2", 00:16:12.200 "uuid": "5daad860-2efa-454b-b838-8cb35d0eb3ec", 00:16:12.200 "is_configured": true, 00:16:12.200 "data_offset": 2048, 00:16:12.200 "data_size": 63488 00:16:12.200 } 00:16:12.200 ] 00:16:12.200 }' 00:16:12.200 18:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:12.200 18:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.768 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:12.768 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:12.768 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:12.768 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:12.768 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:12.768 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:12.768 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:12.768 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:13.027 [2024-07-25 18:43:13.440285] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:13.027 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:13.027 "name": "Existed_Raid", 00:16:13.027 "aliases": [ 00:16:13.027 "d6218149-baf9-4ee6-98f8-15fff99f7f87" 00:16:13.027 ], 00:16:13.027 "product_name": "Raid Volume", 00:16:13.027 "block_size": 512, 00:16:13.027 "num_blocks": 126976, 00:16:13.027 "uuid": "d6218149-baf9-4ee6-98f8-15fff99f7f87", 00:16:13.027 "assigned_rate_limits": { 00:16:13.027 "rw_ios_per_sec": 0, 00:16:13.027 "rw_mbytes_per_sec": 0, 00:16:13.027 "r_mbytes_per_sec": 0, 00:16:13.027 "w_mbytes_per_sec": 0 00:16:13.027 }, 00:16:13.027 "claimed": false, 00:16:13.027 "zoned": false, 00:16:13.027 "supported_io_types": { 00:16:13.027 "read": true, 00:16:13.027 "write": true, 00:16:13.027 "unmap": true, 00:16:13.027 "flush": true, 00:16:13.027 "reset": true, 00:16:13.027 "nvme_admin": false, 00:16:13.027 "nvme_io": false, 00:16:13.027 "nvme_io_md": false, 00:16:13.027 "write_zeroes": true, 00:16:13.027 "zcopy": false, 00:16:13.027 "get_zone_info": false, 00:16:13.027 "zone_management": false, 00:16:13.027 "zone_append": false, 00:16:13.027 "compare": false, 00:16:13.027 "compare_and_write": false, 00:16:13.027 "abort": false, 00:16:13.027 "seek_hole": false, 00:16:13.027 "seek_data": false, 00:16:13.027 "copy": false, 00:16:13.027 "nvme_iov_md": false 00:16:13.027 }, 00:16:13.027 "memory_domains": [ 
00:16:13.027 { 00:16:13.027 "dma_device_id": "system", 00:16:13.027 "dma_device_type": 1 00:16:13.027 }, 00:16:13.027 { 00:16:13.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.027 "dma_device_type": 2 00:16:13.027 }, 00:16:13.027 { 00:16:13.027 "dma_device_id": "system", 00:16:13.027 "dma_device_type": 1 00:16:13.027 }, 00:16:13.027 { 00:16:13.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.027 "dma_device_type": 2 00:16:13.027 } 00:16:13.027 ], 00:16:13.027 "driver_specific": { 00:16:13.027 "raid": { 00:16:13.027 "uuid": "d6218149-baf9-4ee6-98f8-15fff99f7f87", 00:16:13.027 "strip_size_kb": 64, 00:16:13.027 "state": "online", 00:16:13.027 "raid_level": "concat", 00:16:13.027 "superblock": true, 00:16:13.027 "num_base_bdevs": 2, 00:16:13.027 "num_base_bdevs_discovered": 2, 00:16:13.027 "num_base_bdevs_operational": 2, 00:16:13.027 "base_bdevs_list": [ 00:16:13.027 { 00:16:13.027 "name": "BaseBdev1", 00:16:13.027 "uuid": "57633996-b5f8-4c3f-938e-2e5faef6979b", 00:16:13.027 "is_configured": true, 00:16:13.027 "data_offset": 2048, 00:16:13.027 "data_size": 63488 00:16:13.027 }, 00:16:13.027 { 00:16:13.027 "name": "BaseBdev2", 00:16:13.027 "uuid": "5daad860-2efa-454b-b838-8cb35d0eb3ec", 00:16:13.027 "is_configured": true, 00:16:13.027 "data_offset": 2048, 00:16:13.027 "data_size": 63488 00:16:13.027 } 00:16:13.027 ] 00:16:13.027 } 00:16:13.027 } 00:16:13.027 }' 00:16:13.027 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:13.027 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:13.027 BaseBdev2' 00:16:13.027 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:13.027 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:13.027 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:13.286 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:13.286 "name": "BaseBdev1", 00:16:13.286 "aliases": [ 00:16:13.286 "57633996-b5f8-4c3f-938e-2e5faef6979b" 00:16:13.286 ], 00:16:13.286 "product_name": "Malloc disk", 00:16:13.286 "block_size": 512, 00:16:13.286 "num_blocks": 65536, 00:16:13.286 "uuid": "57633996-b5f8-4c3f-938e-2e5faef6979b", 00:16:13.286 "assigned_rate_limits": { 00:16:13.286 "rw_ios_per_sec": 0, 00:16:13.286 "rw_mbytes_per_sec": 0, 00:16:13.286 "r_mbytes_per_sec": 0, 00:16:13.286 "w_mbytes_per_sec": 0 00:16:13.286 }, 00:16:13.286 "claimed": true, 00:16:13.286 "claim_type": "exclusive_write", 00:16:13.286 "zoned": false, 00:16:13.286 "supported_io_types": { 00:16:13.286 "read": true, 00:16:13.286 "write": true, 00:16:13.286 "unmap": true, 00:16:13.286 "flush": true, 00:16:13.286 "reset": true, 00:16:13.286 "nvme_admin": false, 00:16:13.286 "nvme_io": false, 00:16:13.286 "nvme_io_md": false, 00:16:13.286 "write_zeroes": true, 00:16:13.286 "zcopy": true, 00:16:13.286 "get_zone_info": false, 00:16:13.286 "zone_management": false, 00:16:13.286 "zone_append": false, 00:16:13.286 "compare": false, 00:16:13.286 "compare_and_write": false, 00:16:13.286 "abort": true, 00:16:13.286 "seek_hole": false, 00:16:13.286 "seek_data": false, 00:16:13.286 "copy": true, 00:16:13.286 "nvme_iov_md": false 00:16:13.286 }, 00:16:13.286 "memory_domains": [ 
00:16:13.286 { 00:16:13.286 "dma_device_id": "system", 00:16:13.286 "dma_device_type": 1 00:16:13.286 }, 00:16:13.286 { 00:16:13.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.286 "dma_device_type": 2 00:16:13.286 } 00:16:13.286 ], 00:16:13.286 "driver_specific": {} 00:16:13.286 }' 00:16:13.286 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:13.286 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:13.286 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:13.286 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:13.545 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:13.545 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:13.545 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:13.545 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:13.546 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:13.546 18:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:13.546 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:13.546 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:13.546 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:13.546 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:13.546 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:13.805 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:13.805 "name": "BaseBdev2", 00:16:13.805 "aliases": [ 00:16:13.805 "5daad860-2efa-454b-b838-8cb35d0eb3ec" 00:16:13.805 ], 00:16:13.805 "product_name": "Malloc disk", 00:16:13.805 "block_size": 512, 00:16:13.805 "num_blocks": 65536, 00:16:13.805 "uuid": "5daad860-2efa-454b-b838-8cb35d0eb3ec", 00:16:13.805 "assigned_rate_limits": { 00:16:13.805 "rw_ios_per_sec": 0, 00:16:13.805 "rw_mbytes_per_sec": 0, 00:16:13.805 "r_mbytes_per_sec": 0, 00:16:13.805 "w_mbytes_per_sec": 0 00:16:13.805 }, 00:16:13.805 "claimed": true, 00:16:13.805 "claim_type": "exclusive_write", 00:16:13.805 "zoned": false, 00:16:13.805 "supported_io_types": { 00:16:13.805 "read": true, 00:16:13.805 "write": true, 00:16:13.805 "unmap": true, 00:16:13.805 "flush": true, 00:16:13.805 "reset": true, 00:16:13.805 "nvme_admin": false, 00:16:13.805 "nvme_io": false, 00:16:13.805 "nvme_io_md": false, 00:16:13.805 "write_zeroes": true, 00:16:13.805 "zcopy": true, 00:16:13.805 "get_zone_info": false, 00:16:13.805 "zone_management": false, 00:16:13.805 "zone_append": false, 00:16:13.805 "compare": false, 00:16:13.805 "compare_and_write": false, 00:16:13.805 "abort": true, 00:16:13.805 "seek_hole": false, 00:16:13.805 "seek_data": false, 00:16:13.805 "copy": true, 00:16:13.805 "nvme_iov_md": false 00:16:13.805 }, 00:16:13.805 "memory_domains": [ 00:16:13.805 { 00:16:13.805 "dma_device_id": "system", 00:16:13.805 "dma_device_type": 1 00:16:13.805 }, 00:16:13.805 { 00:16:13.805 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:13.805 "dma_device_type": 2 00:16:13.805 } 00:16:13.805 ], 00:16:13.805 "driver_specific": {} 00:16:13.805 }' 00:16:13.805 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:13.805 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:13.805 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:13.805 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:13.805 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:14.064 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:14.064 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:14.064 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:14.064 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:14.064 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:14.064 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:14.064 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:14.064 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:14.322 [2024-07-25 18:43:14.820449] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:14.322 [2024-07-25 18:43:14.820652] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:14.322 [2024-07-25 18:43:14.820835] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.580 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:14.580 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:16:14.580 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:14.580 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:16:14.580 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:14.580 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:16:14.580 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:14.580 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:14.580 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:14.580 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:14.580 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:14.580 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:14.580 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:14.580 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:16:14.580 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:14.580 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.580 18:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.838 18:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:14.838 "name": "Existed_Raid", 00:16:14.838 "uuid": "d6218149-baf9-4ee6-98f8-15fff99f7f87", 00:16:14.838 "strip_size_kb": 64, 00:16:14.838 "state": "offline", 00:16:14.838 "raid_level": "concat", 00:16:14.838 "superblock": true, 00:16:14.838 "num_base_bdevs": 2, 00:16:14.838 "num_base_bdevs_discovered": 1, 00:16:14.838 "num_base_bdevs_operational": 1, 00:16:14.838 "base_bdevs_list": [ 00:16:14.838 { 00:16:14.838 "name": null, 00:16:14.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.838 "is_configured": false, 00:16:14.838 "data_offset": 2048, 00:16:14.838 "data_size": 63488 00:16:14.838 }, 00:16:14.838 { 00:16:14.838 "name": "BaseBdev2", 00:16:14.838 "uuid": "5daad860-2efa-454b-b838-8cb35d0eb3ec", 00:16:14.838 "is_configured": true, 00:16:14.838 "data_offset": 2048, 00:16:14.838 "data_size": 63488 00:16:14.838 } 00:16:14.838 ] 00:16:14.838 }' 00:16:14.838 18:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:14.838 18:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.406 18:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:15.406 18:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:15.406 18:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.406 18:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:15.665 18:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:15.665 18:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:15.665 18:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:15.665 [2024-07-25 18:43:16.234059] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:15.665 [2024-07-25 18:43:16.234301] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:16:15.924 18:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:15.924 18:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:15.924 18:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.924 18:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:16.182 18:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:16.182 18:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 
-- # '[' -n '' ']' 00:16:16.182 18:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:16.182 18:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 121954 00:16:16.182 18:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 121954 ']' 00:16:16.182 18:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 121954 00:16:16.182 18:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:16.182 18:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:16.182 18:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 121954 00:16:16.182 18:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:16.182 18:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:16.182 18:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 121954' 00:16:16.182 killing process with pid 121954 00:16:16.182 18:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 121954 00:16:16.182 [2024-07-25 18:43:16.653828] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:16.182 18:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 121954 00:16:16.182 [2024-07-25 18:43:16.654055] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:17.559 ************************************ 00:16:17.559 END TEST raid_state_function_test_sb 00:16:17.559 ************************************ 00:16:17.559 18:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:17.559 00:16:17.559 real 0m10.888s 00:16:17.559 user 0m18.259s 00:16:17.559 sys 0m1.902s 00:16:17.559 18:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:17.559 18:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.559 18:43:17 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:16:17.559 18:43:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:17.559 18:43:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:17.559 18:43:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:17.559 ************************************ 00:16:17.559 START TEST raid_superblock_test 00:16:17.559 ************************************ 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=122325 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 122325 /var/tmp/spdk-raid.sock 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 122325 ']' 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:17.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:17.559 18:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.559 [2024-07-25 18:43:18.001371] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
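At this point the raid_superblock_test harness has launched a standalone bdev_svc application on its own RPC socket (/var/tmp/spdk-raid.sock, with bdev_raid debug logging enabled via -L) and is waiting for it to come up; every subsequent rpc.py call in the trace targets that socket with -s. The snippet below condenses the setup sequence this test drives (the malloc, passthru and raid-create calls appear in the trace that follows). It is an illustrative sketch only: the polling loop stands in for the waitforlisten helper from autotest_common.sh, and error handling is omitted.

  # Sketch of the RPC setup sequence, condensed from the traced commands.
  SOCK=/var/tmp/spdk-raid.sock
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $SOCK"

  # Bare bdev service with raid debug logging, listening on a private socket.
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$SOCK" -L bdev_raid &
  raid_pid=$!

  # Stand-in for waitforlisten: poll until the RPC socket answers.
  until $RPC rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

  # Two 32 MiB malloc bdevs (512-byte blocks), each wrapped in a passthru bdev,
  # then combined into a concat raid with a 64 KiB strip and an on-disk superblock (-s).
  $RPC bdev_malloc_create 32 512 -b malloc1
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $RPC bdev_malloc_create 32 512 -b malloc2
  $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  $RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s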
00:16:17.559 [2024-07-25 18:43:18.002590] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122325 ] 00:16:17.818 [2024-07-25 18:43:18.189539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.077 [2024-07-25 18:43:18.406393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.077 [2024-07-25 18:43:18.594756] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.643 18:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:18.643 18:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:16:18.643 18:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:16:18.643 18:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:18.643 18:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:16:18.643 18:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:16:18.643 18:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:18.643 18:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:18.643 18:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:16:18.643 18:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:18.643 18:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:18.643 malloc1 00:16:18.904 18:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:18.904 [2024-07-25 18:43:19.396714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:18.904 [2024-07-25 18:43:19.397047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.904 [2024-07-25 18:43:19.397146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:16:18.904 [2024-07-25 18:43:19.397255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.904 [2024-07-25 18:43:19.400048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.904 [2024-07-25 18:43:19.400232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:18.904 pt1 00:16:18.904 18:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:16:18.904 18:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:18.904 18:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:16:18.904 18:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:16:18.904 18:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:18.904 18:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:16:18.904 18:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:16:18.904 18:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:18.904 18:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:19.163 malloc2 00:16:19.163 18:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:19.421 [2024-07-25 18:43:19.951181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:19.421 [2024-07-25 18:43:19.951452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.421 [2024-07-25 18:43:19.951526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:16:19.421 [2024-07-25 18:43:19.951726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.421 [2024-07-25 18:43:19.954411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.421 [2024-07-25 18:43:19.954559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:19.421 pt2 00:16:19.421 18:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:16:19.421 18:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:19.422 18:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:16:19.680 [2024-07-25 18:43:20.215392] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:19.680 [2024-07-25 18:43:20.217869] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:19.680 [2024-07-25 18:43:20.218200] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:16:19.680 [2024-07-25 18:43:20.218313] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:19.680 [2024-07-25 18:43:20.218522] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:16:19.680 [2024-07-25 18:43:20.218996] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:16:19.680 [2024-07-25 18:43:20.219107] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:16:19.680 [2024-07-25 18:43:20.219401] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.680 18:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:19.680 18:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:19.680 18:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:19.680 18:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:19.680 18:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:19.680 18:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:16:19.680 18:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:19.680 18:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:19.680 18:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:19.680 18:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:19.680 18:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.680 18:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.939 18:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:19.939 "name": "raid_bdev1", 00:16:19.939 "uuid": "05cad971-59a9-4d34-9466-c31b38bbd111", 00:16:19.939 "strip_size_kb": 64, 00:16:19.939 "state": "online", 00:16:19.939 "raid_level": "concat", 00:16:19.939 "superblock": true, 00:16:19.939 "num_base_bdevs": 2, 00:16:19.939 "num_base_bdevs_discovered": 2, 00:16:19.939 "num_base_bdevs_operational": 2, 00:16:19.939 "base_bdevs_list": [ 00:16:19.939 { 00:16:19.939 "name": "pt1", 00:16:19.939 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:19.939 "is_configured": true, 00:16:19.939 "data_offset": 2048, 00:16:19.939 "data_size": 63488 00:16:19.939 }, 00:16:19.939 { 00:16:19.939 "name": "pt2", 00:16:19.939 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.939 "is_configured": true, 00:16:19.939 "data_offset": 2048, 00:16:19.939 "data_size": 63488 00:16:19.939 } 00:16:19.939 ] 00:16:19.939 }' 00:16:19.939 18:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:19.939 18:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.507 18:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:16:20.507 18:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:20.507 18:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:20.507 18:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:20.507 18:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:20.507 18:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:20.507 18:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:20.507 18:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:20.507 [2024-07-25 18:43:21.079770] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:20.766 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:20.766 "name": "raid_bdev1", 00:16:20.766 "aliases": [ 00:16:20.766 "05cad971-59a9-4d34-9466-c31b38bbd111" 00:16:20.766 ], 00:16:20.766 "product_name": "Raid Volume", 00:16:20.766 "block_size": 512, 00:16:20.766 "num_blocks": 126976, 00:16:20.766 "uuid": "05cad971-59a9-4d34-9466-c31b38bbd111", 00:16:20.766 "assigned_rate_limits": { 00:16:20.766 "rw_ios_per_sec": 0, 00:16:20.766 "rw_mbytes_per_sec": 0, 00:16:20.766 "r_mbytes_per_sec": 0, 00:16:20.766 "w_mbytes_per_sec": 0 00:16:20.766 }, 
00:16:20.766 "claimed": false, 00:16:20.766 "zoned": false, 00:16:20.766 "supported_io_types": { 00:16:20.766 "read": true, 00:16:20.766 "write": true, 00:16:20.766 "unmap": true, 00:16:20.766 "flush": true, 00:16:20.766 "reset": true, 00:16:20.766 "nvme_admin": false, 00:16:20.766 "nvme_io": false, 00:16:20.766 "nvme_io_md": false, 00:16:20.766 "write_zeroes": true, 00:16:20.766 "zcopy": false, 00:16:20.767 "get_zone_info": false, 00:16:20.767 "zone_management": false, 00:16:20.767 "zone_append": false, 00:16:20.767 "compare": false, 00:16:20.767 "compare_and_write": false, 00:16:20.767 "abort": false, 00:16:20.767 "seek_hole": false, 00:16:20.767 "seek_data": false, 00:16:20.767 "copy": false, 00:16:20.767 "nvme_iov_md": false 00:16:20.767 }, 00:16:20.767 "memory_domains": [ 00:16:20.767 { 00:16:20.767 "dma_device_id": "system", 00:16:20.767 "dma_device_type": 1 00:16:20.767 }, 00:16:20.767 { 00:16:20.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.767 "dma_device_type": 2 00:16:20.767 }, 00:16:20.767 { 00:16:20.767 "dma_device_id": "system", 00:16:20.767 "dma_device_type": 1 00:16:20.767 }, 00:16:20.767 { 00:16:20.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.767 "dma_device_type": 2 00:16:20.767 } 00:16:20.767 ], 00:16:20.767 "driver_specific": { 00:16:20.767 "raid": { 00:16:20.767 "uuid": "05cad971-59a9-4d34-9466-c31b38bbd111", 00:16:20.767 "strip_size_kb": 64, 00:16:20.767 "state": "online", 00:16:20.767 "raid_level": "concat", 00:16:20.767 "superblock": true, 00:16:20.767 "num_base_bdevs": 2, 00:16:20.767 "num_base_bdevs_discovered": 2, 00:16:20.767 "num_base_bdevs_operational": 2, 00:16:20.767 "base_bdevs_list": [ 00:16:20.767 { 00:16:20.767 "name": "pt1", 00:16:20.767 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:20.767 "is_configured": true, 00:16:20.767 "data_offset": 2048, 00:16:20.767 "data_size": 63488 00:16:20.767 }, 00:16:20.767 { 00:16:20.767 "name": "pt2", 00:16:20.767 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:20.767 "is_configured": true, 00:16:20.767 "data_offset": 2048, 00:16:20.767 "data_size": 63488 00:16:20.767 } 00:16:20.767 ] 00:16:20.767 } 00:16:20.767 } 00:16:20.767 }' 00:16:20.767 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:20.767 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:20.767 pt2' 00:16:20.767 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:20.767 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:20.767 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:20.767 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:20.767 "name": "pt1", 00:16:20.767 "aliases": [ 00:16:20.767 "00000000-0000-0000-0000-000000000001" 00:16:20.767 ], 00:16:20.767 "product_name": "passthru", 00:16:20.767 "block_size": 512, 00:16:20.767 "num_blocks": 65536, 00:16:20.767 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:20.767 "assigned_rate_limits": { 00:16:20.767 "rw_ios_per_sec": 0, 00:16:20.767 "rw_mbytes_per_sec": 0, 00:16:20.767 "r_mbytes_per_sec": 0, 00:16:20.767 "w_mbytes_per_sec": 0 00:16:20.767 }, 00:16:20.767 "claimed": true, 00:16:20.767 "claim_type": "exclusive_write", 00:16:20.767 "zoned": false, 00:16:20.767 
"supported_io_types": { 00:16:20.767 "read": true, 00:16:20.767 "write": true, 00:16:20.767 "unmap": true, 00:16:20.767 "flush": true, 00:16:20.767 "reset": true, 00:16:20.767 "nvme_admin": false, 00:16:20.767 "nvme_io": false, 00:16:20.767 "nvme_io_md": false, 00:16:20.767 "write_zeroes": true, 00:16:20.767 "zcopy": true, 00:16:20.767 "get_zone_info": false, 00:16:20.767 "zone_management": false, 00:16:20.767 "zone_append": false, 00:16:20.767 "compare": false, 00:16:20.767 "compare_and_write": false, 00:16:20.767 "abort": true, 00:16:20.767 "seek_hole": false, 00:16:20.767 "seek_data": false, 00:16:20.767 "copy": true, 00:16:20.767 "nvme_iov_md": false 00:16:20.767 }, 00:16:20.767 "memory_domains": [ 00:16:20.767 { 00:16:20.767 "dma_device_id": "system", 00:16:20.767 "dma_device_type": 1 00:16:20.767 }, 00:16:20.767 { 00:16:20.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.767 "dma_device_type": 2 00:16:20.767 } 00:16:20.767 ], 00:16:20.767 "driver_specific": { 00:16:20.767 "passthru": { 00:16:20.767 "name": "pt1", 00:16:20.767 "base_bdev_name": "malloc1" 00:16:20.767 } 00:16:20.767 } 00:16:20.767 }' 00:16:20.767 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:21.026 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:21.026 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:21.026 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:21.026 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:21.026 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:21.026 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:21.026 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:21.026 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:21.026 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:21.026 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:21.284 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:21.284 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:21.284 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:21.284 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:21.284 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:21.284 "name": "pt2", 00:16:21.284 "aliases": [ 00:16:21.284 "00000000-0000-0000-0000-000000000002" 00:16:21.284 ], 00:16:21.284 "product_name": "passthru", 00:16:21.284 "block_size": 512, 00:16:21.284 "num_blocks": 65536, 00:16:21.284 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.284 "assigned_rate_limits": { 00:16:21.284 "rw_ios_per_sec": 0, 00:16:21.284 "rw_mbytes_per_sec": 0, 00:16:21.284 "r_mbytes_per_sec": 0, 00:16:21.284 "w_mbytes_per_sec": 0 00:16:21.284 }, 00:16:21.284 "claimed": true, 00:16:21.284 "claim_type": "exclusive_write", 00:16:21.284 "zoned": false, 00:16:21.284 "supported_io_types": { 00:16:21.284 "read": true, 00:16:21.284 "write": true, 00:16:21.284 "unmap": true, 00:16:21.284 "flush": true, 00:16:21.284 
"reset": true, 00:16:21.284 "nvme_admin": false, 00:16:21.284 "nvme_io": false, 00:16:21.284 "nvme_io_md": false, 00:16:21.285 "write_zeroes": true, 00:16:21.285 "zcopy": true, 00:16:21.285 "get_zone_info": false, 00:16:21.285 "zone_management": false, 00:16:21.285 "zone_append": false, 00:16:21.285 "compare": false, 00:16:21.285 "compare_and_write": false, 00:16:21.285 "abort": true, 00:16:21.285 "seek_hole": false, 00:16:21.285 "seek_data": false, 00:16:21.285 "copy": true, 00:16:21.285 "nvme_iov_md": false 00:16:21.285 }, 00:16:21.285 "memory_domains": [ 00:16:21.285 { 00:16:21.285 "dma_device_id": "system", 00:16:21.285 "dma_device_type": 1 00:16:21.285 }, 00:16:21.285 { 00:16:21.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.285 "dma_device_type": 2 00:16:21.285 } 00:16:21.285 ], 00:16:21.285 "driver_specific": { 00:16:21.285 "passthru": { 00:16:21.285 "name": "pt2", 00:16:21.285 "base_bdev_name": "malloc2" 00:16:21.285 } 00:16:21.285 } 00:16:21.285 }' 00:16:21.285 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:21.285 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:21.543 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:21.543 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:21.543 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:21.543 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:21.543 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:21.543 18:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:21.543 18:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:21.543 18:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:21.543 18:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:21.543 18:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:21.802 18:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:16:21.802 18:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:21.802 [2024-07-25 18:43:22.351992] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.802 18:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=05cad971-59a9-4d34-9466-c31b38bbd111 00:16:21.802 18:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 05cad971-59a9-4d34-9466-c31b38bbd111 ']' 00:16:21.802 18:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:22.061 [2024-07-25 18:43:22.599788] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:22.061 [2024-07-25 18:43:22.600000] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:22.061 [2024-07-25 18:43:22.600216] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:22.061 [2024-07-25 18:43:22.600370] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:16:22.061 [2024-07-25 18:43:22.600450] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:16:22.061 18:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.061 18:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:16:22.320 18:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:16:22.320 18:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:16:22.320 18:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:22.320 18:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:22.581 18:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:22.581 18:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:22.581 18:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:22.581 18:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:22.876 18:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:16:22.876 18:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:16:22.876 18:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:22.876 18:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:16:22.876 18:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.876 18:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.876 18:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.876 18:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.876 18:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.876 18:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.876 18:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.876 18:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:22.876 18:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:16:23.157 [2024-07-25 18:43:23.487969] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:23.157 [2024-07-25 18:43:23.490391] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:23.157 [2024-07-25 18:43:23.490585] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:23.157 [2024-07-25 18:43:23.490778] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:23.157 [2024-07-25 18:43:23.490895] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:23.157 [2024-07-25 18:43:23.490929] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state configuring 00:16:23.157 request: 00:16:23.157 { 00:16:23.157 "name": "raid_bdev1", 00:16:23.157 "raid_level": "concat", 00:16:23.157 "base_bdevs": [ 00:16:23.157 "malloc1", 00:16:23.157 "malloc2" 00:16:23.157 ], 00:16:23.157 "strip_size_kb": 64, 00:16:23.157 "superblock": false, 00:16:23.157 "method": "bdev_raid_create", 00:16:23.157 "req_id": 1 00:16:23.157 } 00:16:23.157 Got JSON-RPC error response 00:16:23.157 response: 00:16:23.157 { 00:16:23.157 "code": -17, 00:16:23.157 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:23.157 } 00:16:23.157 18:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:23.157 18:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:23.157 18:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:23.157 18:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:23.157 18:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.157 18:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:16:23.415 18:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:16:23.415 18:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:16:23.415 18:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:23.415 [2024-07-25 18:43:23.922860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:23.415 [2024-07-25 18:43:23.923398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.415 [2024-07-25 18:43:23.923639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:23.415 [2024-07-25 18:43:23.923859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.415 [2024-07-25 18:43:23.926720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.415 [2024-07-25 18:43:23.927020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:23.415 [2024-07-25 18:43:23.927367] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:23.415 [2024-07-25 18:43:23.927520] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:23.415 pt1 00:16:23.415 18:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # 
verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:16:23.415 18:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:23.415 18:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:23.415 18:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:23.415 18:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:23.415 18:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:23.415 18:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:23.415 18:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:23.415 18:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:23.415 18:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:23.415 18:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.415 18:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.982 18:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:23.982 "name": "raid_bdev1", 00:16:23.982 "uuid": "05cad971-59a9-4d34-9466-c31b38bbd111", 00:16:23.982 "strip_size_kb": 64, 00:16:23.982 "state": "configuring", 00:16:23.982 "raid_level": "concat", 00:16:23.982 "superblock": true, 00:16:23.982 "num_base_bdevs": 2, 00:16:23.982 "num_base_bdevs_discovered": 1, 00:16:23.982 "num_base_bdevs_operational": 2, 00:16:23.982 "base_bdevs_list": [ 00:16:23.982 { 00:16:23.982 "name": "pt1", 00:16:23.982 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:23.982 "is_configured": true, 00:16:23.982 "data_offset": 2048, 00:16:23.982 "data_size": 63488 00:16:23.982 }, 00:16:23.982 { 00:16:23.982 "name": null, 00:16:23.982 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:23.982 "is_configured": false, 00:16:23.982 "data_offset": 2048, 00:16:23.982 "data_size": 63488 00:16:23.982 } 00:16:23.982 ] 00:16:23.982 }' 00:16:23.982 18:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:23.982 18:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.549 18:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:16:24.549 18:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:16:24.549 18:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:24.549 18:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:24.549 [2024-07-25 18:43:25.122198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:24.549 [2024-07-25 18:43:25.122485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.549 [2024-07-25 18:43:25.122561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:24.549 [2024-07-25 18:43:25.122821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.549 [2024-07-25 
18:43:25.123414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.549 [2024-07-25 18:43:25.123917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:24.549 [2024-07-25 18:43:25.124220] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:24.549 [2024-07-25 18:43:25.124339] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:24.549 [2024-07-25 18:43:25.124523] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:16:24.549 [2024-07-25 18:43:25.124842] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:24.808 [2024-07-25 18:43:25.124979] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:24.808 [2024-07-25 18:43:25.125384] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:16:24.808 [2024-07-25 18:43:25.125574] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:16:24.808 [2024-07-25 18:43:25.125750] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.808 pt2 00:16:24.808 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:16:24.808 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:24.808 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:24.808 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:24.808 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:24.808 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:24.808 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:24.808 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:24.808 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:24.808 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:24.808 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:24.808 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:24.808 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.808 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.067 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:25.067 "name": "raid_bdev1", 00:16:25.067 "uuid": "05cad971-59a9-4d34-9466-c31b38bbd111", 00:16:25.067 "strip_size_kb": 64, 00:16:25.067 "state": "online", 00:16:25.067 "raid_level": "concat", 00:16:25.067 "superblock": true, 00:16:25.067 "num_base_bdevs": 2, 00:16:25.067 "num_base_bdevs_discovered": 2, 00:16:25.067 "num_base_bdevs_operational": 2, 00:16:25.067 "base_bdevs_list": [ 00:16:25.067 { 00:16:25.067 "name": "pt1", 00:16:25.067 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:25.067 "is_configured": true, 00:16:25.067 "data_offset": 2048, 00:16:25.067 
"data_size": 63488 00:16:25.067 }, 00:16:25.067 { 00:16:25.067 "name": "pt2", 00:16:25.067 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:25.067 "is_configured": true, 00:16:25.067 "data_offset": 2048, 00:16:25.067 "data_size": 63488 00:16:25.067 } 00:16:25.067 ] 00:16:25.067 }' 00:16:25.067 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:25.067 18:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.634 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:16:25.634 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:25.634 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:25.634 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:25.634 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:25.634 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:25.634 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:25.634 18:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:25.893 [2024-07-25 18:43:26.226610] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.894 18:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:25.894 "name": "raid_bdev1", 00:16:25.894 "aliases": [ 00:16:25.894 "05cad971-59a9-4d34-9466-c31b38bbd111" 00:16:25.894 ], 00:16:25.894 "product_name": "Raid Volume", 00:16:25.894 "block_size": 512, 00:16:25.894 "num_blocks": 126976, 00:16:25.894 "uuid": "05cad971-59a9-4d34-9466-c31b38bbd111", 00:16:25.894 "assigned_rate_limits": { 00:16:25.894 "rw_ios_per_sec": 0, 00:16:25.894 "rw_mbytes_per_sec": 0, 00:16:25.894 "r_mbytes_per_sec": 0, 00:16:25.894 "w_mbytes_per_sec": 0 00:16:25.894 }, 00:16:25.894 "claimed": false, 00:16:25.894 "zoned": false, 00:16:25.894 "supported_io_types": { 00:16:25.894 "read": true, 00:16:25.894 "write": true, 00:16:25.894 "unmap": true, 00:16:25.894 "flush": true, 00:16:25.894 "reset": true, 00:16:25.894 "nvme_admin": false, 00:16:25.894 "nvme_io": false, 00:16:25.894 "nvme_io_md": false, 00:16:25.894 "write_zeroes": true, 00:16:25.894 "zcopy": false, 00:16:25.894 "get_zone_info": false, 00:16:25.894 "zone_management": false, 00:16:25.894 "zone_append": false, 00:16:25.894 "compare": false, 00:16:25.894 "compare_and_write": false, 00:16:25.894 "abort": false, 00:16:25.894 "seek_hole": false, 00:16:25.894 "seek_data": false, 00:16:25.894 "copy": false, 00:16:25.894 "nvme_iov_md": false 00:16:25.894 }, 00:16:25.894 "memory_domains": [ 00:16:25.894 { 00:16:25.894 "dma_device_id": "system", 00:16:25.894 "dma_device_type": 1 00:16:25.894 }, 00:16:25.894 { 00:16:25.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.894 "dma_device_type": 2 00:16:25.894 }, 00:16:25.894 { 00:16:25.894 "dma_device_id": "system", 00:16:25.894 "dma_device_type": 1 00:16:25.894 }, 00:16:25.894 { 00:16:25.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.894 "dma_device_type": 2 00:16:25.894 } 00:16:25.894 ], 00:16:25.894 "driver_specific": { 00:16:25.894 "raid": { 00:16:25.894 "uuid": "05cad971-59a9-4d34-9466-c31b38bbd111", 00:16:25.894 "strip_size_kb": 64, 00:16:25.894 "state": 
"online", 00:16:25.894 "raid_level": "concat", 00:16:25.894 "superblock": true, 00:16:25.894 "num_base_bdevs": 2, 00:16:25.894 "num_base_bdevs_discovered": 2, 00:16:25.894 "num_base_bdevs_operational": 2, 00:16:25.894 "base_bdevs_list": [ 00:16:25.894 { 00:16:25.894 "name": "pt1", 00:16:25.894 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:25.894 "is_configured": true, 00:16:25.894 "data_offset": 2048, 00:16:25.894 "data_size": 63488 00:16:25.894 }, 00:16:25.894 { 00:16:25.894 "name": "pt2", 00:16:25.894 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:25.894 "is_configured": true, 00:16:25.894 "data_offset": 2048, 00:16:25.894 "data_size": 63488 00:16:25.894 } 00:16:25.894 ] 00:16:25.894 } 00:16:25.894 } 00:16:25.894 }' 00:16:25.894 18:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:25.894 18:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:25.894 pt2' 00:16:25.894 18:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:25.894 18:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:25.894 18:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:26.152 18:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:26.152 "name": "pt1", 00:16:26.152 "aliases": [ 00:16:26.152 "00000000-0000-0000-0000-000000000001" 00:16:26.152 ], 00:16:26.152 "product_name": "passthru", 00:16:26.152 "block_size": 512, 00:16:26.152 "num_blocks": 65536, 00:16:26.152 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:26.152 "assigned_rate_limits": { 00:16:26.152 "rw_ios_per_sec": 0, 00:16:26.153 "rw_mbytes_per_sec": 0, 00:16:26.153 "r_mbytes_per_sec": 0, 00:16:26.153 "w_mbytes_per_sec": 0 00:16:26.153 }, 00:16:26.153 "claimed": true, 00:16:26.153 "claim_type": "exclusive_write", 00:16:26.153 "zoned": false, 00:16:26.153 "supported_io_types": { 00:16:26.153 "read": true, 00:16:26.153 "write": true, 00:16:26.153 "unmap": true, 00:16:26.153 "flush": true, 00:16:26.153 "reset": true, 00:16:26.153 "nvme_admin": false, 00:16:26.153 "nvme_io": false, 00:16:26.153 "nvme_io_md": false, 00:16:26.153 "write_zeroes": true, 00:16:26.153 "zcopy": true, 00:16:26.153 "get_zone_info": false, 00:16:26.153 "zone_management": false, 00:16:26.153 "zone_append": false, 00:16:26.153 "compare": false, 00:16:26.153 "compare_and_write": false, 00:16:26.153 "abort": true, 00:16:26.153 "seek_hole": false, 00:16:26.153 "seek_data": false, 00:16:26.153 "copy": true, 00:16:26.153 "nvme_iov_md": false 00:16:26.153 }, 00:16:26.153 "memory_domains": [ 00:16:26.153 { 00:16:26.153 "dma_device_id": "system", 00:16:26.153 "dma_device_type": 1 00:16:26.153 }, 00:16:26.153 { 00:16:26.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.153 "dma_device_type": 2 00:16:26.153 } 00:16:26.153 ], 00:16:26.153 "driver_specific": { 00:16:26.153 "passthru": { 00:16:26.153 "name": "pt1", 00:16:26.153 "base_bdev_name": "malloc1" 00:16:26.153 } 00:16:26.153 } 00:16:26.153 }' 00:16:26.153 18:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:26.153 18:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:26.153 18:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:16:26.153 18:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:26.153 18:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:26.410 18:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:26.410 18:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:26.410 18:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:26.410 18:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:26.410 18:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:26.410 18:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:26.410 18:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:26.410 18:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:26.410 18:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:26.410 18:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:26.667 18:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:26.667 "name": "pt2", 00:16:26.667 "aliases": [ 00:16:26.667 "00000000-0000-0000-0000-000000000002" 00:16:26.667 ], 00:16:26.667 "product_name": "passthru", 00:16:26.667 "block_size": 512, 00:16:26.667 "num_blocks": 65536, 00:16:26.667 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:26.667 "assigned_rate_limits": { 00:16:26.667 "rw_ios_per_sec": 0, 00:16:26.667 "rw_mbytes_per_sec": 0, 00:16:26.667 "r_mbytes_per_sec": 0, 00:16:26.667 "w_mbytes_per_sec": 0 00:16:26.667 }, 00:16:26.667 "claimed": true, 00:16:26.667 "claim_type": "exclusive_write", 00:16:26.667 "zoned": false, 00:16:26.667 "supported_io_types": { 00:16:26.667 "read": true, 00:16:26.667 "write": true, 00:16:26.667 "unmap": true, 00:16:26.667 "flush": true, 00:16:26.667 "reset": true, 00:16:26.667 "nvme_admin": false, 00:16:26.667 "nvme_io": false, 00:16:26.667 "nvme_io_md": false, 00:16:26.667 "write_zeroes": true, 00:16:26.667 "zcopy": true, 00:16:26.667 "get_zone_info": false, 00:16:26.667 "zone_management": false, 00:16:26.667 "zone_append": false, 00:16:26.667 "compare": false, 00:16:26.667 "compare_and_write": false, 00:16:26.667 "abort": true, 00:16:26.667 "seek_hole": false, 00:16:26.667 "seek_data": false, 00:16:26.667 "copy": true, 00:16:26.667 "nvme_iov_md": false 00:16:26.667 }, 00:16:26.667 "memory_domains": [ 00:16:26.667 { 00:16:26.667 "dma_device_id": "system", 00:16:26.667 "dma_device_type": 1 00:16:26.667 }, 00:16:26.667 { 00:16:26.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.668 "dma_device_type": 2 00:16:26.668 } 00:16:26.668 ], 00:16:26.668 "driver_specific": { 00:16:26.668 "passthru": { 00:16:26.668 "name": "pt2", 00:16:26.668 "base_bdev_name": "malloc2" 00:16:26.668 } 00:16:26.668 } 00:16:26.668 }' 00:16:26.668 18:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:26.668 18:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:26.668 18:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:26.668 18:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:26.668 18:43:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:26.925 18:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:26.925 18:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:26.925 18:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:26.925 18:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:26.925 18:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:26.925 18:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:26.925 18:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:26.925 18:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:26.925 18:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:16:27.183 [2024-07-25 18:43:27.718926] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.183 18:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 05cad971-59a9-4d34-9466-c31b38bbd111 '!=' 05cad971-59a9-4d34-9466-c31b38bbd111 ']' 00:16:27.183 18:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:16:27.183 18:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:27.183 18:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:27.183 18:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 122325 00:16:27.183 18:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 122325 ']' 00:16:27.183 18:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 122325 00:16:27.183 18:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:16:27.183 18:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:27.183 18:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 122325 00:16:27.441 killing process with pid 122325 00:16:27.441 18:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:27.441 18:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:27.441 18:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 122325' 00:16:27.441 18:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 122325 00:16:27.441 18:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 122325 00:16:27.441 [2024-07-25 18:43:27.768748] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:27.441 [2024-07-25 18:43:27.768842] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.441 [2024-07-25 18:43:27.768897] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.441 [2024-07-25 18:43:27.768906] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:16:27.441 [2024-07-25 18:43:27.930320] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:28.814 ************************************ 
00:16:28.814 END TEST raid_superblock_test 00:16:28.814 ************************************ 00:16:28.814 18:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:16:28.814 00:16:28.814 real 0m11.174s 00:16:28.814 user 0m18.864s 00:16:28.814 sys 0m1.942s 00:16:28.814 18:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:28.814 18:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.814 18:43:29 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:16:28.814 18:43:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:28.814 18:43:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:28.814 18:43:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:28.814 ************************************ 00:16:28.814 START TEST raid_read_error_test 00:16:28.814 ************************************ 00:16:28.814 18:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:16:28.814 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:16:28.814 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:16:28.814 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:16:28.814 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:16:28.814 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:28.814 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev1 00:16:28.814 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:28.814 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:28.815 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev2 00:16:28.815 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:28.815 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:28.815 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:28.815 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:16:28.815 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:16:28.815 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:16:28.815 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:16:28.815 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:16:28.815 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:16:28.815 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:16:28.815 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:16:28.815 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:16:28.815 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:16:28.815 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.CSXZUVSd87 00:16:28.815 18:43:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=122697 00:16:28.815 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 122697 /var/tmp/spdk-raid.sock 00:16:28.815 18:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 122697 ']' 00:16:28.815 18:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:28.815 18:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:28.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:28.815 18:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:28.815 18:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:28.815 18:43:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.815 18:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:28.815 [2024-07-25 18:43:29.256231] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:28.815 [2024-07-25 18:43:29.256674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122697 ] 00:16:29.073 [2024-07-25 18:43:29.433541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.331 [2024-07-25 18:43:29.675181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.589 [2024-07-25 18:43:29.935737] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.847 18:43:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:29.847 18:43:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:16:29.847 18:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:29.847 18:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:29.847 BaseBdev1_malloc 00:16:29.847 18:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:30.105 true 00:16:30.105 18:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:30.363 [2024-07-25 18:43:30.861432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:30.363 [2024-07-25 18:43:30.861742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.363 [2024-07-25 18:43:30.861837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:16:30.363 [2024-07-25 18:43:30.861947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.363 [2024-07-25 18:43:30.864688] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.363 [2024-07-25 18:43:30.864860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:30.363 BaseBdev1 00:16:30.363 18:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:30.363 18:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:30.621 BaseBdev2_malloc 00:16:30.621 18:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:30.878 true 00:16:30.878 18:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:31.136 [2024-07-25 18:43:31.547559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:31.136 [2024-07-25 18:43:31.547883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.136 [2024-07-25 18:43:31.547963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:31.136 [2024-07-25 18:43:31.548131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.136 [2024-07-25 18:43:31.550824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.136 [2024-07-25 18:43:31.550984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:31.136 BaseBdev2 00:16:31.136 18:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:31.393 [2024-07-25 18:43:31.719743] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.393 [2024-07-25 18:43:31.722205] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.393 [2024-07-25 18:43:31.722545] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:16:31.393 [2024-07-25 18:43:31.722686] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:31.393 [2024-07-25 18:43:31.722864] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:16:31.393 [2024-07-25 18:43:31.723318] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:16:31.393 [2024-07-25 18:43:31.723357] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:16:31.393 [2024-07-25 18:43:31.723688] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.393 18:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:31.393 18:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:31.393 18:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:31.393 18:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:31.393 18:43:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:31.393 18:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:31.393 18:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:31.393 18:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:31.393 18:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:31.393 18:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:31.393 18:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.393 18:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.393 18:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:31.393 "name": "raid_bdev1", 00:16:31.393 "uuid": "9b2953a8-c682-43ed-b972-8e7d1c300f3e", 00:16:31.393 "strip_size_kb": 64, 00:16:31.393 "state": "online", 00:16:31.393 "raid_level": "concat", 00:16:31.393 "superblock": true, 00:16:31.393 "num_base_bdevs": 2, 00:16:31.393 "num_base_bdevs_discovered": 2, 00:16:31.393 "num_base_bdevs_operational": 2, 00:16:31.393 "base_bdevs_list": [ 00:16:31.393 { 00:16:31.393 "name": "BaseBdev1", 00:16:31.393 "uuid": "ce7dafcc-3b55-579d-be9c-0c13d6e739e5", 00:16:31.393 "is_configured": true, 00:16:31.393 "data_offset": 2048, 00:16:31.393 "data_size": 63488 00:16:31.393 }, 00:16:31.393 { 00:16:31.393 "name": "BaseBdev2", 00:16:31.393 "uuid": "a8ec136a-f0df-5f34-a7c4-979b28b2c469", 00:16:31.393 "is_configured": true, 00:16:31.393 "data_offset": 2048, 00:16:31.393 "data_size": 63488 00:16:31.393 } 00:16:31.393 ] 00:16:31.393 }' 00:16:31.393 18:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:31.393 18:43:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.958 18:43:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:16:31.958 18:43:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:32.216 [2024-07-25 18:43:32.561388] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:33.150 18:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:33.150 18:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:16:33.150 18:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:16:33.150 18:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:16:33.150 18:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:33.150 18:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:33.150 18:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:33.150 18:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:33.150 18:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- 
# local strip_size=64 00:16:33.150 18:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:33.150 18:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:33.150 18:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:33.150 18:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:33.150 18:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:33.150 18:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.150 18:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.408 18:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:33.408 "name": "raid_bdev1", 00:16:33.408 "uuid": "9b2953a8-c682-43ed-b972-8e7d1c300f3e", 00:16:33.408 "strip_size_kb": 64, 00:16:33.408 "state": "online", 00:16:33.408 "raid_level": "concat", 00:16:33.408 "superblock": true, 00:16:33.408 "num_base_bdevs": 2, 00:16:33.408 "num_base_bdevs_discovered": 2, 00:16:33.408 "num_base_bdevs_operational": 2, 00:16:33.408 "base_bdevs_list": [ 00:16:33.408 { 00:16:33.408 "name": "BaseBdev1", 00:16:33.408 "uuid": "ce7dafcc-3b55-579d-be9c-0c13d6e739e5", 00:16:33.408 "is_configured": true, 00:16:33.408 "data_offset": 2048, 00:16:33.408 "data_size": 63488 00:16:33.408 }, 00:16:33.408 { 00:16:33.408 "name": "BaseBdev2", 00:16:33.408 "uuid": "a8ec136a-f0df-5f34-a7c4-979b28b2c469", 00:16:33.408 "is_configured": true, 00:16:33.408 "data_offset": 2048, 00:16:33.408 "data_size": 63488 00:16:33.408 } 00:16:33.408 ] 00:16:33.408 }' 00:16:33.408 18:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:33.408 18:43:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.974 18:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:34.232 [2024-07-25 18:43:34.588011] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:34.232 [2024-07-25 18:43:34.588324] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:34.232 [2024-07-25 18:43:34.591115] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:34.232 [2024-07-25 18:43:34.591267] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.232 [2024-07-25 18:43:34.591334] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:34.232 [2024-07-25 18:43:34.591535] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:16:34.232 0 00:16:34.232 18:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 122697 00:16:34.232 18:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 122697 ']' 00:16:34.232 18:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 122697 00:16:34.232 18:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:16:34.232 18:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:16:34.232 18:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 122697 00:16:34.232 18:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:34.232 18:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:34.232 18:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 122697' 00:16:34.232 killing process with pid 122697 00:16:34.232 18:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 122697 00:16:34.232 18:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 122697 00:16:34.232 [2024-07-25 18:43:34.637456] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:34.232 [2024-07-25 18:43:34.783337] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:36.132 18:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.CSXZUVSd87 00:16:36.132 18:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:16:36.132 18:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:16:36.132 ************************************ 00:16:36.132 END TEST raid_read_error_test 00:16:36.132 ************************************ 00:16:36.132 18:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.49 00:16:36.132 18:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:16:36.132 18:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:36.132 18:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:36.132 18:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.49 != \0\.\0\0 ]] 00:16:36.132 00:16:36.132 real 0m7.152s 00:16:36.132 user 0m10.008s 00:16:36.132 sys 0m1.097s 00:16:36.132 18:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:36.132 18:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.132 18:43:36 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:16:36.132 18:43:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:36.132 18:43:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:36.132 18:43:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:36.132 ************************************ 00:16:36.132 START TEST raid_write_error_test 00:16:36.132 ************************************ 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev1 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev2 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.BrzfBRk7Xy 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=122889 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 122889 /var/tmp/spdk-raid.sock 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 122889 ']' 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:36.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:36.132 18:43:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.132 [2024-07-25 18:43:36.474523] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:16:36.132 [2024-07-25 18:43:36.475684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122889 ] 00:16:36.132 [2024-07-25 18:43:36.662081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.390 [2024-07-25 18:43:36.897603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.649 [2024-07-25 18:43:37.163803] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.907 18:43:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:36.907 18:43:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:16:36.907 18:43:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:36.907 18:43:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:37.165 BaseBdev1_malloc 00:16:37.165 18:43:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:37.165 true 00:16:37.165 18:43:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:37.423 [2024-07-25 18:43:37.873731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:37.423 [2024-07-25 18:43:37.874096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.423 [2024-07-25 18:43:37.874175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:16:37.423 [2024-07-25 18:43:37.874290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.423 [2024-07-25 18:43:37.877011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.423 [2024-07-25 18:43:37.877180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:37.423 BaseBdev1 00:16:37.423 18:43:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:37.423 18:43:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:37.681 BaseBdev2_malloc 00:16:37.681 18:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:37.938 true 00:16:37.938 18:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:38.196 [2024-07-25 18:43:38.675640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:38.196 [2024-07-25 18:43:38.675949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.196 [2024-07-25 18:43:38.676034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:38.196 [2024-07-25 
18:43:38.676266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.196 [2024-07-25 18:43:38.678959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.196 [2024-07-25 18:43:38.679168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:38.196 BaseBdev2 00:16:38.196 18:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:38.454 [2024-07-25 18:43:38.851747] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:38.454 [2024-07-25 18:43:38.854146] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:38.454 [2024-07-25 18:43:38.854501] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:16:38.454 [2024-07-25 18:43:38.854613] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:38.454 [2024-07-25 18:43:38.854789] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:16:38.454 [2024-07-25 18:43:38.855237] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:16:38.454 [2024-07-25 18:43:38.855279] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:16:38.454 [2024-07-25 18:43:38.855688] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.454 18:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:38.454 18:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:38.454 18:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:38.454 18:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:38.454 18:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:38.454 18:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:38.454 18:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:38.454 18:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:38.454 18:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:38.454 18:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:38.454 18:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.454 18:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.711 18:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:38.711 "name": "raid_bdev1", 00:16:38.711 "uuid": "644f8075-15fe-4fbd-954c-3c56d558002d", 00:16:38.711 "strip_size_kb": 64, 00:16:38.711 "state": "online", 00:16:38.711 "raid_level": "concat", 00:16:38.711 "superblock": true, 00:16:38.711 "num_base_bdevs": 2, 00:16:38.711 "num_base_bdevs_discovered": 2, 00:16:38.711 "num_base_bdevs_operational": 2, 00:16:38.711 "base_bdevs_list": [ 00:16:38.711 { 
00:16:38.711 "name": "BaseBdev1", 00:16:38.711 "uuid": "22a62775-f13d-56a4-a58c-d7e50a87a834", 00:16:38.711 "is_configured": true, 00:16:38.711 "data_offset": 2048, 00:16:38.711 "data_size": 63488 00:16:38.711 }, 00:16:38.711 { 00:16:38.711 "name": "BaseBdev2", 00:16:38.711 "uuid": "06fa76e6-e0a0-532e-99ed-4773f9824cad", 00:16:38.711 "is_configured": true, 00:16:38.711 "data_offset": 2048, 00:16:38.711 "data_size": 63488 00:16:38.711 } 00:16:38.711 ] 00:16:38.711 }' 00:16:38.711 18:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:38.711 18:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.276 18:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:16:39.276 18:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:39.276 [2024-07-25 18:43:39.717501] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:40.248 18:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:40.516 18:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:16:40.516 18:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:16:40.516 18:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:16:40.516 18:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:40.516 18:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:40.516 18:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:40.516 18:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:40.516 18:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:40.516 18:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:40.516 18:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:40.516 18:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:40.516 18:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:40.516 18:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:40.516 18:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.516 18:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.775 18:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:40.775 "name": "raid_bdev1", 00:16:40.775 "uuid": "644f8075-15fe-4fbd-954c-3c56d558002d", 00:16:40.775 "strip_size_kb": 64, 00:16:40.775 "state": "online", 00:16:40.775 "raid_level": "concat", 00:16:40.775 "superblock": true, 00:16:40.775 "num_base_bdevs": 2, 00:16:40.775 "num_base_bdevs_discovered": 2, 00:16:40.775 "num_base_bdevs_operational": 2, 00:16:40.775 "base_bdevs_list": [ 00:16:40.775 { 
00:16:40.775 "name": "BaseBdev1", 00:16:40.775 "uuid": "22a62775-f13d-56a4-a58c-d7e50a87a834", 00:16:40.775 "is_configured": true, 00:16:40.775 "data_offset": 2048, 00:16:40.775 "data_size": 63488 00:16:40.775 }, 00:16:40.775 { 00:16:40.775 "name": "BaseBdev2", 00:16:40.775 "uuid": "06fa76e6-e0a0-532e-99ed-4773f9824cad", 00:16:40.775 "is_configured": true, 00:16:40.775 "data_offset": 2048, 00:16:40.775 "data_size": 63488 00:16:40.775 } 00:16:40.775 ] 00:16:40.775 }' 00:16:40.775 18:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:40.775 18:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.342 18:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:41.601 [2024-07-25 18:43:41.937219] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:41.601 [2024-07-25 18:43:41.937518] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:41.601 [2024-07-25 18:43:41.940293] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.601 [2024-07-25 18:43:41.940456] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.601 [2024-07-25 18:43:41.940525] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.601 [2024-07-25 18:43:41.940597] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:16:41.601 0 00:16:41.601 18:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 122889 00:16:41.601 18:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 122889 ']' 00:16:41.601 18:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 122889 00:16:41.601 18:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:16:41.601 18:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:41.601 18:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 122889 00:16:41.601 18:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:41.601 18:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:41.601 18:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 122889' 00:16:41.601 killing process with pid 122889 00:16:41.601 18:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 122889 00:16:41.601 [2024-07-25 18:43:41.991551] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:41.601 18:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 122889 00:16:41.601 [2024-07-25 18:43:42.132988] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:43.503 18:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.BrzfBRk7Xy 00:16:43.503 18:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:16:43.503 18:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:16:43.503 ************************************ 00:16:43.503 END TEST raid_write_error_test 
00:16:43.503 ************************************ 00:16:43.503 18:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.45 00:16:43.503 18:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:16:43.503 18:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:43.503 18:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:43.503 18:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.45 != \0\.\0\0 ]] 00:16:43.503 00:16:43.503 real 0m7.284s 00:16:43.503 user 0m10.216s 00:16:43.503 sys 0m1.132s 00:16:43.503 18:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:43.503 18:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.503 18:43:43 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:16:43.503 18:43:43 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:16:43.503 18:43:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:43.503 18:43:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:43.503 18:43:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:43.503 ************************************ 00:16:43.503 START TEST raid_state_function_test 00:16:43.503 ************************************ 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local 
superblock_create_arg 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:43.503 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=123075 00:16:43.504 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 123075' 00:16:43.504 Process raid pid: 123075 00:16:43.504 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 123075 /var/tmp/spdk-raid.sock 00:16:43.504 18:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:43.504 18:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 123075 ']' 00:16:43.504 18:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:43.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:43.504 18:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:43.504 18:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:43.504 18:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:43.504 18:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.504 [2024-07-25 18:43:43.833877] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:16:43.504 [2024-07-25 18:43:43.834268] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.504 [2024-07-25 18:43:44.018736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.762 [2024-07-25 18:43:44.213894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.021 [2024-07-25 18:43:44.403482] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:44.279 18:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:44.279 18:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:16:44.279 18:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:44.538 [2024-07-25 18:43:44.921726] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:44.538 [2024-07-25 18:43:44.922050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:44.538 [2024-07-25 18:43:44.922156] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:44.538 [2024-07-25 18:43:44.922223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:44.538 18:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:44.538 18:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:44.538 18:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:44.538 18:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:44.538 18:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:44.538 18:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:44.538 18:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:44.538 18:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:44.538 18:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:44.538 18:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:44.538 18:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.538 18:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.796 18:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:44.796 "name": "Existed_Raid", 00:16:44.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.796 "strip_size_kb": 0, 00:16:44.796 "state": "configuring", 00:16:44.796 "raid_level": "raid1", 00:16:44.796 "superblock": false, 00:16:44.796 "num_base_bdevs": 2, 00:16:44.796 "num_base_bdevs_discovered": 0, 00:16:44.796 "num_base_bdevs_operational": 2, 00:16:44.796 "base_bdevs_list": [ 
00:16:44.796 { 00:16:44.796 "name": "BaseBdev1", 00:16:44.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.796 "is_configured": false, 00:16:44.796 "data_offset": 0, 00:16:44.796 "data_size": 0 00:16:44.796 }, 00:16:44.796 { 00:16:44.796 "name": "BaseBdev2", 00:16:44.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.796 "is_configured": false, 00:16:44.796 "data_offset": 0, 00:16:44.796 "data_size": 0 00:16:44.796 } 00:16:44.796 ] 00:16:44.796 }' 00:16:44.796 18:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:44.796 18:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.364 18:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:45.364 [2024-07-25 18:43:45.885805] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:45.364 [2024-07-25 18:43:45.886042] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:16:45.364 18:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:45.623 [2024-07-25 18:43:46.165912] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:45.623 [2024-07-25 18:43:46.166108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:45.623 [2024-07-25 18:43:46.166199] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:45.623 [2024-07-25 18:43:46.166262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:45.623 18:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:45.881 [2024-07-25 18:43:46.456464] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:46.140 BaseBdev1 00:16:46.140 18:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:46.140 18:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:46.140 18:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:46.140 18:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:46.140 18:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:46.140 18:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:46.140 18:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:46.399 18:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:46.399 [ 00:16:46.399 { 00:16:46.399 "name": "BaseBdev1", 00:16:46.399 "aliases": [ 00:16:46.399 "ea25e63c-fef7-4ee3-b6d9-dd6f9671bcd6" 00:16:46.399 ], 00:16:46.399 "product_name": "Malloc disk", 00:16:46.399 "block_size": 512, 00:16:46.399 "num_blocks": 
65536, 00:16:46.399 "uuid": "ea25e63c-fef7-4ee3-b6d9-dd6f9671bcd6", 00:16:46.399 "assigned_rate_limits": { 00:16:46.399 "rw_ios_per_sec": 0, 00:16:46.399 "rw_mbytes_per_sec": 0, 00:16:46.399 "r_mbytes_per_sec": 0, 00:16:46.399 "w_mbytes_per_sec": 0 00:16:46.399 }, 00:16:46.399 "claimed": true, 00:16:46.399 "claim_type": "exclusive_write", 00:16:46.399 "zoned": false, 00:16:46.399 "supported_io_types": { 00:16:46.399 "read": true, 00:16:46.399 "write": true, 00:16:46.399 "unmap": true, 00:16:46.399 "flush": true, 00:16:46.399 "reset": true, 00:16:46.399 "nvme_admin": false, 00:16:46.399 "nvme_io": false, 00:16:46.399 "nvme_io_md": false, 00:16:46.399 "write_zeroes": true, 00:16:46.399 "zcopy": true, 00:16:46.399 "get_zone_info": false, 00:16:46.399 "zone_management": false, 00:16:46.399 "zone_append": false, 00:16:46.399 "compare": false, 00:16:46.399 "compare_and_write": false, 00:16:46.399 "abort": true, 00:16:46.399 "seek_hole": false, 00:16:46.399 "seek_data": false, 00:16:46.399 "copy": true, 00:16:46.399 "nvme_iov_md": false 00:16:46.399 }, 00:16:46.399 "memory_domains": [ 00:16:46.399 { 00:16:46.399 "dma_device_id": "system", 00:16:46.399 "dma_device_type": 1 00:16:46.399 }, 00:16:46.399 { 00:16:46.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.399 "dma_device_type": 2 00:16:46.399 } 00:16:46.399 ], 00:16:46.399 "driver_specific": {} 00:16:46.399 } 00:16:46.399 ] 00:16:46.399 18:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:46.399 18:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:46.399 18:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:46.399 18:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:46.399 18:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:46.399 18:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:46.399 18:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:46.399 18:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:46.399 18:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:46.399 18:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:46.399 18:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:46.399 18:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.399 18:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.660 18:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:46.660 "name": "Existed_Raid", 00:16:46.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.660 "strip_size_kb": 0, 00:16:46.660 "state": "configuring", 00:16:46.660 "raid_level": "raid1", 00:16:46.660 "superblock": false, 00:16:46.660 "num_base_bdevs": 2, 00:16:46.660 "num_base_bdevs_discovered": 1, 00:16:46.660 "num_base_bdevs_operational": 2, 00:16:46.660 "base_bdevs_list": [ 00:16:46.660 { 00:16:46.660 "name": "BaseBdev1", 00:16:46.660 "uuid": 
"ea25e63c-fef7-4ee3-b6d9-dd6f9671bcd6", 00:16:46.660 "is_configured": true, 00:16:46.660 "data_offset": 0, 00:16:46.660 "data_size": 65536 00:16:46.660 }, 00:16:46.660 { 00:16:46.660 "name": "BaseBdev2", 00:16:46.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.660 "is_configured": false, 00:16:46.660 "data_offset": 0, 00:16:46.660 "data_size": 0 00:16:46.660 } 00:16:46.660 ] 00:16:46.660 }' 00:16:46.660 18:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:46.660 18:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.228 18:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:47.487 [2024-07-25 18:43:47.900786] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:47.487 [2024-07-25 18:43:47.901017] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:16:47.487 18:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:47.747 [2024-07-25 18:43:48.156823] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:47.747 [2024-07-25 18:43:48.159177] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:47.747 [2024-07-25 18:43:48.159358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:47.747 18:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:47.747 18:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:47.747 18:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:47.747 18:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:47.747 18:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:47.747 18:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:47.747 18:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:47.747 18:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:47.747 18:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:47.747 18:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:47.747 18:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:47.747 18:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:47.747 18:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.747 18:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.006 18:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:48.006 "name": "Existed_Raid", 00:16:48.006 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:48.006 "strip_size_kb": 0, 00:16:48.006 "state": "configuring", 00:16:48.006 "raid_level": "raid1", 00:16:48.006 "superblock": false, 00:16:48.006 "num_base_bdevs": 2, 00:16:48.006 "num_base_bdevs_discovered": 1, 00:16:48.006 "num_base_bdevs_operational": 2, 00:16:48.006 "base_bdevs_list": [ 00:16:48.006 { 00:16:48.006 "name": "BaseBdev1", 00:16:48.006 "uuid": "ea25e63c-fef7-4ee3-b6d9-dd6f9671bcd6", 00:16:48.006 "is_configured": true, 00:16:48.006 "data_offset": 0, 00:16:48.006 "data_size": 65536 00:16:48.006 }, 00:16:48.006 { 00:16:48.006 "name": "BaseBdev2", 00:16:48.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.006 "is_configured": false, 00:16:48.006 "data_offset": 0, 00:16:48.006 "data_size": 0 00:16:48.006 } 00:16:48.006 ] 00:16:48.006 }' 00:16:48.006 18:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:48.006 18:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.574 18:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:48.834 [2024-07-25 18:43:49.245691] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:48.834 [2024-07-25 18:43:49.246061] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:16:48.834 [2024-07-25 18:43:49.246107] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:48.834 [2024-07-25 18:43:49.246330] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:16:48.834 [2024-07-25 18:43:49.246803] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:16:48.834 [2024-07-25 18:43:49.246927] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:16:48.834 [2024-07-25 18:43:49.247300] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.834 BaseBdev2 00:16:48.834 18:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:48.834 18:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:48.834 18:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:48.834 18:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:48.834 18:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:48.834 18:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:48.834 18:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:49.093 18:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:49.352 [ 00:16:49.352 { 00:16:49.352 "name": "BaseBdev2", 00:16:49.352 "aliases": [ 00:16:49.352 "182d116a-cdf5-42f0-97ba-01fed9d308c9" 00:16:49.352 ], 00:16:49.352 "product_name": "Malloc disk", 00:16:49.352 "block_size": 512, 00:16:49.352 "num_blocks": 65536, 00:16:49.352 "uuid": "182d116a-cdf5-42f0-97ba-01fed9d308c9", 00:16:49.352 
"assigned_rate_limits": { 00:16:49.352 "rw_ios_per_sec": 0, 00:16:49.352 "rw_mbytes_per_sec": 0, 00:16:49.352 "r_mbytes_per_sec": 0, 00:16:49.352 "w_mbytes_per_sec": 0 00:16:49.352 }, 00:16:49.352 "claimed": true, 00:16:49.352 "claim_type": "exclusive_write", 00:16:49.352 "zoned": false, 00:16:49.352 "supported_io_types": { 00:16:49.352 "read": true, 00:16:49.352 "write": true, 00:16:49.352 "unmap": true, 00:16:49.352 "flush": true, 00:16:49.352 "reset": true, 00:16:49.352 "nvme_admin": false, 00:16:49.352 "nvme_io": false, 00:16:49.352 "nvme_io_md": false, 00:16:49.352 "write_zeroes": true, 00:16:49.352 "zcopy": true, 00:16:49.352 "get_zone_info": false, 00:16:49.352 "zone_management": false, 00:16:49.352 "zone_append": false, 00:16:49.352 "compare": false, 00:16:49.352 "compare_and_write": false, 00:16:49.352 "abort": true, 00:16:49.352 "seek_hole": false, 00:16:49.352 "seek_data": false, 00:16:49.352 "copy": true, 00:16:49.352 "nvme_iov_md": false 00:16:49.352 }, 00:16:49.352 "memory_domains": [ 00:16:49.352 { 00:16:49.352 "dma_device_id": "system", 00:16:49.352 "dma_device_type": 1 00:16:49.352 }, 00:16:49.352 { 00:16:49.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.352 "dma_device_type": 2 00:16:49.352 } 00:16:49.352 ], 00:16:49.352 "driver_specific": {} 00:16:49.352 } 00:16:49.352 ] 00:16:49.352 18:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:49.352 18:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:49.352 18:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:49.352 18:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:49.352 18:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:49.352 18:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:49.352 18:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:49.352 18:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:49.352 18:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:49.352 18:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:49.352 18:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:49.352 18:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:49.352 18:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:49.352 18:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.352 18:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.611 18:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:49.611 "name": "Existed_Raid", 00:16:49.611 "uuid": "6ee23bc9-9726-47d9-b058-3845e6c6483f", 00:16:49.611 "strip_size_kb": 0, 00:16:49.611 "state": "online", 00:16:49.611 "raid_level": "raid1", 00:16:49.611 "superblock": false, 00:16:49.611 "num_base_bdevs": 2, 00:16:49.611 "num_base_bdevs_discovered": 2, 00:16:49.611 "num_base_bdevs_operational": 
2, 00:16:49.611 "base_bdevs_list": [ 00:16:49.611 { 00:16:49.611 "name": "BaseBdev1", 00:16:49.611 "uuid": "ea25e63c-fef7-4ee3-b6d9-dd6f9671bcd6", 00:16:49.611 "is_configured": true, 00:16:49.611 "data_offset": 0, 00:16:49.611 "data_size": 65536 00:16:49.611 }, 00:16:49.611 { 00:16:49.611 "name": "BaseBdev2", 00:16:49.611 "uuid": "182d116a-cdf5-42f0-97ba-01fed9d308c9", 00:16:49.611 "is_configured": true, 00:16:49.611 "data_offset": 0, 00:16:49.611 "data_size": 65536 00:16:49.611 } 00:16:49.611 ] 00:16:49.611 }' 00:16:49.611 18:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:49.611 18:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.179 18:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:50.179 18:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:50.179 18:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:50.179 18:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:50.179 18:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:50.179 18:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:50.179 18:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:50.179 18:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:50.179 [2024-07-25 18:43:50.750334] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.438 18:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:50.439 "name": "Existed_Raid", 00:16:50.439 "aliases": [ 00:16:50.439 "6ee23bc9-9726-47d9-b058-3845e6c6483f" 00:16:50.439 ], 00:16:50.439 "product_name": "Raid Volume", 00:16:50.439 "block_size": 512, 00:16:50.439 "num_blocks": 65536, 00:16:50.439 "uuid": "6ee23bc9-9726-47d9-b058-3845e6c6483f", 00:16:50.439 "assigned_rate_limits": { 00:16:50.439 "rw_ios_per_sec": 0, 00:16:50.439 "rw_mbytes_per_sec": 0, 00:16:50.439 "r_mbytes_per_sec": 0, 00:16:50.439 "w_mbytes_per_sec": 0 00:16:50.439 }, 00:16:50.439 "claimed": false, 00:16:50.439 "zoned": false, 00:16:50.439 "supported_io_types": { 00:16:50.439 "read": true, 00:16:50.439 "write": true, 00:16:50.439 "unmap": false, 00:16:50.439 "flush": false, 00:16:50.439 "reset": true, 00:16:50.439 "nvme_admin": false, 00:16:50.439 "nvme_io": false, 00:16:50.439 "nvme_io_md": false, 00:16:50.439 "write_zeroes": true, 00:16:50.439 "zcopy": false, 00:16:50.439 "get_zone_info": false, 00:16:50.439 "zone_management": false, 00:16:50.439 "zone_append": false, 00:16:50.439 "compare": false, 00:16:50.439 "compare_and_write": false, 00:16:50.439 "abort": false, 00:16:50.439 "seek_hole": false, 00:16:50.439 "seek_data": false, 00:16:50.439 "copy": false, 00:16:50.439 "nvme_iov_md": false 00:16:50.439 }, 00:16:50.439 "memory_domains": [ 00:16:50.439 { 00:16:50.439 "dma_device_id": "system", 00:16:50.439 "dma_device_type": 1 00:16:50.439 }, 00:16:50.439 { 00:16:50.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.439 "dma_device_type": 2 00:16:50.439 }, 00:16:50.439 { 00:16:50.439 "dma_device_id": "system", 00:16:50.439 "dma_device_type": 1 00:16:50.439 }, 00:16:50.439 { 00:16:50.439 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.439 "dma_device_type": 2 00:16:50.439 } 00:16:50.439 ], 00:16:50.439 "driver_specific": { 00:16:50.439 "raid": { 00:16:50.439 "uuid": "6ee23bc9-9726-47d9-b058-3845e6c6483f", 00:16:50.439 "strip_size_kb": 0, 00:16:50.439 "state": "online", 00:16:50.439 "raid_level": "raid1", 00:16:50.439 "superblock": false, 00:16:50.439 "num_base_bdevs": 2, 00:16:50.439 "num_base_bdevs_discovered": 2, 00:16:50.439 "num_base_bdevs_operational": 2, 00:16:50.439 "base_bdevs_list": [ 00:16:50.439 { 00:16:50.439 "name": "BaseBdev1", 00:16:50.439 "uuid": "ea25e63c-fef7-4ee3-b6d9-dd6f9671bcd6", 00:16:50.439 "is_configured": true, 00:16:50.439 "data_offset": 0, 00:16:50.439 "data_size": 65536 00:16:50.439 }, 00:16:50.439 { 00:16:50.439 "name": "BaseBdev2", 00:16:50.439 "uuid": "182d116a-cdf5-42f0-97ba-01fed9d308c9", 00:16:50.439 "is_configured": true, 00:16:50.439 "data_offset": 0, 00:16:50.439 "data_size": 65536 00:16:50.439 } 00:16:50.439 ] 00:16:50.439 } 00:16:50.439 } 00:16:50.439 }' 00:16:50.439 18:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:50.439 18:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:50.439 BaseBdev2' 00:16:50.439 18:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:50.439 18:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:50.439 18:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:50.439 18:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:50.439 "name": "BaseBdev1", 00:16:50.439 "aliases": [ 00:16:50.439 "ea25e63c-fef7-4ee3-b6d9-dd6f9671bcd6" 00:16:50.439 ], 00:16:50.439 "product_name": "Malloc disk", 00:16:50.439 "block_size": 512, 00:16:50.439 "num_blocks": 65536, 00:16:50.439 "uuid": "ea25e63c-fef7-4ee3-b6d9-dd6f9671bcd6", 00:16:50.439 "assigned_rate_limits": { 00:16:50.439 "rw_ios_per_sec": 0, 00:16:50.439 "rw_mbytes_per_sec": 0, 00:16:50.439 "r_mbytes_per_sec": 0, 00:16:50.439 "w_mbytes_per_sec": 0 00:16:50.439 }, 00:16:50.439 "claimed": true, 00:16:50.439 "claim_type": "exclusive_write", 00:16:50.439 "zoned": false, 00:16:50.439 "supported_io_types": { 00:16:50.439 "read": true, 00:16:50.439 "write": true, 00:16:50.439 "unmap": true, 00:16:50.439 "flush": true, 00:16:50.439 "reset": true, 00:16:50.439 "nvme_admin": false, 00:16:50.439 "nvme_io": false, 00:16:50.439 "nvme_io_md": false, 00:16:50.439 "write_zeroes": true, 00:16:50.439 "zcopy": true, 00:16:50.439 "get_zone_info": false, 00:16:50.439 "zone_management": false, 00:16:50.439 "zone_append": false, 00:16:50.439 "compare": false, 00:16:50.439 "compare_and_write": false, 00:16:50.439 "abort": true, 00:16:50.439 "seek_hole": false, 00:16:50.439 "seek_data": false, 00:16:50.439 "copy": true, 00:16:50.439 "nvme_iov_md": false 00:16:50.439 }, 00:16:50.439 "memory_domains": [ 00:16:50.439 { 00:16:50.439 "dma_device_id": "system", 00:16:50.439 "dma_device_type": 1 00:16:50.439 }, 00:16:50.439 { 00:16:50.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.439 "dma_device_type": 2 00:16:50.439 } 00:16:50.439 ], 00:16:50.439 "driver_specific": {} 00:16:50.439 }' 00:16:50.439 18:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:16:50.698 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:50.698 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:50.698 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:50.698 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:50.698 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:50.698 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:50.698 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:50.698 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:50.698 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:50.698 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:50.957 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:50.957 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:50.957 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:50.957 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:50.957 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:50.957 "name": "BaseBdev2", 00:16:50.957 "aliases": [ 00:16:50.957 "182d116a-cdf5-42f0-97ba-01fed9d308c9" 00:16:50.957 ], 00:16:50.957 "product_name": "Malloc disk", 00:16:50.957 "block_size": 512, 00:16:50.957 "num_blocks": 65536, 00:16:50.957 "uuid": "182d116a-cdf5-42f0-97ba-01fed9d308c9", 00:16:50.957 "assigned_rate_limits": { 00:16:50.957 "rw_ios_per_sec": 0, 00:16:50.957 "rw_mbytes_per_sec": 0, 00:16:50.957 "r_mbytes_per_sec": 0, 00:16:50.957 "w_mbytes_per_sec": 0 00:16:50.957 }, 00:16:50.957 "claimed": true, 00:16:50.957 "claim_type": "exclusive_write", 00:16:50.957 "zoned": false, 00:16:50.957 "supported_io_types": { 00:16:50.957 "read": true, 00:16:50.957 "write": true, 00:16:50.957 "unmap": true, 00:16:50.957 "flush": true, 00:16:50.957 "reset": true, 00:16:50.957 "nvme_admin": false, 00:16:50.957 "nvme_io": false, 00:16:50.957 "nvme_io_md": false, 00:16:50.957 "write_zeroes": true, 00:16:50.957 "zcopy": true, 00:16:50.957 "get_zone_info": false, 00:16:50.957 "zone_management": false, 00:16:50.957 "zone_append": false, 00:16:50.957 "compare": false, 00:16:50.957 "compare_and_write": false, 00:16:50.957 "abort": true, 00:16:50.957 "seek_hole": false, 00:16:50.957 "seek_data": false, 00:16:50.957 "copy": true, 00:16:50.957 "nvme_iov_md": false 00:16:50.957 }, 00:16:50.957 "memory_domains": [ 00:16:50.957 { 00:16:50.957 "dma_device_id": "system", 00:16:50.957 "dma_device_type": 1 00:16:50.957 }, 00:16:50.957 { 00:16:50.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.957 "dma_device_type": 2 00:16:50.957 } 00:16:50.957 ], 00:16:50.957 "driver_specific": {} 00:16:50.957 }' 00:16:50.957 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:50.957 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:51.216 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
[[ 512 == 512 ]] 00:16:51.216 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:51.216 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:51.216 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:51.216 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:51.216 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:51.216 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:51.216 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:51.475 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:51.475 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:51.475 18:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:51.733 [2024-07-25 18:43:52.062821] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:51.733 18:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:51.733 18:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:51.733 18:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:51.733 18:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:51.733 18:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:51.733 18:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:51.733 18:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:51.733 18:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:51.733 18:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:51.733 18:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:51.733 18:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:51.733 18:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:51.733 18:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:51.733 18:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:51.733 18:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:51.733 18:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.733 18:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.992 18:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:51.992 "name": "Existed_Raid", 00:16:51.992 "uuid": "6ee23bc9-9726-47d9-b058-3845e6c6483f", 00:16:51.992 "strip_size_kb": 0, 00:16:51.992 "state": "online", 00:16:51.992 "raid_level": "raid1", 00:16:51.992 "superblock": false, 
00:16:51.992 "num_base_bdevs": 2, 00:16:51.992 "num_base_bdevs_discovered": 1, 00:16:51.992 "num_base_bdevs_operational": 1, 00:16:51.992 "base_bdevs_list": [ 00:16:51.992 { 00:16:51.992 "name": null, 00:16:51.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.992 "is_configured": false, 00:16:51.992 "data_offset": 0, 00:16:51.992 "data_size": 65536 00:16:51.992 }, 00:16:51.992 { 00:16:51.992 "name": "BaseBdev2", 00:16:51.992 "uuid": "182d116a-cdf5-42f0-97ba-01fed9d308c9", 00:16:51.992 "is_configured": true, 00:16:51.992 "data_offset": 0, 00:16:51.992 "data_size": 65536 00:16:51.992 } 00:16:51.992 ] 00:16:51.992 }' 00:16:51.992 18:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:51.992 18:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.557 18:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:52.557 18:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:52.557 18:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.557 18:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:52.814 18:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:52.814 18:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:52.814 18:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:52.814 [2024-07-25 18:43:53.363644] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:52.814 [2024-07-25 18:43:53.363946] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:53.073 [2024-07-25 18:43:53.444527] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:53.073 [2024-07-25 18:43:53.444859] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:53.073 [2024-07-25 18:43:53.445002] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:16:53.073 18:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:53.073 18:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:53.073 18:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.073 18:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:53.331 18:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:53.331 18:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:53.331 18:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:53.331 18:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 123075 00:16:53.331 18:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 123075 ']' 00:16:53.331 18:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # 
kill -0 123075 00:16:53.331 18:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:16:53.331 18:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:53.331 18:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 123075 00:16:53.331 18:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:53.331 18:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:53.331 18:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 123075' 00:16:53.331 killing process with pid 123075 00:16:53.331 18:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 123075 00:16:53.331 18:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 123075 00:16:53.331 [2024-07-25 18:43:53.736745] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:53.331 [2024-07-25 18:43:53.736881] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:54.707 ************************************ 00:16:54.707 END TEST raid_state_function_test 00:16:54.707 ************************************ 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:54.707 00:16:54.707 real 0m11.171s 00:16:54.707 user 0m18.868s 00:16:54.707 sys 0m1.840s 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.707 18:43:54 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:16:54.707 18:43:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:54.707 18:43:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:54.707 18:43:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:54.707 ************************************ 00:16:54.707 START TEST raid_state_function_test_sb 00:16:54.707 ************************************ 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:16:54.707 18:43:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=123451 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 123451' 00:16:54.707 Process raid pid: 123451 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 123451 /var/tmp/spdk-raid.sock 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 123451 ']' 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:54.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:54.707 18:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.707 [2024-07-25 18:43:55.071227] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:16:54.707 [2024-07-25 18:43:55.071728] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.707 [2024-07-25 18:43:55.258385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.967 [2024-07-25 18:43:55.470477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.226 [2024-07-25 18:43:55.661083] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.485 18:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:55.485 18:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:55.485 18:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:55.744 [2024-07-25 18:43:56.131581] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:55.744 [2024-07-25 18:43:56.131880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:55.744 [2024-07-25 18:43:56.131992] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:55.744 [2024-07-25 18:43:56.132059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:55.744 18:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:55.744 18:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:55.744 18:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:55.744 18:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:55.744 18:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:55.744 18:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:55.744 18:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:55.744 18:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:55.744 18:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:55.744 18:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:55.744 18:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.744 18:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.002 18:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:56.002 "name": "Existed_Raid", 00:16:56.002 "uuid": "556efd65-8439-463e-9651-ce662f386e1f", 00:16:56.002 "strip_size_kb": 0, 00:16:56.002 "state": "configuring", 00:16:56.002 "raid_level": "raid1", 00:16:56.002 "superblock": true, 00:16:56.002 "num_base_bdevs": 2, 00:16:56.002 "num_base_bdevs_discovered": 0, 00:16:56.002 
"num_base_bdevs_operational": 2, 00:16:56.002 "base_bdevs_list": [ 00:16:56.002 { 00:16:56.002 "name": "BaseBdev1", 00:16:56.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.002 "is_configured": false, 00:16:56.002 "data_offset": 0, 00:16:56.002 "data_size": 0 00:16:56.003 }, 00:16:56.003 { 00:16:56.003 "name": "BaseBdev2", 00:16:56.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.003 "is_configured": false, 00:16:56.003 "data_offset": 0, 00:16:56.003 "data_size": 0 00:16:56.003 } 00:16:56.003 ] 00:16:56.003 }' 00:16:56.003 18:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:56.003 18:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.570 18:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:56.828 [2024-07-25 18:43:57.151669] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:56.828 [2024-07-25 18:43:57.151881] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:16:56.828 18:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:56.828 [2024-07-25 18:43:57.327723] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:56.828 [2024-07-25 18:43:57.327980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:56.828 [2024-07-25 18:43:57.328122] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:56.828 [2024-07-25 18:43:57.328185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:56.828 18:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:57.087 [2024-07-25 18:43:57.610172] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:57.087 BaseBdev1 00:16:57.087 18:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:57.087 18:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:57.087 18:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:57.087 18:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:57.087 18:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:57.087 18:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:57.087 18:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:57.345 18:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:57.606 [ 00:16:57.606 { 00:16:57.606 "name": "BaseBdev1", 00:16:57.606 "aliases": [ 00:16:57.606 "d208e9d0-6b0b-474a-9009-e1f0f27e6420" 
00:16:57.606 ], 00:16:57.606 "product_name": "Malloc disk", 00:16:57.606 "block_size": 512, 00:16:57.606 "num_blocks": 65536, 00:16:57.606 "uuid": "d208e9d0-6b0b-474a-9009-e1f0f27e6420", 00:16:57.606 "assigned_rate_limits": { 00:16:57.606 "rw_ios_per_sec": 0, 00:16:57.606 "rw_mbytes_per_sec": 0, 00:16:57.606 "r_mbytes_per_sec": 0, 00:16:57.606 "w_mbytes_per_sec": 0 00:16:57.606 }, 00:16:57.606 "claimed": true, 00:16:57.606 "claim_type": "exclusive_write", 00:16:57.606 "zoned": false, 00:16:57.606 "supported_io_types": { 00:16:57.606 "read": true, 00:16:57.606 "write": true, 00:16:57.606 "unmap": true, 00:16:57.606 "flush": true, 00:16:57.606 "reset": true, 00:16:57.606 "nvme_admin": false, 00:16:57.606 "nvme_io": false, 00:16:57.606 "nvme_io_md": false, 00:16:57.606 "write_zeroes": true, 00:16:57.606 "zcopy": true, 00:16:57.606 "get_zone_info": false, 00:16:57.606 "zone_management": false, 00:16:57.606 "zone_append": false, 00:16:57.606 "compare": false, 00:16:57.606 "compare_and_write": false, 00:16:57.606 "abort": true, 00:16:57.606 "seek_hole": false, 00:16:57.606 "seek_data": false, 00:16:57.606 "copy": true, 00:16:57.606 "nvme_iov_md": false 00:16:57.606 }, 00:16:57.606 "memory_domains": [ 00:16:57.606 { 00:16:57.606 "dma_device_id": "system", 00:16:57.606 "dma_device_type": 1 00:16:57.606 }, 00:16:57.606 { 00:16:57.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.606 "dma_device_type": 2 00:16:57.606 } 00:16:57.606 ], 00:16:57.606 "driver_specific": {} 00:16:57.606 } 00:16:57.606 ] 00:16:57.606 18:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:57.606 18:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:57.606 18:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:57.606 18:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:57.606 18:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:57.606 18:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:57.606 18:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:57.606 18:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:57.606 18:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:57.606 18:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:57.606 18:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:57.606 18:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.606 18:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.927 18:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.927 "name": "Existed_Raid", 00:16:57.927 "uuid": "60724536-079a-4f25-a461-043b7fd485ee", 00:16:57.927 "strip_size_kb": 0, 00:16:57.927 "state": "configuring", 00:16:57.927 "raid_level": "raid1", 00:16:57.927 "superblock": true, 00:16:57.928 "num_base_bdevs": 2, 00:16:57.928 "num_base_bdevs_discovered": 
1, 00:16:57.928 "num_base_bdevs_operational": 2, 00:16:57.928 "base_bdevs_list": [ 00:16:57.928 { 00:16:57.928 "name": "BaseBdev1", 00:16:57.928 "uuid": "d208e9d0-6b0b-474a-9009-e1f0f27e6420", 00:16:57.928 "is_configured": true, 00:16:57.928 "data_offset": 2048, 00:16:57.928 "data_size": 63488 00:16:57.928 }, 00:16:57.928 { 00:16:57.928 "name": "BaseBdev2", 00:16:57.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.928 "is_configured": false, 00:16:57.928 "data_offset": 0, 00:16:57.928 "data_size": 0 00:16:57.928 } 00:16:57.928 ] 00:16:57.928 }' 00:16:57.928 18:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.928 18:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.493 18:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:58.751 [2024-07-25 18:43:59.106503] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:58.751 [2024-07-25 18:43:59.106705] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:16:58.752 18:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:59.010 [2024-07-25 18:43:59.358590] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:59.010 [2024-07-25 18:43:59.361026] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:59.010 [2024-07-25 18:43:59.361205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:59.010 18:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:59.010 18:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:59.010 18:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:59.010 18:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:59.010 18:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:59.010 18:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:59.010 18:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:59.010 18:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:59.010 18:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:59.010 18:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:59.011 18:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:59.011 18:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:59.011 18:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.011 18:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:16:59.269 18:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:59.269 "name": "Existed_Raid", 00:16:59.269 "uuid": "ef797f0e-7266-42ad-af48-2daa84916428", 00:16:59.269 "strip_size_kb": 0, 00:16:59.269 "state": "configuring", 00:16:59.269 "raid_level": "raid1", 00:16:59.269 "superblock": true, 00:16:59.269 "num_base_bdevs": 2, 00:16:59.269 "num_base_bdevs_discovered": 1, 00:16:59.269 "num_base_bdevs_operational": 2, 00:16:59.269 "base_bdevs_list": [ 00:16:59.269 { 00:16:59.269 "name": "BaseBdev1", 00:16:59.269 "uuid": "d208e9d0-6b0b-474a-9009-e1f0f27e6420", 00:16:59.269 "is_configured": true, 00:16:59.269 "data_offset": 2048, 00:16:59.269 "data_size": 63488 00:16:59.269 }, 00:16:59.269 { 00:16:59.269 "name": "BaseBdev2", 00:16:59.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.269 "is_configured": false, 00:16:59.269 "data_offset": 0, 00:16:59.269 "data_size": 0 00:16:59.269 } 00:16:59.269 ] 00:16:59.269 }' 00:16:59.269 18:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:59.269 18:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.836 18:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:00.094 [2024-07-25 18:44:00.484724] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:00.094 [2024-07-25 18:44:00.485279] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:17:00.094 [2024-07-25 18:44:00.485400] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:00.094 [2024-07-25 18:44:00.485565] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:17:00.094 [2024-07-25 18:44:00.486082] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:17:00.094 [2024-07-25 18:44:00.486195] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:17:00.094 [2024-07-25 18:44:00.486424] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.094 BaseBdev2 00:17:00.094 18:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:00.094 18:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:00.094 18:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:00.094 18:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:00.094 18:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:00.094 18:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:00.094 18:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:00.353 18:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:00.612 [ 00:17:00.612 { 00:17:00.612 "name": "BaseBdev2", 00:17:00.612 "aliases": [ 00:17:00.612 
"289d2bc5-61d1-4c2f-b623-6cfe2cef9b72" 00:17:00.612 ], 00:17:00.612 "product_name": "Malloc disk", 00:17:00.612 "block_size": 512, 00:17:00.612 "num_blocks": 65536, 00:17:00.612 "uuid": "289d2bc5-61d1-4c2f-b623-6cfe2cef9b72", 00:17:00.612 "assigned_rate_limits": { 00:17:00.612 "rw_ios_per_sec": 0, 00:17:00.612 "rw_mbytes_per_sec": 0, 00:17:00.612 "r_mbytes_per_sec": 0, 00:17:00.612 "w_mbytes_per_sec": 0 00:17:00.612 }, 00:17:00.612 "claimed": true, 00:17:00.612 "claim_type": "exclusive_write", 00:17:00.612 "zoned": false, 00:17:00.612 "supported_io_types": { 00:17:00.612 "read": true, 00:17:00.612 "write": true, 00:17:00.612 "unmap": true, 00:17:00.612 "flush": true, 00:17:00.612 "reset": true, 00:17:00.612 "nvme_admin": false, 00:17:00.612 "nvme_io": false, 00:17:00.612 "nvme_io_md": false, 00:17:00.612 "write_zeroes": true, 00:17:00.612 "zcopy": true, 00:17:00.612 "get_zone_info": false, 00:17:00.612 "zone_management": false, 00:17:00.612 "zone_append": false, 00:17:00.612 "compare": false, 00:17:00.612 "compare_and_write": false, 00:17:00.612 "abort": true, 00:17:00.612 "seek_hole": false, 00:17:00.612 "seek_data": false, 00:17:00.612 "copy": true, 00:17:00.612 "nvme_iov_md": false 00:17:00.612 }, 00:17:00.612 "memory_domains": [ 00:17:00.612 { 00:17:00.612 "dma_device_id": "system", 00:17:00.612 "dma_device_type": 1 00:17:00.612 }, 00:17:00.612 { 00:17:00.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.612 "dma_device_type": 2 00:17:00.612 } 00:17:00.612 ], 00:17:00.612 "driver_specific": {} 00:17:00.612 } 00:17:00.612 ] 00:17:00.612 18:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:00.612 18:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:00.612 18:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:00.612 18:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:00.612 18:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:00.612 18:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:00.612 18:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:00.612 18:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:00.612 18:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:00.612 18:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:00.612 18:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:00.612 18:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:00.612 18:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:00.612 18:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.612 18:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.612 18:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:00.612 "name": "Existed_Raid", 00:17:00.612 "uuid": 
"ef797f0e-7266-42ad-af48-2daa84916428", 00:17:00.612 "strip_size_kb": 0, 00:17:00.612 "state": "online", 00:17:00.612 "raid_level": "raid1", 00:17:00.612 "superblock": true, 00:17:00.612 "num_base_bdevs": 2, 00:17:00.612 "num_base_bdevs_discovered": 2, 00:17:00.612 "num_base_bdevs_operational": 2, 00:17:00.612 "base_bdevs_list": [ 00:17:00.612 { 00:17:00.612 "name": "BaseBdev1", 00:17:00.612 "uuid": "d208e9d0-6b0b-474a-9009-e1f0f27e6420", 00:17:00.612 "is_configured": true, 00:17:00.612 "data_offset": 2048, 00:17:00.612 "data_size": 63488 00:17:00.612 }, 00:17:00.612 { 00:17:00.612 "name": "BaseBdev2", 00:17:00.612 "uuid": "289d2bc5-61d1-4c2f-b623-6cfe2cef9b72", 00:17:00.612 "is_configured": true, 00:17:00.612 "data_offset": 2048, 00:17:00.612 "data_size": 63488 00:17:00.612 } 00:17:00.612 ] 00:17:00.612 }' 00:17:00.612 18:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:00.612 18:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.548 18:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:01.548 18:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:01.548 18:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:01.548 18:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:01.548 18:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:01.548 18:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:01.548 18:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:01.548 18:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:01.548 [2024-07-25 18:44:01.977253] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:01.548 18:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:01.548 "name": "Existed_Raid", 00:17:01.548 "aliases": [ 00:17:01.548 "ef797f0e-7266-42ad-af48-2daa84916428" 00:17:01.548 ], 00:17:01.548 "product_name": "Raid Volume", 00:17:01.548 "block_size": 512, 00:17:01.548 "num_blocks": 63488, 00:17:01.548 "uuid": "ef797f0e-7266-42ad-af48-2daa84916428", 00:17:01.548 "assigned_rate_limits": { 00:17:01.548 "rw_ios_per_sec": 0, 00:17:01.548 "rw_mbytes_per_sec": 0, 00:17:01.548 "r_mbytes_per_sec": 0, 00:17:01.548 "w_mbytes_per_sec": 0 00:17:01.548 }, 00:17:01.548 "claimed": false, 00:17:01.548 "zoned": false, 00:17:01.548 "supported_io_types": { 00:17:01.548 "read": true, 00:17:01.548 "write": true, 00:17:01.548 "unmap": false, 00:17:01.548 "flush": false, 00:17:01.548 "reset": true, 00:17:01.548 "nvme_admin": false, 00:17:01.548 "nvme_io": false, 00:17:01.548 "nvme_io_md": false, 00:17:01.548 "write_zeroes": true, 00:17:01.548 "zcopy": false, 00:17:01.548 "get_zone_info": false, 00:17:01.548 "zone_management": false, 00:17:01.548 "zone_append": false, 00:17:01.548 "compare": false, 00:17:01.548 "compare_and_write": false, 00:17:01.548 "abort": false, 00:17:01.548 "seek_hole": false, 00:17:01.548 "seek_data": false, 00:17:01.548 "copy": false, 00:17:01.548 "nvme_iov_md": false 00:17:01.548 }, 00:17:01.548 "memory_domains": [ 00:17:01.548 { 00:17:01.548 
"dma_device_id": "system", 00:17:01.548 "dma_device_type": 1 00:17:01.548 }, 00:17:01.548 { 00:17:01.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.548 "dma_device_type": 2 00:17:01.548 }, 00:17:01.548 { 00:17:01.548 "dma_device_id": "system", 00:17:01.548 "dma_device_type": 1 00:17:01.548 }, 00:17:01.548 { 00:17:01.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.548 "dma_device_type": 2 00:17:01.548 } 00:17:01.548 ], 00:17:01.548 "driver_specific": { 00:17:01.548 "raid": { 00:17:01.548 "uuid": "ef797f0e-7266-42ad-af48-2daa84916428", 00:17:01.548 "strip_size_kb": 0, 00:17:01.548 "state": "online", 00:17:01.548 "raid_level": "raid1", 00:17:01.548 "superblock": true, 00:17:01.548 "num_base_bdevs": 2, 00:17:01.548 "num_base_bdevs_discovered": 2, 00:17:01.548 "num_base_bdevs_operational": 2, 00:17:01.548 "base_bdevs_list": [ 00:17:01.548 { 00:17:01.548 "name": "BaseBdev1", 00:17:01.548 "uuid": "d208e9d0-6b0b-474a-9009-e1f0f27e6420", 00:17:01.548 "is_configured": true, 00:17:01.548 "data_offset": 2048, 00:17:01.548 "data_size": 63488 00:17:01.548 }, 00:17:01.548 { 00:17:01.548 "name": "BaseBdev2", 00:17:01.548 "uuid": "289d2bc5-61d1-4c2f-b623-6cfe2cef9b72", 00:17:01.548 "is_configured": true, 00:17:01.548 "data_offset": 2048, 00:17:01.548 "data_size": 63488 00:17:01.548 } 00:17:01.548 ] 00:17:01.548 } 00:17:01.548 } 00:17:01.548 }' 00:17:01.548 18:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:01.548 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:01.548 BaseBdev2' 00:17:01.548 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:01.548 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:01.548 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:01.807 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:01.807 "name": "BaseBdev1", 00:17:01.807 "aliases": [ 00:17:01.807 "d208e9d0-6b0b-474a-9009-e1f0f27e6420" 00:17:01.807 ], 00:17:01.807 "product_name": "Malloc disk", 00:17:01.807 "block_size": 512, 00:17:01.807 "num_blocks": 65536, 00:17:01.807 "uuid": "d208e9d0-6b0b-474a-9009-e1f0f27e6420", 00:17:01.807 "assigned_rate_limits": { 00:17:01.807 "rw_ios_per_sec": 0, 00:17:01.807 "rw_mbytes_per_sec": 0, 00:17:01.807 "r_mbytes_per_sec": 0, 00:17:01.807 "w_mbytes_per_sec": 0 00:17:01.807 }, 00:17:01.807 "claimed": true, 00:17:01.807 "claim_type": "exclusive_write", 00:17:01.807 "zoned": false, 00:17:01.807 "supported_io_types": { 00:17:01.808 "read": true, 00:17:01.808 "write": true, 00:17:01.808 "unmap": true, 00:17:01.808 "flush": true, 00:17:01.808 "reset": true, 00:17:01.808 "nvme_admin": false, 00:17:01.808 "nvme_io": false, 00:17:01.808 "nvme_io_md": false, 00:17:01.808 "write_zeroes": true, 00:17:01.808 "zcopy": true, 00:17:01.808 "get_zone_info": false, 00:17:01.808 "zone_management": false, 00:17:01.808 "zone_append": false, 00:17:01.808 "compare": false, 00:17:01.808 "compare_and_write": false, 00:17:01.808 "abort": true, 00:17:01.808 "seek_hole": false, 00:17:01.808 "seek_data": false, 00:17:01.808 "copy": true, 00:17:01.808 "nvme_iov_md": false 00:17:01.808 }, 00:17:01.808 "memory_domains": [ 00:17:01.808 { 00:17:01.808 
"dma_device_id": "system", 00:17:01.808 "dma_device_type": 1 00:17:01.808 }, 00:17:01.808 { 00:17:01.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.808 "dma_device_type": 2 00:17:01.808 } 00:17:01.808 ], 00:17:01.808 "driver_specific": {} 00:17:01.808 }' 00:17:01.808 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.808 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.808 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:01.808 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:02.066 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:02.066 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:02.067 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:02.067 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:02.067 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:02.067 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:02.067 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:02.067 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:02.067 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:02.067 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:02.067 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:02.325 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:02.325 "name": "BaseBdev2", 00:17:02.325 "aliases": [ 00:17:02.325 "289d2bc5-61d1-4c2f-b623-6cfe2cef9b72" 00:17:02.325 ], 00:17:02.325 "product_name": "Malloc disk", 00:17:02.325 "block_size": 512, 00:17:02.325 "num_blocks": 65536, 00:17:02.325 "uuid": "289d2bc5-61d1-4c2f-b623-6cfe2cef9b72", 00:17:02.325 "assigned_rate_limits": { 00:17:02.325 "rw_ios_per_sec": 0, 00:17:02.325 "rw_mbytes_per_sec": 0, 00:17:02.325 "r_mbytes_per_sec": 0, 00:17:02.325 "w_mbytes_per_sec": 0 00:17:02.325 }, 00:17:02.325 "claimed": true, 00:17:02.325 "claim_type": "exclusive_write", 00:17:02.325 "zoned": false, 00:17:02.325 "supported_io_types": { 00:17:02.325 "read": true, 00:17:02.325 "write": true, 00:17:02.325 "unmap": true, 00:17:02.325 "flush": true, 00:17:02.325 "reset": true, 00:17:02.325 "nvme_admin": false, 00:17:02.325 "nvme_io": false, 00:17:02.325 "nvme_io_md": false, 00:17:02.325 "write_zeroes": true, 00:17:02.326 "zcopy": true, 00:17:02.326 "get_zone_info": false, 00:17:02.326 "zone_management": false, 00:17:02.326 "zone_append": false, 00:17:02.326 "compare": false, 00:17:02.326 "compare_and_write": false, 00:17:02.326 "abort": true, 00:17:02.326 "seek_hole": false, 00:17:02.326 "seek_data": false, 00:17:02.326 "copy": true, 00:17:02.326 "nvme_iov_md": false 00:17:02.326 }, 00:17:02.326 "memory_domains": [ 00:17:02.326 { 00:17:02.326 "dma_device_id": "system", 00:17:02.326 "dma_device_type": 1 00:17:02.326 }, 00:17:02.326 { 00:17:02.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:17:02.326 "dma_device_type": 2 00:17:02.326 } 00:17:02.326 ], 00:17:02.326 "driver_specific": {} 00:17:02.326 }' 00:17:02.326 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:02.584 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:02.584 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:02.584 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:02.584 18:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:02.584 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:02.584 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:02.584 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:02.584 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:02.584 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:02.584 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:02.843 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:02.843 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:02.843 [2024-07-25 18:44:03.321394] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:03.102 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:03.102 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:03.102 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:03.102 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:17:03.102 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:03.102 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:03.102 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:03.102 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:03.102 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:03.102 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:03.102 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:03.102 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:03.102 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:03.102 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:03.102 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:03.102 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:17:03.102 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.102 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:03.102 "name": "Existed_Raid", 00:17:03.102 "uuid": "ef797f0e-7266-42ad-af48-2daa84916428", 00:17:03.102 "strip_size_kb": 0, 00:17:03.102 "state": "online", 00:17:03.102 "raid_level": "raid1", 00:17:03.102 "superblock": true, 00:17:03.102 "num_base_bdevs": 2, 00:17:03.102 "num_base_bdevs_discovered": 1, 00:17:03.102 "num_base_bdevs_operational": 1, 00:17:03.102 "base_bdevs_list": [ 00:17:03.102 { 00:17:03.102 "name": null, 00:17:03.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.102 "is_configured": false, 00:17:03.102 "data_offset": 2048, 00:17:03.102 "data_size": 63488 00:17:03.102 }, 00:17:03.102 { 00:17:03.102 "name": "BaseBdev2", 00:17:03.102 "uuid": "289d2bc5-61d1-4c2f-b623-6cfe2cef9b72", 00:17:03.102 "is_configured": true, 00:17:03.102 "data_offset": 2048, 00:17:03.102 "data_size": 63488 00:17:03.102 } 00:17:03.102 ] 00:17:03.102 }' 00:17:03.102 18:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:03.102 18:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.038 18:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:04.038 18:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:04.038 18:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.038 18:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:04.038 18:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:04.038 18:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:04.038 18:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:04.297 [2024-07-25 18:44:04.755132] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:04.297 [2024-07-25 18:44:04.755432] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.297 [2024-07-25 18:44:04.840183] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.297 [2024-07-25 18:44:04.840424] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.297 [2024-07-25 18:44:04.840529] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:17:04.297 18:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:04.297 18:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:04.297 18:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.297 18:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:04.555 18:44:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:04.555 18:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:04.555 18:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:17:04.555 18:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 123451 00:17:04.555 18:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 123451 ']' 00:17:04.555 18:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 123451 00:17:04.555 18:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:04.814 18:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:04.814 18:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 123451 00:17:04.814 killing process with pid 123451 00:17:04.814 18:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:04.814 18:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:04.814 18:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 123451' 00:17:04.814 18:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 123451 00:17:04.814 [2024-07-25 18:44:05.150432] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:04.814 18:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 123451 00:17:04.814 [2024-07-25 18:44:05.150557] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:06.190 ************************************ 00:17:06.190 END TEST raid_state_function_test_sb 00:17:06.190 ************************************ 00:17:06.190 18:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:17:06.190 00:17:06.190 real 0m11.354s 00:17:06.190 user 0m19.113s 00:17:06.190 sys 0m2.034s 00:17:06.190 18:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:06.191 18:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.191 18:44:06 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:17:06.191 18:44:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:06.191 18:44:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:06.191 18:44:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:06.191 ************************************ 00:17:06.191 START TEST raid_superblock_test 00:17:06.191 ************************************ 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:17:06.191 18:44:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=123826 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 123826 /var/tmp/spdk-raid.sock 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 123826 ']' 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:06.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:06.191 18:44:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.191 [2024-07-25 18:44:06.494779] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:17:06.191 [2024-07-25 18:44:06.495265] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123826 ] 00:17:06.191 [2024-07-25 18:44:06.677438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.450 [2024-07-25 18:44:06.881849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.708 [2024-07-25 18:44:07.092951] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.967 18:44:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:06.967 18:44:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:17:06.967 18:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:17:06.967 18:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:06.967 18:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:17:06.967 18:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:17:06.967 18:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:06.967 18:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:06.967 18:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:17:06.967 18:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:06.968 18:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:07.227 malloc1 00:17:07.227 18:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:07.486 [2024-07-25 18:44:07.841382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:07.486 [2024-07-25 18:44:07.841696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.486 [2024-07-25 18:44:07.841807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:17:07.486 [2024-07-25 18:44:07.841920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.486 [2024-07-25 18:44:07.844666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.486 [2024-07-25 18:44:07.844850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:07.486 pt1 00:17:07.487 18:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:17:07.487 18:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:07.487 18:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:17:07.487 18:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:17:07.487 18:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:07.487 18:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:17:07.487 18:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:17:07.487 18:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:07.487 18:44:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:07.746 malloc2 00:17:07.746 18:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:07.746 [2024-07-25 18:44:08.263040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:07.746 [2024-07-25 18:44:08.263351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.746 [2024-07-25 18:44:08.263430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:07.746 [2024-07-25 18:44:08.263708] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.746 [2024-07-25 18:44:08.266413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.746 [2024-07-25 18:44:08.266559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:07.746 pt2 00:17:07.746 18:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:17:07.746 18:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:07.746 18:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:17:08.004 [2024-07-25 18:44:08.523158] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:08.004 [2024-07-25 18:44:08.525607] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:08.004 [2024-07-25 18:44:08.525934] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:17:08.004 [2024-07-25 18:44:08.526059] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:08.004 [2024-07-25 18:44:08.526245] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:17:08.004 [2024-07-25 18:44:08.526757] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:17:08.004 [2024-07-25 18:44:08.526867] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:17:08.004 [2024-07-25 18:44:08.527218] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.004 18:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:08.004 18:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:08.004 18:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:08.004 18:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:08.004 18:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:08.004 18:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:17:08.004 18:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:08.004 18:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:08.004 18:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:08.004 18:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:08.004 18:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.004 18:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.263 18:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:08.263 "name": "raid_bdev1", 00:17:08.263 "uuid": "2ed9ea13-6c60-4b8b-b015-224db017329b", 00:17:08.263 "strip_size_kb": 0, 00:17:08.263 "state": "online", 00:17:08.263 "raid_level": "raid1", 00:17:08.263 "superblock": true, 00:17:08.263 "num_base_bdevs": 2, 00:17:08.263 "num_base_bdevs_discovered": 2, 00:17:08.263 "num_base_bdevs_operational": 2, 00:17:08.263 "base_bdevs_list": [ 00:17:08.263 { 00:17:08.263 "name": "pt1", 00:17:08.263 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:08.263 "is_configured": true, 00:17:08.263 "data_offset": 2048, 00:17:08.263 "data_size": 63488 00:17:08.263 }, 00:17:08.263 { 00:17:08.263 "name": "pt2", 00:17:08.263 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.263 "is_configured": true, 00:17:08.263 "data_offset": 2048, 00:17:08.263 "data_size": 63488 00:17:08.263 } 00:17:08.263 ] 00:17:08.263 }' 00:17:08.263 18:44:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:08.263 18:44:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.830 18:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:17:08.830 18:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:08.830 18:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:08.830 18:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:08.830 18:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:08.830 18:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:08.830 18:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:08.830 18:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:09.089 [2024-07-25 18:44:09.511571] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:09.089 18:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:09.089 "name": "raid_bdev1", 00:17:09.089 "aliases": [ 00:17:09.089 "2ed9ea13-6c60-4b8b-b015-224db017329b" 00:17:09.089 ], 00:17:09.089 "product_name": "Raid Volume", 00:17:09.089 "block_size": 512, 00:17:09.089 "num_blocks": 63488, 00:17:09.089 "uuid": "2ed9ea13-6c60-4b8b-b015-224db017329b", 00:17:09.089 "assigned_rate_limits": { 00:17:09.089 "rw_ios_per_sec": 0, 00:17:09.089 "rw_mbytes_per_sec": 0, 00:17:09.089 "r_mbytes_per_sec": 0, 00:17:09.089 "w_mbytes_per_sec": 0 00:17:09.089 }, 
00:17:09.089 "claimed": false, 00:17:09.089 "zoned": false, 00:17:09.089 "supported_io_types": { 00:17:09.089 "read": true, 00:17:09.089 "write": true, 00:17:09.089 "unmap": false, 00:17:09.089 "flush": false, 00:17:09.089 "reset": true, 00:17:09.089 "nvme_admin": false, 00:17:09.089 "nvme_io": false, 00:17:09.089 "nvme_io_md": false, 00:17:09.089 "write_zeroes": true, 00:17:09.089 "zcopy": false, 00:17:09.089 "get_zone_info": false, 00:17:09.089 "zone_management": false, 00:17:09.089 "zone_append": false, 00:17:09.089 "compare": false, 00:17:09.089 "compare_and_write": false, 00:17:09.089 "abort": false, 00:17:09.089 "seek_hole": false, 00:17:09.089 "seek_data": false, 00:17:09.089 "copy": false, 00:17:09.089 "nvme_iov_md": false 00:17:09.089 }, 00:17:09.089 "memory_domains": [ 00:17:09.089 { 00:17:09.089 "dma_device_id": "system", 00:17:09.089 "dma_device_type": 1 00:17:09.089 }, 00:17:09.089 { 00:17:09.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.089 "dma_device_type": 2 00:17:09.089 }, 00:17:09.089 { 00:17:09.089 "dma_device_id": "system", 00:17:09.089 "dma_device_type": 1 00:17:09.089 }, 00:17:09.089 { 00:17:09.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.089 "dma_device_type": 2 00:17:09.089 } 00:17:09.089 ], 00:17:09.089 "driver_specific": { 00:17:09.089 "raid": { 00:17:09.089 "uuid": "2ed9ea13-6c60-4b8b-b015-224db017329b", 00:17:09.089 "strip_size_kb": 0, 00:17:09.089 "state": "online", 00:17:09.089 "raid_level": "raid1", 00:17:09.089 "superblock": true, 00:17:09.089 "num_base_bdevs": 2, 00:17:09.089 "num_base_bdevs_discovered": 2, 00:17:09.089 "num_base_bdevs_operational": 2, 00:17:09.089 "base_bdevs_list": [ 00:17:09.089 { 00:17:09.089 "name": "pt1", 00:17:09.089 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:09.089 "is_configured": true, 00:17:09.089 "data_offset": 2048, 00:17:09.089 "data_size": 63488 00:17:09.089 }, 00:17:09.089 { 00:17:09.089 "name": "pt2", 00:17:09.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:09.089 "is_configured": true, 00:17:09.089 "data_offset": 2048, 00:17:09.089 "data_size": 63488 00:17:09.089 } 00:17:09.089 ] 00:17:09.089 } 00:17:09.089 } 00:17:09.089 }' 00:17:09.089 18:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:09.089 18:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:09.089 pt2' 00:17:09.089 18:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:09.089 18:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:09.089 18:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:09.347 18:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:09.347 "name": "pt1", 00:17:09.347 "aliases": [ 00:17:09.347 "00000000-0000-0000-0000-000000000001" 00:17:09.347 ], 00:17:09.347 "product_name": "passthru", 00:17:09.347 "block_size": 512, 00:17:09.347 "num_blocks": 65536, 00:17:09.347 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:09.347 "assigned_rate_limits": { 00:17:09.347 "rw_ios_per_sec": 0, 00:17:09.347 "rw_mbytes_per_sec": 0, 00:17:09.347 "r_mbytes_per_sec": 0, 00:17:09.347 "w_mbytes_per_sec": 0 00:17:09.347 }, 00:17:09.347 "claimed": true, 00:17:09.347 "claim_type": "exclusive_write", 00:17:09.347 "zoned": false, 00:17:09.347 
"supported_io_types": { 00:17:09.347 "read": true, 00:17:09.347 "write": true, 00:17:09.347 "unmap": true, 00:17:09.347 "flush": true, 00:17:09.347 "reset": true, 00:17:09.347 "nvme_admin": false, 00:17:09.347 "nvme_io": false, 00:17:09.347 "nvme_io_md": false, 00:17:09.347 "write_zeroes": true, 00:17:09.347 "zcopy": true, 00:17:09.347 "get_zone_info": false, 00:17:09.347 "zone_management": false, 00:17:09.347 "zone_append": false, 00:17:09.347 "compare": false, 00:17:09.347 "compare_and_write": false, 00:17:09.347 "abort": true, 00:17:09.347 "seek_hole": false, 00:17:09.347 "seek_data": false, 00:17:09.347 "copy": true, 00:17:09.347 "nvme_iov_md": false 00:17:09.347 }, 00:17:09.347 "memory_domains": [ 00:17:09.347 { 00:17:09.347 "dma_device_id": "system", 00:17:09.347 "dma_device_type": 1 00:17:09.347 }, 00:17:09.347 { 00:17:09.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.347 "dma_device_type": 2 00:17:09.347 } 00:17:09.347 ], 00:17:09.347 "driver_specific": { 00:17:09.347 "passthru": { 00:17:09.347 "name": "pt1", 00:17:09.347 "base_bdev_name": "malloc1" 00:17:09.347 } 00:17:09.347 } 00:17:09.347 }' 00:17:09.347 18:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:09.347 18:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:09.347 18:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:09.347 18:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:09.605 18:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:09.605 18:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:09.605 18:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:09.605 18:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:09.605 18:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:09.605 18:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:09.605 18:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:09.605 18:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:09.605 18:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:09.864 18:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:09.864 18:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:10.122 18:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:10.122 "name": "pt2", 00:17:10.122 "aliases": [ 00:17:10.122 "00000000-0000-0000-0000-000000000002" 00:17:10.122 ], 00:17:10.122 "product_name": "passthru", 00:17:10.122 "block_size": 512, 00:17:10.122 "num_blocks": 65536, 00:17:10.122 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:10.122 "assigned_rate_limits": { 00:17:10.122 "rw_ios_per_sec": 0, 00:17:10.122 "rw_mbytes_per_sec": 0, 00:17:10.122 "r_mbytes_per_sec": 0, 00:17:10.122 "w_mbytes_per_sec": 0 00:17:10.122 }, 00:17:10.122 "claimed": true, 00:17:10.122 "claim_type": "exclusive_write", 00:17:10.123 "zoned": false, 00:17:10.123 "supported_io_types": { 00:17:10.123 "read": true, 00:17:10.123 "write": true, 00:17:10.123 "unmap": true, 00:17:10.123 "flush": true, 00:17:10.123 
"reset": true, 00:17:10.123 "nvme_admin": false, 00:17:10.123 "nvme_io": false, 00:17:10.123 "nvme_io_md": false, 00:17:10.123 "write_zeroes": true, 00:17:10.123 "zcopy": true, 00:17:10.123 "get_zone_info": false, 00:17:10.123 "zone_management": false, 00:17:10.123 "zone_append": false, 00:17:10.123 "compare": false, 00:17:10.123 "compare_and_write": false, 00:17:10.123 "abort": true, 00:17:10.123 "seek_hole": false, 00:17:10.123 "seek_data": false, 00:17:10.123 "copy": true, 00:17:10.123 "nvme_iov_md": false 00:17:10.123 }, 00:17:10.123 "memory_domains": [ 00:17:10.123 { 00:17:10.123 "dma_device_id": "system", 00:17:10.123 "dma_device_type": 1 00:17:10.123 }, 00:17:10.123 { 00:17:10.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.123 "dma_device_type": 2 00:17:10.123 } 00:17:10.123 ], 00:17:10.123 "driver_specific": { 00:17:10.123 "passthru": { 00:17:10.123 "name": "pt2", 00:17:10.123 "base_bdev_name": "malloc2" 00:17:10.123 } 00:17:10.123 } 00:17:10.123 }' 00:17:10.123 18:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:10.123 18:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:10.123 18:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:10.123 18:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:10.123 18:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:10.123 18:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:10.123 18:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:10.123 18:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:10.381 18:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:10.381 18:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:10.381 18:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:10.381 18:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:10.381 18:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:10.381 18:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:17:10.641 [2024-07-25 18:44:11.079833] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:10.641 18:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=2ed9ea13-6c60-4b8b-b015-224db017329b 00:17:10.641 18:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 2ed9ea13-6c60-4b8b-b015-224db017329b ']' 00:17:10.641 18:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:10.900 [2024-07-25 18:44:11.259625] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:10.900 [2024-07-25 18:44:11.259813] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:10.900 [2024-07-25 18:44:11.260043] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:10.900 [2024-07-25 18:44:11.260147] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:17:10.900 [2024-07-25 18:44:11.260340] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:17:10.900 18:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.900 18:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:17:10.900 18:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:17:10.900 18:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:17:10.900 18:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:17:10.900 18:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:11.159 18:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:17:11.159 18:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:11.417 18:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:11.417 18:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:11.676 18:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:17:11.676 18:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:11.676 18:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:11.676 18:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:11.676 18:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:11.676 18:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:11.676 18:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:11.676 18:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:11.676 18:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:11.676 18:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:11.676 18:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:11.676 18:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:11.676 18:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:11.676 [2024-07-25 18:44:12.231826] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:11.676 [2024-07-25 18:44:12.234235] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:11.676 [2024-07-25 18:44:12.234451] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:11.676 [2024-07-25 18:44:12.234639] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:11.676 [2024-07-25 18:44:12.234782] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:11.676 [2024-07-25 18:44:12.234855] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state configuring 00:17:11.676 request: 00:17:11.676 { 00:17:11.676 "name": "raid_bdev1", 00:17:11.676 "raid_level": "raid1", 00:17:11.676 "base_bdevs": [ 00:17:11.676 "malloc1", 00:17:11.676 "malloc2" 00:17:11.676 ], 00:17:11.676 "superblock": false, 00:17:11.676 "method": "bdev_raid_create", 00:17:11.676 "req_id": 1 00:17:11.676 } 00:17:11.676 Got JSON-RPC error response 00:17:11.676 response: 00:17:11.676 { 00:17:11.676 "code": -17, 00:17:11.676 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:11.676 } 00:17:11.935 18:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:11.935 18:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:11.935 18:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:11.935 18:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:11.935 18:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.935 18:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:17:12.194 18:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:17:12.194 18:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:17:12.194 18:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:12.194 [2024-07-25 18:44:12.679887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:12.194 [2024-07-25 18:44:12.680156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.194 [2024-07-25 18:44:12.680326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:12.194 [2024-07-25 18:44:12.680424] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.194 [2024-07-25 18:44:12.683106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.194 [2024-07-25 18:44:12.683290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:12.194 [2024-07-25 18:44:12.683477] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:12.194 [2024-07-25 18:44:12.683658] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:12.194 pt1 00:17:12.194 18:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid1 0 2 00:17:12.194 18:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:12.194 18:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:12.194 18:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:12.194 18:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:12.194 18:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:12.194 18:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:12.194 18:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:12.194 18:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:12.194 18:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:12.194 18:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.194 18:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.453 18:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:12.453 "name": "raid_bdev1", 00:17:12.453 "uuid": "2ed9ea13-6c60-4b8b-b015-224db017329b", 00:17:12.453 "strip_size_kb": 0, 00:17:12.453 "state": "configuring", 00:17:12.453 "raid_level": "raid1", 00:17:12.453 "superblock": true, 00:17:12.453 "num_base_bdevs": 2, 00:17:12.453 "num_base_bdevs_discovered": 1, 00:17:12.453 "num_base_bdevs_operational": 2, 00:17:12.453 "base_bdevs_list": [ 00:17:12.453 { 00:17:12.453 "name": "pt1", 00:17:12.453 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:12.453 "is_configured": true, 00:17:12.453 "data_offset": 2048, 00:17:12.453 "data_size": 63488 00:17:12.453 }, 00:17:12.453 { 00:17:12.453 "name": null, 00:17:12.453 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:12.453 "is_configured": false, 00:17:12.453 "data_offset": 2048, 00:17:12.453 "data_size": 63488 00:17:12.453 } 00:17:12.453 ] 00:17:12.453 }' 00:17:12.453 18:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:12.453 18:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.019 18:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:17:13.019 18:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:17:13.019 18:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:17:13.019 18:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:13.277 [2024-07-25 18:44:13.748347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:13.277 [2024-07-25 18:44:13.748611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.277 [2024-07-25 18:44:13.748680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:13.277 [2024-07-25 18:44:13.748775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.277 [2024-07-25 18:44:13.749321] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.277 [2024-07-25 18:44:13.749473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:13.277 [2024-07-25 18:44:13.749664] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:13.277 [2024-07-25 18:44:13.749778] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:13.277 [2024-07-25 18:44:13.749955] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:17:13.277 [2024-07-25 18:44:13.750109] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:13.277 [2024-07-25 18:44:13.750239] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:17:13.277 [2024-07-25 18:44:13.750670] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:17:13.277 [2024-07-25 18:44:13.750775] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:17:13.277 [2024-07-25 18:44:13.750987] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.277 pt2 00:17:13.277 18:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:17:13.277 18:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:17:13.277 18:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:13.277 18:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:13.277 18:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:13.277 18:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:13.277 18:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:13.277 18:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:13.277 18:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:13.277 18:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:13.277 18:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:13.277 18:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:13.277 18:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.277 18:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.536 18:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:13.536 "name": "raid_bdev1", 00:17:13.536 "uuid": "2ed9ea13-6c60-4b8b-b015-224db017329b", 00:17:13.536 "strip_size_kb": 0, 00:17:13.536 "state": "online", 00:17:13.536 "raid_level": "raid1", 00:17:13.536 "superblock": true, 00:17:13.536 "num_base_bdevs": 2, 00:17:13.536 "num_base_bdevs_discovered": 2, 00:17:13.536 "num_base_bdevs_operational": 2, 00:17:13.536 "base_bdevs_list": [ 00:17:13.536 { 00:17:13.536 "name": "pt1", 00:17:13.536 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:13.536 "is_configured": true, 00:17:13.536 "data_offset": 2048, 00:17:13.536 "data_size": 63488 00:17:13.536 }, 00:17:13.536 { 
00:17:13.536 "name": "pt2", 00:17:13.536 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:13.536 "is_configured": true, 00:17:13.536 "data_offset": 2048, 00:17:13.536 "data_size": 63488 00:17:13.536 } 00:17:13.536 ] 00:17:13.536 }' 00:17:13.536 18:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:13.536 18:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.102 18:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:17:14.102 18:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:14.102 18:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:14.102 18:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:14.102 18:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:14.102 18:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:14.102 18:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:14.102 18:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:14.361 [2024-07-25 18:44:14.744717] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:14.361 18:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:14.361 "name": "raid_bdev1", 00:17:14.361 "aliases": [ 00:17:14.361 "2ed9ea13-6c60-4b8b-b015-224db017329b" 00:17:14.361 ], 00:17:14.361 "product_name": "Raid Volume", 00:17:14.361 "block_size": 512, 00:17:14.361 "num_blocks": 63488, 00:17:14.361 "uuid": "2ed9ea13-6c60-4b8b-b015-224db017329b", 00:17:14.361 "assigned_rate_limits": { 00:17:14.361 "rw_ios_per_sec": 0, 00:17:14.361 "rw_mbytes_per_sec": 0, 00:17:14.361 "r_mbytes_per_sec": 0, 00:17:14.361 "w_mbytes_per_sec": 0 00:17:14.361 }, 00:17:14.361 "claimed": false, 00:17:14.361 "zoned": false, 00:17:14.361 "supported_io_types": { 00:17:14.361 "read": true, 00:17:14.361 "write": true, 00:17:14.361 "unmap": false, 00:17:14.361 "flush": false, 00:17:14.361 "reset": true, 00:17:14.361 "nvme_admin": false, 00:17:14.361 "nvme_io": false, 00:17:14.361 "nvme_io_md": false, 00:17:14.361 "write_zeroes": true, 00:17:14.361 "zcopy": false, 00:17:14.361 "get_zone_info": false, 00:17:14.361 "zone_management": false, 00:17:14.361 "zone_append": false, 00:17:14.361 "compare": false, 00:17:14.361 "compare_and_write": false, 00:17:14.361 "abort": false, 00:17:14.361 "seek_hole": false, 00:17:14.361 "seek_data": false, 00:17:14.361 "copy": false, 00:17:14.361 "nvme_iov_md": false 00:17:14.361 }, 00:17:14.361 "memory_domains": [ 00:17:14.361 { 00:17:14.361 "dma_device_id": "system", 00:17:14.361 "dma_device_type": 1 00:17:14.361 }, 00:17:14.361 { 00:17:14.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.361 "dma_device_type": 2 00:17:14.361 }, 00:17:14.361 { 00:17:14.361 "dma_device_id": "system", 00:17:14.361 "dma_device_type": 1 00:17:14.361 }, 00:17:14.361 { 00:17:14.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.361 "dma_device_type": 2 00:17:14.361 } 00:17:14.361 ], 00:17:14.361 "driver_specific": { 00:17:14.361 "raid": { 00:17:14.361 "uuid": "2ed9ea13-6c60-4b8b-b015-224db017329b", 00:17:14.361 "strip_size_kb": 0, 00:17:14.361 "state": "online", 00:17:14.361 "raid_level": "raid1", 
00:17:14.361 "superblock": true, 00:17:14.361 "num_base_bdevs": 2, 00:17:14.361 "num_base_bdevs_discovered": 2, 00:17:14.361 "num_base_bdevs_operational": 2, 00:17:14.361 "base_bdevs_list": [ 00:17:14.361 { 00:17:14.361 "name": "pt1", 00:17:14.361 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:14.361 "is_configured": true, 00:17:14.361 "data_offset": 2048, 00:17:14.361 "data_size": 63488 00:17:14.361 }, 00:17:14.361 { 00:17:14.361 "name": "pt2", 00:17:14.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.361 "is_configured": true, 00:17:14.361 "data_offset": 2048, 00:17:14.361 "data_size": 63488 00:17:14.361 } 00:17:14.361 ] 00:17:14.361 } 00:17:14.361 } 00:17:14.361 }' 00:17:14.361 18:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:14.361 18:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:14.361 pt2' 00:17:14.361 18:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:14.361 18:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:14.361 18:44:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:14.619 18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:14.619 "name": "pt1", 00:17:14.619 "aliases": [ 00:17:14.619 "00000000-0000-0000-0000-000000000001" 00:17:14.619 ], 00:17:14.619 "product_name": "passthru", 00:17:14.619 "block_size": 512, 00:17:14.619 "num_blocks": 65536, 00:17:14.619 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:14.619 "assigned_rate_limits": { 00:17:14.619 "rw_ios_per_sec": 0, 00:17:14.619 "rw_mbytes_per_sec": 0, 00:17:14.619 "r_mbytes_per_sec": 0, 00:17:14.619 "w_mbytes_per_sec": 0 00:17:14.619 }, 00:17:14.619 "claimed": true, 00:17:14.619 "claim_type": "exclusive_write", 00:17:14.619 "zoned": false, 00:17:14.619 "supported_io_types": { 00:17:14.619 "read": true, 00:17:14.619 "write": true, 00:17:14.619 "unmap": true, 00:17:14.619 "flush": true, 00:17:14.619 "reset": true, 00:17:14.619 "nvme_admin": false, 00:17:14.619 "nvme_io": false, 00:17:14.619 "nvme_io_md": false, 00:17:14.619 "write_zeroes": true, 00:17:14.619 "zcopy": true, 00:17:14.619 "get_zone_info": false, 00:17:14.619 "zone_management": false, 00:17:14.619 "zone_append": false, 00:17:14.619 "compare": false, 00:17:14.619 "compare_and_write": false, 00:17:14.619 "abort": true, 00:17:14.619 "seek_hole": false, 00:17:14.619 "seek_data": false, 00:17:14.619 "copy": true, 00:17:14.619 "nvme_iov_md": false 00:17:14.619 }, 00:17:14.619 "memory_domains": [ 00:17:14.619 { 00:17:14.619 "dma_device_id": "system", 00:17:14.619 "dma_device_type": 1 00:17:14.619 }, 00:17:14.619 { 00:17:14.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.620 "dma_device_type": 2 00:17:14.620 } 00:17:14.620 ], 00:17:14.620 "driver_specific": { 00:17:14.620 "passthru": { 00:17:14.620 "name": "pt1", 00:17:14.620 "base_bdev_name": "malloc1" 00:17:14.620 } 00:17:14.620 } 00:17:14.620 }' 00:17:14.620 18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:14.620 18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:14.620 18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:14.620 18:44:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:14.877 18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:14.877 18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:14.877 18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:14.877 18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:14.877 18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:14.877 18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:14.877 18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:15.136 18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:15.136 18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:15.136 18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:15.136 18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:15.395 18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:15.395 "name": "pt2", 00:17:15.395 "aliases": [ 00:17:15.395 "00000000-0000-0000-0000-000000000002" 00:17:15.395 ], 00:17:15.395 "product_name": "passthru", 00:17:15.395 "block_size": 512, 00:17:15.395 "num_blocks": 65536, 00:17:15.395 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.395 "assigned_rate_limits": { 00:17:15.395 "rw_ios_per_sec": 0, 00:17:15.395 "rw_mbytes_per_sec": 0, 00:17:15.395 "r_mbytes_per_sec": 0, 00:17:15.395 "w_mbytes_per_sec": 0 00:17:15.395 }, 00:17:15.395 "claimed": true, 00:17:15.395 "claim_type": "exclusive_write", 00:17:15.395 "zoned": false, 00:17:15.395 "supported_io_types": { 00:17:15.395 "read": true, 00:17:15.395 "write": true, 00:17:15.395 "unmap": true, 00:17:15.395 "flush": true, 00:17:15.395 "reset": true, 00:17:15.395 "nvme_admin": false, 00:17:15.395 "nvme_io": false, 00:17:15.395 "nvme_io_md": false, 00:17:15.395 "write_zeroes": true, 00:17:15.395 "zcopy": true, 00:17:15.395 "get_zone_info": false, 00:17:15.395 "zone_management": false, 00:17:15.395 "zone_append": false, 00:17:15.395 "compare": false, 00:17:15.395 "compare_and_write": false, 00:17:15.395 "abort": true, 00:17:15.395 "seek_hole": false, 00:17:15.395 "seek_data": false, 00:17:15.395 "copy": true, 00:17:15.395 "nvme_iov_md": false 00:17:15.395 }, 00:17:15.395 "memory_domains": [ 00:17:15.395 { 00:17:15.395 "dma_device_id": "system", 00:17:15.395 "dma_device_type": 1 00:17:15.395 }, 00:17:15.395 { 00:17:15.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.395 "dma_device_type": 2 00:17:15.395 } 00:17:15.395 ], 00:17:15.395 "driver_specific": { 00:17:15.395 "passthru": { 00:17:15.395 "name": "pt2", 00:17:15.395 "base_bdev_name": "malloc2" 00:17:15.395 } 00:17:15.395 } 00:17:15.395 }' 00:17:15.395 18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:15.395 18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:15.395 18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:15.395 18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:15.395 18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:15.395 
18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:15.395 18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:15.655 18:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:15.655 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:15.655 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:15.655 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:15.655 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:15.655 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:15.655 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:17:15.915 [2024-07-25 18:44:16.389039] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:15.915 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 2ed9ea13-6c60-4b8b-b015-224db017329b '!=' 2ed9ea13-6c60-4b8b-b015-224db017329b ']' 00:17:15.915 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:17:15.915 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:15.915 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:15.915 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:16.174 [2024-07-25 18:44:16.640966] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:16.174 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:16.174 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:16.175 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:16.175 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:16.175 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:16.175 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:16.175 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:16.175 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:16.175 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:16.175 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:16.175 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.175 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.433 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:16.433 "name": "raid_bdev1", 00:17:16.433 "uuid": "2ed9ea13-6c60-4b8b-b015-224db017329b", 00:17:16.433 "strip_size_kb": 0, 00:17:16.433 "state": "online", 00:17:16.433 "raid_level": "raid1", 00:17:16.433 
"superblock": true, 00:17:16.433 "num_base_bdevs": 2, 00:17:16.433 "num_base_bdevs_discovered": 1, 00:17:16.433 "num_base_bdevs_operational": 1, 00:17:16.433 "base_bdevs_list": [ 00:17:16.433 { 00:17:16.433 "name": null, 00:17:16.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.433 "is_configured": false, 00:17:16.433 "data_offset": 2048, 00:17:16.433 "data_size": 63488 00:17:16.433 }, 00:17:16.433 { 00:17:16.433 "name": "pt2", 00:17:16.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.433 "is_configured": true, 00:17:16.433 "data_offset": 2048, 00:17:16.433 "data_size": 63488 00:17:16.433 } 00:17:16.433 ] 00:17:16.433 }' 00:17:16.433 18:44:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:16.433 18:44:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.026 18:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:17.026 [2024-07-25 18:44:17.593125] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:17.026 [2024-07-25 18:44:17.593282] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:17.026 [2024-07-25 18:44:17.593515] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.026 [2024-07-25 18:44:17.593658] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.026 [2024-07-25 18:44:17.593730] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:17:17.283 18:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:17:17.283 18:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.542 18:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:17:17.542 18:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:17:17.542 18:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:17:17.542 18:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:17:17.542 18:44:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:17.542 18:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:17:17.542 18:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:17:17.542 18:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:17:17.542 18:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:17:17.542 18:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=1 00:17:17.542 18:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:17.800 [2024-07-25 18:44:18.205210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:17.800 [2024-07-25 18:44:18.205496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.800 [2024-07-25 18:44:18.205560] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:17:17.800 [2024-07-25 18:44:18.205656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.800 [2024-07-25 18:44:18.208409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.800 [2024-07-25 18:44:18.208604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:17.800 [2024-07-25 18:44:18.208817] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:17.800 [2024-07-25 18:44:18.208947] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:17.800 [2024-07-25 18:44:18.209145] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:17:17.800 [2024-07-25 18:44:18.209225] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:17.800 [2024-07-25 18:44:18.209344] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:17.800 [2024-07-25 18:44:18.209844] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:17:17.800 [2024-07-25 18:44:18.209944] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013480 00:17:17.800 [2024-07-25 18:44:18.210209] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.800 pt2 00:17:17.800 18:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:17.800 18:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:17.800 18:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:17.800 18:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:17.800 18:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:17.800 18:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:17.800 18:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:17.800 18:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:17.800 18:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:17.800 18:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:17.800 18:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.800 18:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.059 18:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:18.059 "name": "raid_bdev1", 00:17:18.059 "uuid": "2ed9ea13-6c60-4b8b-b015-224db017329b", 00:17:18.059 "strip_size_kb": 0, 00:17:18.059 "state": "online", 00:17:18.059 "raid_level": "raid1", 00:17:18.059 "superblock": true, 00:17:18.059 "num_base_bdevs": 2, 00:17:18.059 "num_base_bdevs_discovered": 1, 00:17:18.059 "num_base_bdevs_operational": 1, 00:17:18.059 "base_bdevs_list": [ 00:17:18.059 { 00:17:18.059 "name": null, 00:17:18.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.059 "is_configured": false, 00:17:18.059 "data_offset": 
2048, 00:17:18.059 "data_size": 63488 00:17:18.059 }, 00:17:18.059 { 00:17:18.059 "name": "pt2", 00:17:18.059 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.059 "is_configured": true, 00:17:18.059 "data_offset": 2048, 00:17:18.059 "data_size": 63488 00:17:18.059 } 00:17:18.059 ] 00:17:18.059 }' 00:17:18.059 18:44:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:18.059 18:44:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.625 18:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:18.883 [2024-07-25 18:44:19.233417] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:18.883 [2024-07-25 18:44:19.233629] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:18.883 [2024-07-25 18:44:19.233851] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:18.883 [2024-07-25 18:44:19.234005] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:18.883 [2024-07-25 18:44:19.234085] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name raid_bdev1, state offline 00:17:18.883 18:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.883 18:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:17:18.883 18:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:17:18.883 18:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:17:18.883 18:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:17:18.883 18:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:19.142 [2024-07-25 18:44:19.609467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:19.142 [2024-07-25 18:44:19.609710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.142 [2024-07-25 18:44:19.609801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:19.142 [2024-07-25 18:44:19.609906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.142 [2024-07-25 18:44:19.612639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.142 [2024-07-25 18:44:19.612838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:19.142 [2024-07-25 18:44:19.613057] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:19.142 [2024-07-25 18:44:19.613179] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:19.142 [2024-07-25 18:44:19.613364] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:19.142 [2024-07-25 18:44:19.613445] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.142 [2024-07-25 18:44:19.613488] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013800 name raid_bdev1, 
state configuring 00:17:19.142 [2024-07-25 18:44:19.613582] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:19.142 [2024-07-25 18:44:19.613817] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013b80 00:17:19.142 [2024-07-25 18:44:19.613927] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:19.142 [2024-07-25 18:44:19.614055] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:19.142 [2024-07-25 18:44:19.614490] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013b80 00:17:19.142 [2024-07-25 18:44:19.614586] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013b80 00:17:19.142 [2024-07-25 18:44:19.614831] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.142 pt1 00:17:19.142 18:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:17:19.142 18:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:19.142 18:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:19.142 18:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:19.142 18:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:19.142 18:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:19.142 18:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:19.142 18:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:19.142 18:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:19.142 18:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:19.142 18:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:19.142 18:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.142 18:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.400 18:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:19.400 "name": "raid_bdev1", 00:17:19.400 "uuid": "2ed9ea13-6c60-4b8b-b015-224db017329b", 00:17:19.400 "strip_size_kb": 0, 00:17:19.400 "state": "online", 00:17:19.400 "raid_level": "raid1", 00:17:19.400 "superblock": true, 00:17:19.400 "num_base_bdevs": 2, 00:17:19.400 "num_base_bdevs_discovered": 1, 00:17:19.400 "num_base_bdevs_operational": 1, 00:17:19.400 "base_bdevs_list": [ 00:17:19.400 { 00:17:19.400 "name": null, 00:17:19.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.401 "is_configured": false, 00:17:19.401 "data_offset": 2048, 00:17:19.401 "data_size": 63488 00:17:19.401 }, 00:17:19.401 { 00:17:19.401 "name": "pt2", 00:17:19.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.401 "is_configured": true, 00:17:19.401 "data_offset": 2048, 00:17:19.401 "data_size": 63488 00:17:19.401 } 00:17:19.401 ] 00:17:19.401 }' 00:17:19.401 18:44:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:19.401 18:44:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:19.968 18:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:19.968 18:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:20.226 18:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:17:20.226 18:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:20.226 18:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:17:20.485 [2024-07-25 18:44:20.910403] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:20.485 18:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 2ed9ea13-6c60-4b8b-b015-224db017329b '!=' 2ed9ea13-6c60-4b8b-b015-224db017329b ']' 00:17:20.485 18:44:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 123826 00:17:20.485 18:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 123826 ']' 00:17:20.485 18:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 123826 00:17:20.485 18:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:17:20.485 18:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:20.485 18:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 123826 00:17:20.485 18:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:20.485 18:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:20.485 18:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 123826' 00:17:20.485 killing process with pid 123826 00:17:20.485 18:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 123826 00:17:20.485 [2024-07-25 18:44:20.965286] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:20.485 [2024-07-25 18:44:20.965368] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:20.485 18:44:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 123826 00:17:20.485 [2024-07-25 18:44:20.965424] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:20.485 [2024-07-25 18:44:20.965432] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013b80 name raid_bdev1, state offline 00:17:20.744 [2024-07-25 18:44:21.129864] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:22.122 18:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:17:22.122 00:17:22.122 real 0m15.899s 00:17:22.122 user 0m27.671s 00:17:22.122 sys 0m2.798s 00:17:22.122 18:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:22.122 18:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.122 ************************************ 00:17:22.122 END TEST raid_superblock_test 00:17:22.122 ************************************ 00:17:22.122 18:44:22 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test 
raid_io_error_test raid1 2 read 00:17:22.122 18:44:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:22.122 18:44:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:22.122 18:44:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.122 ************************************ 00:17:22.122 START TEST raid_read_error_test 00:17:22.122 ************************************ 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev1 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev2 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.TSe2h29DEW 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=124353 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 124353 /var/tmp/spdk-raid.sock 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 124353 ']' 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:22.122 
18:44:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:22.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:22.122 18:44:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.122 [2024-07-25 18:44:22.492949] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:22.122 [2024-07-25 18:44:22.493679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124353 ] 00:17:22.122 [2024-07-25 18:44:22.678691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.381 [2024-07-25 18:44:22.920187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.639 [2024-07-25 18:44:23.192924] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:22.898 18:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:22.898 18:44:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:17:22.898 18:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:22.898 18:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:23.166 BaseBdev1_malloc 00:17:23.424 18:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:23.424 true 00:17:23.424 18:44:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:23.682 [2024-07-25 18:44:24.154800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:23.682 [2024-07-25 18:44:24.155146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.682 [2024-07-25 18:44:24.155290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:17:23.682 [2024-07-25 18:44:24.155402] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.682 [2024-07-25 18:44:24.158212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.682 [2024-07-25 18:44:24.158371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:23.682 BaseBdev1 00:17:23.682 18:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:23.682 18:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:23.940 BaseBdev2_malloc 00:17:23.940 18:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:24.198 true 00:17:24.198 18:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:24.456 [2024-07-25 18:44:24.829163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:24.456 [2024-07-25 18:44:24.829452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.456 [2024-07-25 18:44:24.829550] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:24.456 [2024-07-25 18:44:24.829656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.456 [2024-07-25 18:44:24.832346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.456 [2024-07-25 18:44:24.832516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:24.456 BaseBdev2 00:17:24.456 18:44:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:17:24.715 [2024-07-25 18:44:25.065256] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.715 [2024-07-25 18:44:25.067738] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:24.715 [2024-07-25 18:44:25.068094] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:17:24.715 [2024-07-25 18:44:25.068227] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:24.715 [2024-07-25 18:44:25.068432] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:17:24.715 [2024-07-25 18:44:25.068968] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:17:24.715 [2024-07-25 18:44:25.069075] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:17:24.715 [2024-07-25 18:44:25.069425] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.715 18:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:24.715 18:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:24.715 18:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:24.715 18:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:24.715 18:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:24.715 18:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:24.715 18:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:24.715 18:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:24.715 18:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:24.715 18:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:24.715 18:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:17:24.715 18:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.715 18:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:24.715 "name": "raid_bdev1", 00:17:24.715 "uuid": "6ff7e710-0eca-434e-8db7-27d741e029b0", 00:17:24.715 "strip_size_kb": 0, 00:17:24.715 "state": "online", 00:17:24.715 "raid_level": "raid1", 00:17:24.715 "superblock": true, 00:17:24.715 "num_base_bdevs": 2, 00:17:24.715 "num_base_bdevs_discovered": 2, 00:17:24.715 "num_base_bdevs_operational": 2, 00:17:24.715 "base_bdevs_list": [ 00:17:24.715 { 00:17:24.715 "name": "BaseBdev1", 00:17:24.715 "uuid": "a36e3f32-fc79-5c75-ae7f-4cf20f3c43a3", 00:17:24.715 "is_configured": true, 00:17:24.715 "data_offset": 2048, 00:17:24.715 "data_size": 63488 00:17:24.715 }, 00:17:24.715 { 00:17:24.715 "name": "BaseBdev2", 00:17:24.715 "uuid": "df6a0163-26d3-518f-8352-b99edbcd4a54", 00:17:24.715 "is_configured": true, 00:17:24.715 "data_offset": 2048, 00:17:24.715 "data_size": 63488 00:17:24.715 } 00:17:24.715 ] 00:17:24.715 }' 00:17:24.715 18:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:24.715 18:44:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.282 18:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:17:25.282 18:44:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:25.541 [2024-07-25 18:44:25.907107] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:17:26.477 18:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:26.735 18:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:17:26.735 18:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:26.735 18:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:17:26.735 18:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:17:26.735 18:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:26.735 18:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:26.735 18:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:26.735 18:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:26.735 18:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:26.735 18:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:26.735 18:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:26.735 18:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:26.735 18:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:26.735 18:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:26.735 
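(Note: at this point the read failure has been injected into EE_BaseBdev1_malloc while bdevperf keeps driving I/O against raid_bdev1. Because the level is raid1 and the injected error type is read rather than write, the script keeps expected_num_base_bdevs at 2 and re-checks the array with the same bdev_raid_get_bdevs/jq pair used earlier; the JSON that follows is expected to still list both base bdevs as configured. A condensed sketch of that check, with rpc.py standing in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path shown in the trace:

  # inject a read error on the error bdev backing BaseBdev1
  rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure
  # re-read the array state; a raid1 read error is expected to leave both base bdevs in place
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1")'
)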
18:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.735 18:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.994 18:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:26.994 "name": "raid_bdev1", 00:17:26.994 "uuid": "6ff7e710-0eca-434e-8db7-27d741e029b0", 00:17:26.994 "strip_size_kb": 0, 00:17:26.994 "state": "online", 00:17:26.994 "raid_level": "raid1", 00:17:26.994 "superblock": true, 00:17:26.994 "num_base_bdevs": 2, 00:17:26.994 "num_base_bdevs_discovered": 2, 00:17:26.994 "num_base_bdevs_operational": 2, 00:17:26.994 "base_bdevs_list": [ 00:17:26.994 { 00:17:26.994 "name": "BaseBdev1", 00:17:26.994 "uuid": "a36e3f32-fc79-5c75-ae7f-4cf20f3c43a3", 00:17:26.994 "is_configured": true, 00:17:26.994 "data_offset": 2048, 00:17:26.994 "data_size": 63488 00:17:26.994 }, 00:17:26.994 { 00:17:26.994 "name": "BaseBdev2", 00:17:26.994 "uuid": "df6a0163-26d3-518f-8352-b99edbcd4a54", 00:17:26.994 "is_configured": true, 00:17:26.994 "data_offset": 2048, 00:17:26.994 "data_size": 63488 00:17:26.994 } 00:17:26.994 ] 00:17:26.994 }' 00:17:26.994 18:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:26.994 18:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.561 18:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:27.820 [2024-07-25 18:44:28.140001] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:27.820 [2024-07-25 18:44:28.140308] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:27.820 [2024-07-25 18:44:28.143015] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:27.820 [2024-07-25 18:44:28.143183] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.820 [2024-07-25 18:44:28.143297] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:27.820 [2024-07-25 18:44:28.143387] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:17:27.820 0 00:17:27.820 18:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 124353 00:17:27.820 18:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 124353 ']' 00:17:27.820 18:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 124353 00:17:27.820 18:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:17:27.820 18:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:27.820 18:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 124353 00:17:27.820 18:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:27.820 18:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:27.820 18:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 124353' 00:17:27.820 killing process with pid 124353 00:17:27.820 18:44:28 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@969 -- # kill 124353 00:17:27.820 [2024-07-25 18:44:28.188936] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:27.820 18:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 124353 00:17:27.820 [2024-07-25 18:44:28.331637] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:29.721 18:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:17:29.721 18:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.TSe2h29DEW 00:17:29.721 18:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:17:29.721 ************************************ 00:17:29.721 END TEST raid_read_error_test 00:17:29.721 ************************************ 00:17:29.721 18:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:17:29.721 18:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:17:29.721 18:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:29.721 18:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:29.721 18:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:29.721 00:17:29.721 real 0m7.457s 00:17:29.721 user 0m10.557s 00:17:29.721 sys 0m1.157s 00:17:29.721 18:44:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:29.721 18:44:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.721 18:44:29 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:17:29.721 18:44:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:29.721 18:44:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:29.721 18:44:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:29.721 ************************************ 00:17:29.721 START TEST raid_write_error_test 00:17:29.721 ************************************ 00:17:29.721 18:44:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:17:29.721 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:17:29.721 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:17:29.721 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:17:29.721 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:17:29.721 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:29.721 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev1 00:17:29.721 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:29.721 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:29.721 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev2 00:17:29.721 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:29.721 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:29.721 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:29.721 18:44:29 
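(Note: both error tests grade the bdevperf run the same way. The per-bdev results written to the mktemp'd log under /raidtest are filtered down to the raid_bdev1 row, the Job header line is skipped, and the sixth column is taken as fail_per_s, which is then compared against 0.00. A rough paraphrase of the check visible in the xtrace above, using the bdevperf_log variable the script sets up:

  # bdevperf_log is the mktemp file, e.g. /raidtest/tmp.TSe2h29DEW in the read-error run
  fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
  [[ "$fail_per_s" = 0.00 ]]
)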
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:17:29.721 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:17:29.721 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:17:29.721 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:17:29.721 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:17:29.721 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:17:29.721 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:17:29.721 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:17:29.721 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:17:29.721 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.vzpUK85elm 00:17:29.721 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=124554 00:17:29.722 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 124554 /var/tmp/spdk-raid.sock 00:17:29.722 18:44:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 124554 ']' 00:17:29.722 18:44:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:29.722 18:44:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:29.722 18:44:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:29.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:29.722 18:44:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:29.722 18:44:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:29.722 18:44:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.722 [2024-07-25 18:44:30.021124] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
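(Note: the stack the write-error run is about to build mirrors the read-error test above. Each base device is a malloc bdev wrapped in an error bdev (named EE_<malloc bdev>) and then in a passthru bdev, and the two passthru bdevs are assembled into a raid1 volume with an on-disk superblock. Condensed from the rpc.py xtrace lines in this log (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

  # per base device (repeated for BaseBdev2): 32 MB malloc bdev, 512-byte blocks
  rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
  rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc            # exposes EE_BaseBdev1_malloc
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
  # assemble the mirror; -s writes a superblock onto the base bdevs
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s

The difference from the read case shows up later in the run: injecting a write failure on EE_BaseBdev1_malloc is expected to knock BaseBdev1 out of the array, so expected_num_base_bdevs drops to 1 and the follow-up bdev_raid_get_bdevs output lists a null first slot while raid_bdev1 stays online.)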
00:17:29.722 [2024-07-25 18:44:30.021539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124554 ] 00:17:29.722 [2024-07-25 18:44:30.200294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.980 [2024-07-25 18:44:30.444280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.239 [2024-07-25 18:44:30.713331] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.497 18:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:30.497 18:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:17:30.497 18:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:30.497 18:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:30.756 BaseBdev1_malloc 00:17:30.756 18:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:31.014 true 00:17:31.014 18:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:31.273 [2024-07-25 18:44:31.740655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:31.273 [2024-07-25 18:44:31.740894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.273 [2024-07-25 18:44:31.740983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:17:31.273 [2024-07-25 18:44:31.741086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.273 [2024-07-25 18:44:31.743698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.273 [2024-07-25 18:44:31.743860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:31.273 BaseBdev1 00:17:31.273 18:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:31.273 18:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:31.532 BaseBdev2_malloc 00:17:31.532 18:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:31.790 true 00:17:31.790 18:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:31.790 [2024-07-25 18:44:32.356771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:31.790 [2024-07-25 18:44:32.357131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.790 [2024-07-25 18:44:32.357230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:31.790 [2024-07-25 
18:44:32.357469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.790 [2024-07-25 18:44:32.360158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.790 [2024-07-25 18:44:32.360327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:31.790 BaseBdev2 00:17:32.049 18:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:17:32.049 [2024-07-25 18:44:32.541033] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:32.049 [2024-07-25 18:44:32.543242] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:32.049 [2024-07-25 18:44:32.543600] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:17:32.049 [2024-07-25 18:44:32.543711] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:32.049 [2024-07-25 18:44:32.543878] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:17:32.049 [2024-07-25 18:44:32.544385] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:17:32.049 [2024-07-25 18:44:32.544484] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:17:32.049 [2024-07-25 18:44:32.544740] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.049 18:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:32.049 18:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:32.049 18:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:32.049 18:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:32.049 18:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:32.049 18:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:32.049 18:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:32.049 18:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:32.049 18:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:32.049 18:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:32.049 18:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.049 18:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.308 18:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:32.308 "name": "raid_bdev1", 00:17:32.308 "uuid": "856ff5fc-95e5-445c-bc6e-f10aee8d36df", 00:17:32.308 "strip_size_kb": 0, 00:17:32.308 "state": "online", 00:17:32.308 "raid_level": "raid1", 00:17:32.308 "superblock": true, 00:17:32.308 "num_base_bdevs": 2, 00:17:32.308 "num_base_bdevs_discovered": 2, 00:17:32.308 "num_base_bdevs_operational": 2, 00:17:32.308 "base_bdevs_list": [ 00:17:32.308 { 00:17:32.308 "name": 
"BaseBdev1", 00:17:32.308 "uuid": "48e6044e-9998-5ec6-acf1-3313137fc47a", 00:17:32.308 "is_configured": true, 00:17:32.308 "data_offset": 2048, 00:17:32.308 "data_size": 63488 00:17:32.308 }, 00:17:32.308 { 00:17:32.308 "name": "BaseBdev2", 00:17:32.308 "uuid": "9dc20306-4914-513a-8dca-99ed82e9c0be", 00:17:32.308 "is_configured": true, 00:17:32.308 "data_offset": 2048, 00:17:32.308 "data_size": 63488 00:17:32.308 } 00:17:32.308 ] 00:17:32.308 }' 00:17:32.308 18:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:32.308 18:44:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.929 18:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:17:32.929 18:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:33.187 [2024-07-25 18:44:33.514810] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:17:34.121 18:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:34.121 [2024-07-25 18:44:34.629994] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:17:34.121 [2024-07-25 18:44:34.630380] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:34.121 [2024-07-25 18:44:34.630682] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ba0 00:17:34.121 18:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:17:34.121 18:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:34.121 18:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:17:34.121 18:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=1 00:17:34.121 18:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:34.121 18:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:34.121 18:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:34.121 18:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:34.121 18:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:34.121 18:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:34.121 18:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:34.121 18:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:34.121 18:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:34.121 18:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:34.121 18:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.121 18:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.379 
18:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:34.379 "name": "raid_bdev1", 00:17:34.379 "uuid": "856ff5fc-95e5-445c-bc6e-f10aee8d36df", 00:17:34.379 "strip_size_kb": 0, 00:17:34.379 "state": "online", 00:17:34.379 "raid_level": "raid1", 00:17:34.379 "superblock": true, 00:17:34.379 "num_base_bdevs": 2, 00:17:34.379 "num_base_bdevs_discovered": 1, 00:17:34.379 "num_base_bdevs_operational": 1, 00:17:34.379 "base_bdevs_list": [ 00:17:34.379 { 00:17:34.379 "name": null, 00:17:34.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.379 "is_configured": false, 00:17:34.379 "data_offset": 2048, 00:17:34.379 "data_size": 63488 00:17:34.379 }, 00:17:34.379 { 00:17:34.379 "name": "BaseBdev2", 00:17:34.379 "uuid": "9dc20306-4914-513a-8dca-99ed82e9c0be", 00:17:34.379 "is_configured": true, 00:17:34.379 "data_offset": 2048, 00:17:34.379 "data_size": 63488 00:17:34.379 } 00:17:34.379 ] 00:17:34.379 }' 00:17:34.379 18:44:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:34.379 18:44:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.947 18:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:35.206 [2024-07-25 18:44:35.691030] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:35.206 [2024-07-25 18:44:35.691331] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:35.206 [2024-07-25 18:44:35.693970] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:35.206 [2024-07-25 18:44:35.694152] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.207 [2024-07-25 18:44:35.694237] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:35.207 [2024-07-25 18:44:35.694311] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:17:35.207 0 00:17:35.207 18:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 124554 00:17:35.207 18:44:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 124554 ']' 00:17:35.207 18:44:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 124554 00:17:35.207 18:44:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:17:35.207 18:44:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:35.207 18:44:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 124554 00:17:35.207 18:44:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:35.207 18:44:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:35.207 18:44:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 124554' 00:17:35.207 killing process with pid 124554 00:17:35.207 18:44:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 124554 00:17:35.207 [2024-07-25 18:44:35.761024] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:35.207 18:44:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 124554 00:17:35.466 [2024-07-25 
18:44:35.906893] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:37.369 18:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.vzpUK85elm 00:17:37.369 18:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:17:37.369 18:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:17:37.369 18:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:17:37.369 18:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:17:37.369 18:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:37.369 18:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:37.369 18:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:37.369 00:17:37.369 real 0m7.525s 00:17:37.369 user 0m10.755s 00:17:37.369 sys 0m1.076s 00:17:37.369 18:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:37.369 18:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.369 ************************************ 00:17:37.369 END TEST raid_write_error_test 00:17:37.369 ************************************ 00:17:37.369 18:44:37 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:17:37.369 18:44:37 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:17:37.369 18:44:37 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:17:37.369 18:44:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:37.369 18:44:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:37.369 18:44:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:37.369 ************************************ 00:17:37.369 START TEST raid_state_function_test 00:17:37.369 ************************************ 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 
00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=124749 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 124749' 00:17:37.369 Process raid pid: 124749 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 124749 /var/tmp/spdk-raid.sock 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 124749 ']' 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:37.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:37.369 18:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.369 [2024-07-25 18:44:37.613811] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
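(Note: the state-function test drives the same RPCs through a plain bdev_svc app instead of bdevperf. Its flow, visible in the remainder of this log, is: create a raid0 array named Existed_Raid over three base bdevs that do not exist yet and confirm it sits in the configuring state, then register each malloc bdev in turn and watch num_base_bdevs_discovered climb until the array switches to online. A condensed sketch of that sequence, with names and sizes taken from the trace (the log also shows the array being deleted and re-created between steps, omitted here):

  # raid0, 64 KiB strip, no superblock; base bdevs do not exist yet -> state stays "configuring"
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  # registering the missing bdevs one at a time brings the array up
  rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
  rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
  rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3   # third bdev claimed -> state "online"
)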
00:17:37.369 [2024-07-25 18:44:37.614241] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.369 [2024-07-25 18:44:37.795685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.628 [2024-07-25 18:44:37.992881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.628 [2024-07-25 18:44:38.189426] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:38.195 18:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:38.195 18:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:17:38.195 18:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:38.452 [2024-07-25 18:44:38.789519] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:38.452 [2024-07-25 18:44:38.789826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:38.452 [2024-07-25 18:44:38.789935] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:38.452 [2024-07-25 18:44:38.790039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:38.452 [2024-07-25 18:44:38.790107] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:38.452 [2024-07-25 18:44:38.790154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:38.452 18:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:38.452 18:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:38.452 18:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:38.452 18:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:38.452 18:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:38.452 18:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:38.453 18:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:38.453 18:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:38.453 18:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:38.453 18:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:38.453 18:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.453 18:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.710 18:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:38.710 "name": "Existed_Raid", 00:17:38.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.710 
"strip_size_kb": 64, 00:17:38.710 "state": "configuring", 00:17:38.710 "raid_level": "raid0", 00:17:38.710 "superblock": false, 00:17:38.710 "num_base_bdevs": 3, 00:17:38.710 "num_base_bdevs_discovered": 0, 00:17:38.710 "num_base_bdevs_operational": 3, 00:17:38.710 "base_bdevs_list": [ 00:17:38.710 { 00:17:38.710 "name": "BaseBdev1", 00:17:38.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.710 "is_configured": false, 00:17:38.710 "data_offset": 0, 00:17:38.710 "data_size": 0 00:17:38.710 }, 00:17:38.710 { 00:17:38.710 "name": "BaseBdev2", 00:17:38.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.710 "is_configured": false, 00:17:38.710 "data_offset": 0, 00:17:38.710 "data_size": 0 00:17:38.710 }, 00:17:38.710 { 00:17:38.710 "name": "BaseBdev3", 00:17:38.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.710 "is_configured": false, 00:17:38.710 "data_offset": 0, 00:17:38.710 "data_size": 0 00:17:38.710 } 00:17:38.710 ] 00:17:38.710 }' 00:17:38.710 18:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:38.710 18:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.276 18:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:39.276 [2024-07-25 18:44:39.821639] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:39.276 [2024-07-25 18:44:39.821886] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:17:39.276 18:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:39.533 [2024-07-25 18:44:39.993666] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:39.533 [2024-07-25 18:44:39.993901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:39.533 [2024-07-25 18:44:39.994006] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:39.533 [2024-07-25 18:44:39.994096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:39.533 [2024-07-25 18:44:39.994210] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:39.533 [2024-07-25 18:44:39.994268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:39.533 18:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:39.791 [2024-07-25 18:44:40.208167] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:39.791 BaseBdev1 00:17:39.791 18:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:39.791 18:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:39.791 18:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:39.791 18:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:39.791 18:44:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:39.791 18:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:39.791 18:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:40.049 18:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:40.049 [ 00:17:40.049 { 00:17:40.049 "name": "BaseBdev1", 00:17:40.049 "aliases": [ 00:17:40.049 "12623b89-ba9e-47d6-a884-e2397be8a1bd" 00:17:40.049 ], 00:17:40.049 "product_name": "Malloc disk", 00:17:40.049 "block_size": 512, 00:17:40.049 "num_blocks": 65536, 00:17:40.049 "uuid": "12623b89-ba9e-47d6-a884-e2397be8a1bd", 00:17:40.049 "assigned_rate_limits": { 00:17:40.049 "rw_ios_per_sec": 0, 00:17:40.049 "rw_mbytes_per_sec": 0, 00:17:40.049 "r_mbytes_per_sec": 0, 00:17:40.049 "w_mbytes_per_sec": 0 00:17:40.049 }, 00:17:40.050 "claimed": true, 00:17:40.050 "claim_type": "exclusive_write", 00:17:40.050 "zoned": false, 00:17:40.050 "supported_io_types": { 00:17:40.050 "read": true, 00:17:40.050 "write": true, 00:17:40.050 "unmap": true, 00:17:40.050 "flush": true, 00:17:40.050 "reset": true, 00:17:40.050 "nvme_admin": false, 00:17:40.050 "nvme_io": false, 00:17:40.050 "nvme_io_md": false, 00:17:40.050 "write_zeroes": true, 00:17:40.050 "zcopy": true, 00:17:40.050 "get_zone_info": false, 00:17:40.050 "zone_management": false, 00:17:40.050 "zone_append": false, 00:17:40.050 "compare": false, 00:17:40.050 "compare_and_write": false, 00:17:40.050 "abort": true, 00:17:40.050 "seek_hole": false, 00:17:40.050 "seek_data": false, 00:17:40.050 "copy": true, 00:17:40.050 "nvme_iov_md": false 00:17:40.050 }, 00:17:40.050 "memory_domains": [ 00:17:40.050 { 00:17:40.050 "dma_device_id": "system", 00:17:40.050 "dma_device_type": 1 00:17:40.050 }, 00:17:40.050 { 00:17:40.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.050 "dma_device_type": 2 00:17:40.050 } 00:17:40.050 ], 00:17:40.050 "driver_specific": {} 00:17:40.050 } 00:17:40.050 ] 00:17:40.050 18:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:40.050 18:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:40.050 18:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:40.050 18:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:40.050 18:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:40.050 18:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:40.050 18:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:40.050 18:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:40.050 18:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:40.050 18:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:40.050 18:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:40.050 18:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.050 18:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.308 18:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:40.308 "name": "Existed_Raid", 00:17:40.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.308 "strip_size_kb": 64, 00:17:40.308 "state": "configuring", 00:17:40.308 "raid_level": "raid0", 00:17:40.308 "superblock": false, 00:17:40.308 "num_base_bdevs": 3, 00:17:40.308 "num_base_bdevs_discovered": 1, 00:17:40.308 "num_base_bdevs_operational": 3, 00:17:40.308 "base_bdevs_list": [ 00:17:40.308 { 00:17:40.308 "name": "BaseBdev1", 00:17:40.308 "uuid": "12623b89-ba9e-47d6-a884-e2397be8a1bd", 00:17:40.308 "is_configured": true, 00:17:40.308 "data_offset": 0, 00:17:40.308 "data_size": 65536 00:17:40.308 }, 00:17:40.308 { 00:17:40.308 "name": "BaseBdev2", 00:17:40.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.308 "is_configured": false, 00:17:40.308 "data_offset": 0, 00:17:40.308 "data_size": 0 00:17:40.308 }, 00:17:40.308 { 00:17:40.308 "name": "BaseBdev3", 00:17:40.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.308 "is_configured": false, 00:17:40.308 "data_offset": 0, 00:17:40.308 "data_size": 0 00:17:40.308 } 00:17:40.308 ] 00:17:40.308 }' 00:17:40.308 18:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:40.308 18:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.875 18:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:41.134 [2024-07-25 18:44:41.628496] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:41.134 [2024-07-25 18:44:41.628741] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:17:41.134 18:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:41.392 [2024-07-25 18:44:41.900595] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:41.392 [2024-07-25 18:44:41.903086] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:41.392 [2024-07-25 18:44:41.903289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:41.392 [2024-07-25 18:44:41.903372] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:41.392 [2024-07-25 18:44:41.903449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:41.392 18:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:41.392 18:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:41.392 18:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:41.392 18:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:41.392 18:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=configuring 00:17:41.392 18:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:41.392 18:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:41.392 18:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:41.392 18:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:41.392 18:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:41.392 18:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:41.392 18:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:41.392 18:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.392 18:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.651 18:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:41.651 "name": "Existed_Raid", 00:17:41.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.651 "strip_size_kb": 64, 00:17:41.651 "state": "configuring", 00:17:41.651 "raid_level": "raid0", 00:17:41.651 "superblock": false, 00:17:41.651 "num_base_bdevs": 3, 00:17:41.651 "num_base_bdevs_discovered": 1, 00:17:41.651 "num_base_bdevs_operational": 3, 00:17:41.651 "base_bdevs_list": [ 00:17:41.651 { 00:17:41.651 "name": "BaseBdev1", 00:17:41.651 "uuid": "12623b89-ba9e-47d6-a884-e2397be8a1bd", 00:17:41.651 "is_configured": true, 00:17:41.651 "data_offset": 0, 00:17:41.651 "data_size": 65536 00:17:41.651 }, 00:17:41.651 { 00:17:41.651 "name": "BaseBdev2", 00:17:41.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.651 "is_configured": false, 00:17:41.651 "data_offset": 0, 00:17:41.651 "data_size": 0 00:17:41.651 }, 00:17:41.651 { 00:17:41.651 "name": "BaseBdev3", 00:17:41.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.651 "is_configured": false, 00:17:41.652 "data_offset": 0, 00:17:41.652 "data_size": 0 00:17:41.652 } 00:17:41.652 ] 00:17:41.652 }' 00:17:41.652 18:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:41.652 18:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.587 18:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:42.587 [2024-07-25 18:44:43.158217] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:42.587 BaseBdev2 00:17:42.845 18:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:42.845 18:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:42.845 18:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:42.845 18:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:42.845 18:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:42.845 18:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:42.845 
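(Note: each BaseBdevN created in this test is followed by the waitforbdev helper, which the xtrace shows as a bdev_wait_for_examine call plus a bdev_get_bdevs poll with a 2000 ms timeout, so the raid state is only re-checked once the new base bdev is actually registered. Roughly:

  rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
  rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000   # -t: ms to wait for the bdev to appear
)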
18:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:42.845 18:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:43.103 [ 00:17:43.103 { 00:17:43.103 "name": "BaseBdev2", 00:17:43.103 "aliases": [ 00:17:43.103 "ebf19a16-c176-4dc4-9278-8f0f81475b53" 00:17:43.103 ], 00:17:43.103 "product_name": "Malloc disk", 00:17:43.103 "block_size": 512, 00:17:43.103 "num_blocks": 65536, 00:17:43.103 "uuid": "ebf19a16-c176-4dc4-9278-8f0f81475b53", 00:17:43.103 "assigned_rate_limits": { 00:17:43.103 "rw_ios_per_sec": 0, 00:17:43.103 "rw_mbytes_per_sec": 0, 00:17:43.103 "r_mbytes_per_sec": 0, 00:17:43.103 "w_mbytes_per_sec": 0 00:17:43.103 }, 00:17:43.103 "claimed": true, 00:17:43.103 "claim_type": "exclusive_write", 00:17:43.103 "zoned": false, 00:17:43.103 "supported_io_types": { 00:17:43.103 "read": true, 00:17:43.103 "write": true, 00:17:43.103 "unmap": true, 00:17:43.103 "flush": true, 00:17:43.103 "reset": true, 00:17:43.103 "nvme_admin": false, 00:17:43.103 "nvme_io": false, 00:17:43.103 "nvme_io_md": false, 00:17:43.103 "write_zeroes": true, 00:17:43.103 "zcopy": true, 00:17:43.103 "get_zone_info": false, 00:17:43.103 "zone_management": false, 00:17:43.103 "zone_append": false, 00:17:43.103 "compare": false, 00:17:43.103 "compare_and_write": false, 00:17:43.103 "abort": true, 00:17:43.103 "seek_hole": false, 00:17:43.103 "seek_data": false, 00:17:43.103 "copy": true, 00:17:43.103 "nvme_iov_md": false 00:17:43.103 }, 00:17:43.103 "memory_domains": [ 00:17:43.103 { 00:17:43.103 "dma_device_id": "system", 00:17:43.103 "dma_device_type": 1 00:17:43.103 }, 00:17:43.103 { 00:17:43.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.103 "dma_device_type": 2 00:17:43.103 } 00:17:43.103 ], 00:17:43.103 "driver_specific": {} 00:17:43.103 } 00:17:43.103 ] 00:17:43.103 18:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:43.103 18:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:43.103 18:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:43.103 18:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:43.103 18:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:43.103 18:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:43.103 18:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:43.103 18:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:43.103 18:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:43.103 18:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:43.103 18:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:43.103 18:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:43.103 18:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:43.103 18:44:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.103 18:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.361 18:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:43.361 "name": "Existed_Raid", 00:17:43.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.361 "strip_size_kb": 64, 00:17:43.361 "state": "configuring", 00:17:43.361 "raid_level": "raid0", 00:17:43.361 "superblock": false, 00:17:43.361 "num_base_bdevs": 3, 00:17:43.361 "num_base_bdevs_discovered": 2, 00:17:43.361 "num_base_bdevs_operational": 3, 00:17:43.361 "base_bdevs_list": [ 00:17:43.361 { 00:17:43.361 "name": "BaseBdev1", 00:17:43.361 "uuid": "12623b89-ba9e-47d6-a884-e2397be8a1bd", 00:17:43.361 "is_configured": true, 00:17:43.361 "data_offset": 0, 00:17:43.361 "data_size": 65536 00:17:43.361 }, 00:17:43.361 { 00:17:43.361 "name": "BaseBdev2", 00:17:43.361 "uuid": "ebf19a16-c176-4dc4-9278-8f0f81475b53", 00:17:43.361 "is_configured": true, 00:17:43.361 "data_offset": 0, 00:17:43.361 "data_size": 65536 00:17:43.361 }, 00:17:43.361 { 00:17:43.361 "name": "BaseBdev3", 00:17:43.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.361 "is_configured": false, 00:17:43.361 "data_offset": 0, 00:17:43.361 "data_size": 0 00:17:43.361 } 00:17:43.361 ] 00:17:43.361 }' 00:17:43.361 18:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:43.361 18:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.928 18:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:43.928 [2024-07-25 18:44:44.482804] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:43.928 [2024-07-25 18:44:44.483121] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:17:43.928 [2024-07-25 18:44:44.483162] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:43.928 [2024-07-25 18:44:44.483390] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:17:43.928 [2024-07-25 18:44:44.483882] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:17:43.928 [2024-07-25 18:44:44.483991] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:17:43.928 [2024-07-25 18:44:44.484333] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.928 BaseBdev3 00:17:43.928 18:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:43.928 18:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:43.928 18:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:43.928 18:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:43.928 18:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:43.928 18:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:43.928 18:44:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:44.186 18:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:44.445 [ 00:17:44.445 { 00:17:44.445 "name": "BaseBdev3", 00:17:44.445 "aliases": [ 00:17:44.445 "24b6f6d8-9a4b-46bd-9637-e3aca88d490f" 00:17:44.445 ], 00:17:44.445 "product_name": "Malloc disk", 00:17:44.445 "block_size": 512, 00:17:44.445 "num_blocks": 65536, 00:17:44.445 "uuid": "24b6f6d8-9a4b-46bd-9637-e3aca88d490f", 00:17:44.445 "assigned_rate_limits": { 00:17:44.445 "rw_ios_per_sec": 0, 00:17:44.445 "rw_mbytes_per_sec": 0, 00:17:44.445 "r_mbytes_per_sec": 0, 00:17:44.445 "w_mbytes_per_sec": 0 00:17:44.445 }, 00:17:44.445 "claimed": true, 00:17:44.445 "claim_type": "exclusive_write", 00:17:44.445 "zoned": false, 00:17:44.445 "supported_io_types": { 00:17:44.445 "read": true, 00:17:44.445 "write": true, 00:17:44.445 "unmap": true, 00:17:44.445 "flush": true, 00:17:44.445 "reset": true, 00:17:44.445 "nvme_admin": false, 00:17:44.445 "nvme_io": false, 00:17:44.445 "nvme_io_md": false, 00:17:44.445 "write_zeroes": true, 00:17:44.445 "zcopy": true, 00:17:44.445 "get_zone_info": false, 00:17:44.445 "zone_management": false, 00:17:44.445 "zone_append": false, 00:17:44.445 "compare": false, 00:17:44.445 "compare_and_write": false, 00:17:44.445 "abort": true, 00:17:44.445 "seek_hole": false, 00:17:44.445 "seek_data": false, 00:17:44.445 "copy": true, 00:17:44.445 "nvme_iov_md": false 00:17:44.445 }, 00:17:44.445 "memory_domains": [ 00:17:44.445 { 00:17:44.446 "dma_device_id": "system", 00:17:44.446 "dma_device_type": 1 00:17:44.446 }, 00:17:44.446 { 00:17:44.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.446 "dma_device_type": 2 00:17:44.446 } 00:17:44.446 ], 00:17:44.446 "driver_specific": {} 00:17:44.446 } 00:17:44.446 ] 00:17:44.446 18:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:44.446 18:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:44.446 18:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:44.446 18:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:44.446 18:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:44.446 18:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:44.446 18:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:44.446 18:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:44.446 18:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:44.446 18:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:44.446 18:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:44.446 18:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:44.446 18:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:44.446 18:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.446 18:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.704 18:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:44.704 "name": "Existed_Raid", 00:17:44.704 "uuid": "0d4b63b9-eafd-4da8-8af5-3ee0d2ec961d", 00:17:44.704 "strip_size_kb": 64, 00:17:44.704 "state": "online", 00:17:44.704 "raid_level": "raid0", 00:17:44.704 "superblock": false, 00:17:44.704 "num_base_bdevs": 3, 00:17:44.704 "num_base_bdevs_discovered": 3, 00:17:44.704 "num_base_bdevs_operational": 3, 00:17:44.704 "base_bdevs_list": [ 00:17:44.704 { 00:17:44.704 "name": "BaseBdev1", 00:17:44.704 "uuid": "12623b89-ba9e-47d6-a884-e2397be8a1bd", 00:17:44.704 "is_configured": true, 00:17:44.704 "data_offset": 0, 00:17:44.704 "data_size": 65536 00:17:44.704 }, 00:17:44.704 { 00:17:44.704 "name": "BaseBdev2", 00:17:44.704 "uuid": "ebf19a16-c176-4dc4-9278-8f0f81475b53", 00:17:44.704 "is_configured": true, 00:17:44.704 "data_offset": 0, 00:17:44.704 "data_size": 65536 00:17:44.704 }, 00:17:44.704 { 00:17:44.704 "name": "BaseBdev3", 00:17:44.704 "uuid": "24b6f6d8-9a4b-46bd-9637-e3aca88d490f", 00:17:44.704 "is_configured": true, 00:17:44.704 "data_offset": 0, 00:17:44.704 "data_size": 65536 00:17:44.704 } 00:17:44.704 ] 00:17:44.704 }' 00:17:44.704 18:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:44.704 18:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.271 18:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:45.271 18:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:45.271 18:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:45.271 18:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:45.271 18:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:45.271 18:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:45.271 18:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:45.271 18:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:45.271 [2024-07-25 18:44:45.739480] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:45.271 18:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:45.271 "name": "Existed_Raid", 00:17:45.271 "aliases": [ 00:17:45.271 "0d4b63b9-eafd-4da8-8af5-3ee0d2ec961d" 00:17:45.271 ], 00:17:45.271 "product_name": "Raid Volume", 00:17:45.271 "block_size": 512, 00:17:45.271 "num_blocks": 196608, 00:17:45.271 "uuid": "0d4b63b9-eafd-4da8-8af5-3ee0d2ec961d", 00:17:45.271 "assigned_rate_limits": { 00:17:45.271 "rw_ios_per_sec": 0, 00:17:45.271 "rw_mbytes_per_sec": 0, 00:17:45.271 "r_mbytes_per_sec": 0, 00:17:45.271 "w_mbytes_per_sec": 0 00:17:45.271 }, 00:17:45.271 "claimed": false, 00:17:45.271 "zoned": false, 00:17:45.271 "supported_io_types": { 00:17:45.271 "read": true, 00:17:45.271 "write": true, 00:17:45.271 "unmap": true, 00:17:45.271 "flush": true, 00:17:45.271 "reset": true, 
00:17:45.271 "nvme_admin": false, 00:17:45.271 "nvme_io": false, 00:17:45.271 "nvme_io_md": false, 00:17:45.271 "write_zeroes": true, 00:17:45.271 "zcopy": false, 00:17:45.271 "get_zone_info": false, 00:17:45.271 "zone_management": false, 00:17:45.271 "zone_append": false, 00:17:45.271 "compare": false, 00:17:45.271 "compare_and_write": false, 00:17:45.271 "abort": false, 00:17:45.271 "seek_hole": false, 00:17:45.271 "seek_data": false, 00:17:45.271 "copy": false, 00:17:45.271 "nvme_iov_md": false 00:17:45.271 }, 00:17:45.271 "memory_domains": [ 00:17:45.271 { 00:17:45.271 "dma_device_id": "system", 00:17:45.271 "dma_device_type": 1 00:17:45.271 }, 00:17:45.271 { 00:17:45.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.271 "dma_device_type": 2 00:17:45.271 }, 00:17:45.271 { 00:17:45.271 "dma_device_id": "system", 00:17:45.271 "dma_device_type": 1 00:17:45.271 }, 00:17:45.271 { 00:17:45.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.271 "dma_device_type": 2 00:17:45.271 }, 00:17:45.271 { 00:17:45.271 "dma_device_id": "system", 00:17:45.271 "dma_device_type": 1 00:17:45.271 }, 00:17:45.271 { 00:17:45.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.271 "dma_device_type": 2 00:17:45.271 } 00:17:45.271 ], 00:17:45.271 "driver_specific": { 00:17:45.271 "raid": { 00:17:45.271 "uuid": "0d4b63b9-eafd-4da8-8af5-3ee0d2ec961d", 00:17:45.271 "strip_size_kb": 64, 00:17:45.271 "state": "online", 00:17:45.271 "raid_level": "raid0", 00:17:45.271 "superblock": false, 00:17:45.271 "num_base_bdevs": 3, 00:17:45.271 "num_base_bdevs_discovered": 3, 00:17:45.271 "num_base_bdevs_operational": 3, 00:17:45.271 "base_bdevs_list": [ 00:17:45.271 { 00:17:45.271 "name": "BaseBdev1", 00:17:45.271 "uuid": "12623b89-ba9e-47d6-a884-e2397be8a1bd", 00:17:45.271 "is_configured": true, 00:17:45.271 "data_offset": 0, 00:17:45.271 "data_size": 65536 00:17:45.271 }, 00:17:45.271 { 00:17:45.271 "name": "BaseBdev2", 00:17:45.271 "uuid": "ebf19a16-c176-4dc4-9278-8f0f81475b53", 00:17:45.271 "is_configured": true, 00:17:45.271 "data_offset": 0, 00:17:45.271 "data_size": 65536 00:17:45.271 }, 00:17:45.271 { 00:17:45.271 "name": "BaseBdev3", 00:17:45.271 "uuid": "24b6f6d8-9a4b-46bd-9637-e3aca88d490f", 00:17:45.271 "is_configured": true, 00:17:45.271 "data_offset": 0, 00:17:45.271 "data_size": 65536 00:17:45.271 } 00:17:45.271 ] 00:17:45.271 } 00:17:45.271 } 00:17:45.271 }' 00:17:45.271 18:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:45.271 18:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:45.271 BaseBdev2 00:17:45.271 BaseBdev3' 00:17:45.271 18:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:45.271 18:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:45.271 18:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:45.530 18:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:45.530 "name": "BaseBdev1", 00:17:45.530 "aliases": [ 00:17:45.530 "12623b89-ba9e-47d6-a884-e2397be8a1bd" 00:17:45.530 ], 00:17:45.530 "product_name": "Malloc disk", 00:17:45.530 "block_size": 512, 00:17:45.530 "num_blocks": 65536, 00:17:45.530 "uuid": "12623b89-ba9e-47d6-a884-e2397be8a1bd", 00:17:45.530 
"assigned_rate_limits": { 00:17:45.530 "rw_ios_per_sec": 0, 00:17:45.530 "rw_mbytes_per_sec": 0, 00:17:45.530 "r_mbytes_per_sec": 0, 00:17:45.530 "w_mbytes_per_sec": 0 00:17:45.530 }, 00:17:45.530 "claimed": true, 00:17:45.530 "claim_type": "exclusive_write", 00:17:45.530 "zoned": false, 00:17:45.530 "supported_io_types": { 00:17:45.530 "read": true, 00:17:45.530 "write": true, 00:17:45.530 "unmap": true, 00:17:45.530 "flush": true, 00:17:45.530 "reset": true, 00:17:45.530 "nvme_admin": false, 00:17:45.530 "nvme_io": false, 00:17:45.530 "nvme_io_md": false, 00:17:45.530 "write_zeroes": true, 00:17:45.530 "zcopy": true, 00:17:45.530 "get_zone_info": false, 00:17:45.530 "zone_management": false, 00:17:45.530 "zone_append": false, 00:17:45.530 "compare": false, 00:17:45.530 "compare_and_write": false, 00:17:45.530 "abort": true, 00:17:45.530 "seek_hole": false, 00:17:45.530 "seek_data": false, 00:17:45.530 "copy": true, 00:17:45.530 "nvme_iov_md": false 00:17:45.530 }, 00:17:45.530 "memory_domains": [ 00:17:45.530 { 00:17:45.530 "dma_device_id": "system", 00:17:45.530 "dma_device_type": 1 00:17:45.530 }, 00:17:45.530 { 00:17:45.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.530 "dma_device_type": 2 00:17:45.530 } 00:17:45.530 ], 00:17:45.530 "driver_specific": {} 00:17:45.530 }' 00:17:45.530 18:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:45.530 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:45.530 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:45.530 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:45.789 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:45.789 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:45.789 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:45.789 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:45.789 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:45.789 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:45.789 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:45.789 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:45.789 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:45.789 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:45.789 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:46.048 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:46.048 "name": "BaseBdev2", 00:17:46.048 "aliases": [ 00:17:46.048 "ebf19a16-c176-4dc4-9278-8f0f81475b53" 00:17:46.048 ], 00:17:46.048 "product_name": "Malloc disk", 00:17:46.048 "block_size": 512, 00:17:46.048 "num_blocks": 65536, 00:17:46.048 "uuid": "ebf19a16-c176-4dc4-9278-8f0f81475b53", 00:17:46.048 "assigned_rate_limits": { 00:17:46.048 "rw_ios_per_sec": 0, 00:17:46.048 "rw_mbytes_per_sec": 0, 00:17:46.048 "r_mbytes_per_sec": 0, 00:17:46.048 "w_mbytes_per_sec": 0 00:17:46.048 }, 00:17:46.048 
"claimed": true, 00:17:46.048 "claim_type": "exclusive_write", 00:17:46.048 "zoned": false, 00:17:46.048 "supported_io_types": { 00:17:46.048 "read": true, 00:17:46.048 "write": true, 00:17:46.048 "unmap": true, 00:17:46.048 "flush": true, 00:17:46.048 "reset": true, 00:17:46.048 "nvme_admin": false, 00:17:46.048 "nvme_io": false, 00:17:46.048 "nvme_io_md": false, 00:17:46.048 "write_zeroes": true, 00:17:46.048 "zcopy": true, 00:17:46.048 "get_zone_info": false, 00:17:46.048 "zone_management": false, 00:17:46.048 "zone_append": false, 00:17:46.048 "compare": false, 00:17:46.048 "compare_and_write": false, 00:17:46.048 "abort": true, 00:17:46.048 "seek_hole": false, 00:17:46.048 "seek_data": false, 00:17:46.048 "copy": true, 00:17:46.048 "nvme_iov_md": false 00:17:46.048 }, 00:17:46.048 "memory_domains": [ 00:17:46.048 { 00:17:46.048 "dma_device_id": "system", 00:17:46.048 "dma_device_type": 1 00:17:46.048 }, 00:17:46.048 { 00:17:46.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.048 "dma_device_type": 2 00:17:46.048 } 00:17:46.048 ], 00:17:46.048 "driver_specific": {} 00:17:46.048 }' 00:17:46.048 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:46.306 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:46.306 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:46.306 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:46.306 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:46.306 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:46.306 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:46.306 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:46.306 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:46.306 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:46.565 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:46.565 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:46.565 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:46.565 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:46.565 18:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:46.565 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:46.565 "name": "BaseBdev3", 00:17:46.565 "aliases": [ 00:17:46.565 "24b6f6d8-9a4b-46bd-9637-e3aca88d490f" 00:17:46.565 ], 00:17:46.565 "product_name": "Malloc disk", 00:17:46.565 "block_size": 512, 00:17:46.565 "num_blocks": 65536, 00:17:46.565 "uuid": "24b6f6d8-9a4b-46bd-9637-e3aca88d490f", 00:17:46.565 "assigned_rate_limits": { 00:17:46.565 "rw_ios_per_sec": 0, 00:17:46.565 "rw_mbytes_per_sec": 0, 00:17:46.565 "r_mbytes_per_sec": 0, 00:17:46.565 "w_mbytes_per_sec": 0 00:17:46.565 }, 00:17:46.565 "claimed": true, 00:17:46.565 "claim_type": "exclusive_write", 00:17:46.565 "zoned": false, 00:17:46.565 "supported_io_types": { 00:17:46.565 "read": true, 00:17:46.565 "write": true, 00:17:46.565 
"unmap": true, 00:17:46.565 "flush": true, 00:17:46.565 "reset": true, 00:17:46.565 "nvme_admin": false, 00:17:46.565 "nvme_io": false, 00:17:46.565 "nvme_io_md": false, 00:17:46.565 "write_zeroes": true, 00:17:46.565 "zcopy": true, 00:17:46.565 "get_zone_info": false, 00:17:46.565 "zone_management": false, 00:17:46.565 "zone_append": false, 00:17:46.565 "compare": false, 00:17:46.565 "compare_and_write": false, 00:17:46.565 "abort": true, 00:17:46.565 "seek_hole": false, 00:17:46.565 "seek_data": false, 00:17:46.565 "copy": true, 00:17:46.565 "nvme_iov_md": false 00:17:46.565 }, 00:17:46.565 "memory_domains": [ 00:17:46.565 { 00:17:46.565 "dma_device_id": "system", 00:17:46.565 "dma_device_type": 1 00:17:46.565 }, 00:17:46.565 { 00:17:46.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.565 "dma_device_type": 2 00:17:46.565 } 00:17:46.565 ], 00:17:46.565 "driver_specific": {} 00:17:46.565 }' 00:17:46.565 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:46.824 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:46.824 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:46.824 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:46.824 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:46.824 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:46.824 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:46.824 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:46.824 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:46.824 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:47.083 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:47.083 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:47.083 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:47.350 [2024-07-25 18:44:47.751694] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:47.350 [2024-07-25 18:44:47.751888] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:47.350 [2024-07-25 18:44:47.752067] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:47.350 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:47.350 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:17:47.350 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:47.350 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:47.351 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:47.351 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:17:47.351 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:47.351 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=offline 00:17:47.351 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:47.351 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:47.351 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:47.351 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:47.351 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:47.351 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:47.351 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:47.351 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.351 18:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.662 18:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:47.662 "name": "Existed_Raid", 00:17:47.662 "uuid": "0d4b63b9-eafd-4da8-8af5-3ee0d2ec961d", 00:17:47.662 "strip_size_kb": 64, 00:17:47.662 "state": "offline", 00:17:47.662 "raid_level": "raid0", 00:17:47.662 "superblock": false, 00:17:47.662 "num_base_bdevs": 3, 00:17:47.662 "num_base_bdevs_discovered": 2, 00:17:47.662 "num_base_bdevs_operational": 2, 00:17:47.662 "base_bdevs_list": [ 00:17:47.662 { 00:17:47.662 "name": null, 00:17:47.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.662 "is_configured": false, 00:17:47.662 "data_offset": 0, 00:17:47.662 "data_size": 65536 00:17:47.662 }, 00:17:47.662 { 00:17:47.662 "name": "BaseBdev2", 00:17:47.662 "uuid": "ebf19a16-c176-4dc4-9278-8f0f81475b53", 00:17:47.662 "is_configured": true, 00:17:47.662 "data_offset": 0, 00:17:47.662 "data_size": 65536 00:17:47.662 }, 00:17:47.662 { 00:17:47.662 "name": "BaseBdev3", 00:17:47.662 "uuid": "24b6f6d8-9a4b-46bd-9637-e3aca88d490f", 00:17:47.662 "is_configured": true, 00:17:47.662 "data_offset": 0, 00:17:47.662 "data_size": 65536 00:17:47.662 } 00:17:47.662 ] 00:17:47.662 }' 00:17:47.662 18:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:47.662 18:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.230 18:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:48.230 18:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:48.230 18:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.230 18:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:48.488 18:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:48.488 18:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:48.488 18:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:48.746 [2024-07-25 18:44:49.158499] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
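The trace above and below exercises SPDK's raid_state_function_test: it creates malloc base bdevs over the /var/tmp/spdk-raid.sock RPC socket, assembles them into a raid0 volume named Existed_Raid, and then deletes base bdevs to verify the configuring -> online -> offline state transitions reported by bdev_raid_get_bdevs. A minimal standalone sketch of that RPC sequence, assuming an SPDK target is already listening on the same socket (paths, bdev names, and sizes taken from the log), looks like:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# create three 32 MiB malloc bdevs with a 512-byte block size
$rpc bdev_malloc_create 32 512 -b BaseBdev1
$rpc bdev_malloc_create 32 512 -b BaseBdev2
$rpc bdev_malloc_create 32 512 -b BaseBdev3
$rpc bdev_wait_for_examine

# assemble them into a raid0 volume with a 64 KiB strip size
$rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

# query the raid state the same way the test does (expect "online" here)
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

# raid0 has no redundancy, so deleting one base bdev takes the volume offline
$rpc bdev_malloc_delete BaseBdev1
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'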
00:17:48.746 18:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:48.746 18:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:48.746 18:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.746 18:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:49.004 18:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:49.004 18:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:49.004 18:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:49.262 [2024-07-25 18:44:49.727601] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:49.262 [2024-07-25 18:44:49.727878] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:17:49.262 18:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:49.262 18:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:49.262 18:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.262 18:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:49.521 18:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:49.521 18:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:49.521 18:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:17:49.521 18:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:49.521 18:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:49.521 18:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:49.780 BaseBdev2 00:17:49.780 18:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:49.780 18:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:49.780 18:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:49.780 18:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:49.780 18:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:49.780 18:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:49.780 18:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:50.039 18:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:50.297 [ 00:17:50.297 { 00:17:50.297 "name": "BaseBdev2", 
00:17:50.297 "aliases": [ 00:17:50.297 "a36d8055-c232-42db-8e3a-1b8743e2916a" 00:17:50.297 ], 00:17:50.297 "product_name": "Malloc disk", 00:17:50.297 "block_size": 512, 00:17:50.297 "num_blocks": 65536, 00:17:50.298 "uuid": "a36d8055-c232-42db-8e3a-1b8743e2916a", 00:17:50.298 "assigned_rate_limits": { 00:17:50.298 "rw_ios_per_sec": 0, 00:17:50.298 "rw_mbytes_per_sec": 0, 00:17:50.298 "r_mbytes_per_sec": 0, 00:17:50.298 "w_mbytes_per_sec": 0 00:17:50.298 }, 00:17:50.298 "claimed": false, 00:17:50.298 "zoned": false, 00:17:50.298 "supported_io_types": { 00:17:50.298 "read": true, 00:17:50.298 "write": true, 00:17:50.298 "unmap": true, 00:17:50.298 "flush": true, 00:17:50.298 "reset": true, 00:17:50.298 "nvme_admin": false, 00:17:50.298 "nvme_io": false, 00:17:50.298 "nvme_io_md": false, 00:17:50.298 "write_zeroes": true, 00:17:50.298 "zcopy": true, 00:17:50.298 "get_zone_info": false, 00:17:50.298 "zone_management": false, 00:17:50.298 "zone_append": false, 00:17:50.298 "compare": false, 00:17:50.298 "compare_and_write": false, 00:17:50.298 "abort": true, 00:17:50.298 "seek_hole": false, 00:17:50.298 "seek_data": false, 00:17:50.298 "copy": true, 00:17:50.298 "nvme_iov_md": false 00:17:50.298 }, 00:17:50.298 "memory_domains": [ 00:17:50.298 { 00:17:50.298 "dma_device_id": "system", 00:17:50.298 "dma_device_type": 1 00:17:50.298 }, 00:17:50.298 { 00:17:50.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.298 "dma_device_type": 2 00:17:50.298 } 00:17:50.298 ], 00:17:50.298 "driver_specific": {} 00:17:50.298 } 00:17:50.298 ] 00:17:50.298 18:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:50.298 18:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:50.298 18:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:50.298 18:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:50.556 BaseBdev3 00:17:50.556 18:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:50.556 18:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:50.556 18:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:50.556 18:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:50.556 18:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:50.556 18:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:50.556 18:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:50.814 18:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:50.814 [ 00:17:50.814 { 00:17:50.814 "name": "BaseBdev3", 00:17:50.814 "aliases": [ 00:17:50.814 "086a67c1-7307-4b17-bfcd-4b24bcb9fdca" 00:17:50.814 ], 00:17:50.814 "product_name": "Malloc disk", 00:17:50.814 "block_size": 512, 00:17:50.814 "num_blocks": 65536, 00:17:50.814 "uuid": "086a67c1-7307-4b17-bfcd-4b24bcb9fdca", 00:17:50.814 "assigned_rate_limits": { 00:17:50.814 "rw_ios_per_sec": 0, 
00:17:50.814 "rw_mbytes_per_sec": 0, 00:17:50.814 "r_mbytes_per_sec": 0, 00:17:50.814 "w_mbytes_per_sec": 0 00:17:50.814 }, 00:17:50.814 "claimed": false, 00:17:50.814 "zoned": false, 00:17:50.814 "supported_io_types": { 00:17:50.814 "read": true, 00:17:50.814 "write": true, 00:17:50.814 "unmap": true, 00:17:50.814 "flush": true, 00:17:50.814 "reset": true, 00:17:50.814 "nvme_admin": false, 00:17:50.814 "nvme_io": false, 00:17:50.814 "nvme_io_md": false, 00:17:50.814 "write_zeroes": true, 00:17:50.814 "zcopy": true, 00:17:50.814 "get_zone_info": false, 00:17:50.814 "zone_management": false, 00:17:50.814 "zone_append": false, 00:17:50.814 "compare": false, 00:17:50.814 "compare_and_write": false, 00:17:50.814 "abort": true, 00:17:50.814 "seek_hole": false, 00:17:50.814 "seek_data": false, 00:17:50.814 "copy": true, 00:17:50.814 "nvme_iov_md": false 00:17:50.814 }, 00:17:50.814 "memory_domains": [ 00:17:50.814 { 00:17:50.814 "dma_device_id": "system", 00:17:50.814 "dma_device_type": 1 00:17:50.814 }, 00:17:50.814 { 00:17:50.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.815 "dma_device_type": 2 00:17:50.815 } 00:17:50.815 ], 00:17:50.815 "driver_specific": {} 00:17:50.815 } 00:17:50.815 ] 00:17:50.815 18:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:50.815 18:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:50.815 18:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:50.815 18:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:51.073 [2024-07-25 18:44:51.533208] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:51.073 [2024-07-25 18:44:51.533444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:51.073 [2024-07-25 18:44:51.533593] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:51.073 [2024-07-25 18:44:51.535902] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:51.073 18:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:51.073 18:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:51.073 18:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:51.073 18:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:51.073 18:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:51.073 18:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:51.073 18:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:51.073 18:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:51.073 18:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:51.073 18:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:51.073 18:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.073 18:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.332 18:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:51.332 "name": "Existed_Raid", 00:17:51.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.332 "strip_size_kb": 64, 00:17:51.332 "state": "configuring", 00:17:51.332 "raid_level": "raid0", 00:17:51.332 "superblock": false, 00:17:51.332 "num_base_bdevs": 3, 00:17:51.332 "num_base_bdevs_discovered": 2, 00:17:51.332 "num_base_bdevs_operational": 3, 00:17:51.332 "base_bdevs_list": [ 00:17:51.332 { 00:17:51.332 "name": "BaseBdev1", 00:17:51.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.332 "is_configured": false, 00:17:51.332 "data_offset": 0, 00:17:51.332 "data_size": 0 00:17:51.332 }, 00:17:51.332 { 00:17:51.332 "name": "BaseBdev2", 00:17:51.332 "uuid": "a36d8055-c232-42db-8e3a-1b8743e2916a", 00:17:51.332 "is_configured": true, 00:17:51.332 "data_offset": 0, 00:17:51.332 "data_size": 65536 00:17:51.332 }, 00:17:51.332 { 00:17:51.332 "name": "BaseBdev3", 00:17:51.332 "uuid": "086a67c1-7307-4b17-bfcd-4b24bcb9fdca", 00:17:51.332 "is_configured": true, 00:17:51.332 "data_offset": 0, 00:17:51.332 "data_size": 65536 00:17:51.332 } 00:17:51.332 ] 00:17:51.332 }' 00:17:51.332 18:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:51.332 18:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.899 18:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:51.899 [2024-07-25 18:44:52.461380] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:52.157 18:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:52.157 18:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:52.157 18:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:52.157 18:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:52.157 18:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:52.157 18:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:52.157 18:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:52.157 18:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:52.157 18:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:52.157 18:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:52.157 18:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.157 18:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.416 18:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:52.416 "name": "Existed_Raid", 
00:17:52.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.416 "strip_size_kb": 64, 00:17:52.416 "state": "configuring", 00:17:52.416 "raid_level": "raid0", 00:17:52.416 "superblock": false, 00:17:52.416 "num_base_bdevs": 3, 00:17:52.416 "num_base_bdevs_discovered": 1, 00:17:52.416 "num_base_bdevs_operational": 3, 00:17:52.416 "base_bdevs_list": [ 00:17:52.416 { 00:17:52.416 "name": "BaseBdev1", 00:17:52.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.416 "is_configured": false, 00:17:52.416 "data_offset": 0, 00:17:52.416 "data_size": 0 00:17:52.416 }, 00:17:52.416 { 00:17:52.416 "name": null, 00:17:52.416 "uuid": "a36d8055-c232-42db-8e3a-1b8743e2916a", 00:17:52.416 "is_configured": false, 00:17:52.416 "data_offset": 0, 00:17:52.416 "data_size": 65536 00:17:52.416 }, 00:17:52.416 { 00:17:52.416 "name": "BaseBdev3", 00:17:52.416 "uuid": "086a67c1-7307-4b17-bfcd-4b24bcb9fdca", 00:17:52.416 "is_configured": true, 00:17:52.416 "data_offset": 0, 00:17:52.416 "data_size": 65536 00:17:52.416 } 00:17:52.416 ] 00:17:52.416 }' 00:17:52.416 18:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:52.416 18:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.982 18:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.982 18:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:52.982 18:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:52.982 18:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:53.241 [2024-07-25 18:44:53.694719] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:53.241 BaseBdev1 00:17:53.241 18:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:53.241 18:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:53.241 18:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:53.241 18:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:53.241 18:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:53.241 18:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:53.241 18:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:53.499 18:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:53.757 [ 00:17:53.757 { 00:17:53.757 "name": "BaseBdev1", 00:17:53.757 "aliases": [ 00:17:53.757 "a461df3f-ae78-4028-926c-2dbc5e203b2f" 00:17:53.757 ], 00:17:53.757 "product_name": "Malloc disk", 00:17:53.757 "block_size": 512, 00:17:53.757 "num_blocks": 65536, 00:17:53.757 "uuid": "a461df3f-ae78-4028-926c-2dbc5e203b2f", 00:17:53.757 "assigned_rate_limits": { 00:17:53.757 "rw_ios_per_sec": 0, 00:17:53.757 "rw_mbytes_per_sec": 0, 00:17:53.757 
"r_mbytes_per_sec": 0, 00:17:53.757 "w_mbytes_per_sec": 0 00:17:53.757 }, 00:17:53.757 "claimed": true, 00:17:53.757 "claim_type": "exclusive_write", 00:17:53.757 "zoned": false, 00:17:53.757 "supported_io_types": { 00:17:53.757 "read": true, 00:17:53.757 "write": true, 00:17:53.757 "unmap": true, 00:17:53.757 "flush": true, 00:17:53.757 "reset": true, 00:17:53.757 "nvme_admin": false, 00:17:53.757 "nvme_io": false, 00:17:53.757 "nvme_io_md": false, 00:17:53.757 "write_zeroes": true, 00:17:53.757 "zcopy": true, 00:17:53.757 "get_zone_info": false, 00:17:53.757 "zone_management": false, 00:17:53.757 "zone_append": false, 00:17:53.757 "compare": false, 00:17:53.757 "compare_and_write": false, 00:17:53.757 "abort": true, 00:17:53.757 "seek_hole": false, 00:17:53.757 "seek_data": false, 00:17:53.757 "copy": true, 00:17:53.757 "nvme_iov_md": false 00:17:53.757 }, 00:17:53.757 "memory_domains": [ 00:17:53.758 { 00:17:53.758 "dma_device_id": "system", 00:17:53.758 "dma_device_type": 1 00:17:53.758 }, 00:17:53.758 { 00:17:53.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.758 "dma_device_type": 2 00:17:53.758 } 00:17:53.758 ], 00:17:53.758 "driver_specific": {} 00:17:53.758 } 00:17:53.758 ] 00:17:53.758 18:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:53.758 18:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:53.758 18:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:53.758 18:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:53.758 18:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:53.758 18:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:53.758 18:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:53.758 18:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.758 18:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.758 18:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:53.758 18:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.758 18:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.758 18:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.016 18:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:54.016 "name": "Existed_Raid", 00:17:54.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.016 "strip_size_kb": 64, 00:17:54.016 "state": "configuring", 00:17:54.016 "raid_level": "raid0", 00:17:54.016 "superblock": false, 00:17:54.016 "num_base_bdevs": 3, 00:17:54.016 "num_base_bdevs_discovered": 2, 00:17:54.016 "num_base_bdevs_operational": 3, 00:17:54.016 "base_bdevs_list": [ 00:17:54.016 { 00:17:54.016 "name": "BaseBdev1", 00:17:54.016 "uuid": "a461df3f-ae78-4028-926c-2dbc5e203b2f", 00:17:54.016 "is_configured": true, 00:17:54.016 "data_offset": 0, 00:17:54.016 "data_size": 65536 00:17:54.016 }, 00:17:54.016 { 00:17:54.016 "name": 
null, 00:17:54.016 "uuid": "a36d8055-c232-42db-8e3a-1b8743e2916a", 00:17:54.016 "is_configured": false, 00:17:54.016 "data_offset": 0, 00:17:54.016 "data_size": 65536 00:17:54.016 }, 00:17:54.016 { 00:17:54.016 "name": "BaseBdev3", 00:17:54.016 "uuid": "086a67c1-7307-4b17-bfcd-4b24bcb9fdca", 00:17:54.016 "is_configured": true, 00:17:54.016 "data_offset": 0, 00:17:54.016 "data_size": 65536 00:17:54.016 } 00:17:54.016 ] 00:17:54.016 }' 00:17:54.016 18:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:54.016 18:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.275 18:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.275 18:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:54.533 18:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:54.533 18:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:54.792 [2024-07-25 18:44:55.263126] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:54.792 18:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:54.792 18:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:54.792 18:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:54.792 18:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:54.792 18:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:54.792 18:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:54.792 18:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:54.792 18:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:54.792 18:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:54.792 18:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:54.792 18:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.792 18:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.051 18:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:55.051 "name": "Existed_Raid", 00:17:55.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.051 "strip_size_kb": 64, 00:17:55.051 "state": "configuring", 00:17:55.051 "raid_level": "raid0", 00:17:55.051 "superblock": false, 00:17:55.051 "num_base_bdevs": 3, 00:17:55.051 "num_base_bdevs_discovered": 1, 00:17:55.051 "num_base_bdevs_operational": 3, 00:17:55.051 "base_bdevs_list": [ 00:17:55.051 { 00:17:55.051 "name": "BaseBdev1", 00:17:55.051 "uuid": "a461df3f-ae78-4028-926c-2dbc5e203b2f", 00:17:55.051 "is_configured": true, 00:17:55.051 "data_offset": 0, 00:17:55.051 "data_size": 65536 
00:17:55.051 }, 00:17:55.051 { 00:17:55.051 "name": null, 00:17:55.051 "uuid": "a36d8055-c232-42db-8e3a-1b8743e2916a", 00:17:55.051 "is_configured": false, 00:17:55.051 "data_offset": 0, 00:17:55.051 "data_size": 65536 00:17:55.051 }, 00:17:55.051 { 00:17:55.051 "name": null, 00:17:55.051 "uuid": "086a67c1-7307-4b17-bfcd-4b24bcb9fdca", 00:17:55.051 "is_configured": false, 00:17:55.051 "data_offset": 0, 00:17:55.051 "data_size": 65536 00:17:55.051 } 00:17:55.051 ] 00:17:55.051 }' 00:17:55.051 18:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:55.051 18:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.618 18:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:55.619 18:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.877 18:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:55.877 18:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:55.877 [2024-07-25 18:44:56.427386] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:55.877 18:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:55.877 18:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:55.877 18:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:55.877 18:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:55.877 18:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:55.877 18:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:55.877 18:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:55.877 18:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:55.877 18:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:55.877 18:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:55.877 18:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.877 18:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.135 18:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:56.135 "name": "Existed_Raid", 00:17:56.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.135 "strip_size_kb": 64, 00:17:56.135 "state": "configuring", 00:17:56.135 "raid_level": "raid0", 00:17:56.135 "superblock": false, 00:17:56.135 "num_base_bdevs": 3, 00:17:56.135 "num_base_bdevs_discovered": 2, 00:17:56.135 "num_base_bdevs_operational": 3, 00:17:56.135 "base_bdevs_list": [ 00:17:56.135 { 00:17:56.135 "name": "BaseBdev1", 00:17:56.135 "uuid": "a461df3f-ae78-4028-926c-2dbc5e203b2f", 00:17:56.135 
"is_configured": true, 00:17:56.135 "data_offset": 0, 00:17:56.135 "data_size": 65536 00:17:56.135 }, 00:17:56.135 { 00:17:56.135 "name": null, 00:17:56.135 "uuid": "a36d8055-c232-42db-8e3a-1b8743e2916a", 00:17:56.135 "is_configured": false, 00:17:56.135 "data_offset": 0, 00:17:56.135 "data_size": 65536 00:17:56.135 }, 00:17:56.135 { 00:17:56.135 "name": "BaseBdev3", 00:17:56.135 "uuid": "086a67c1-7307-4b17-bfcd-4b24bcb9fdca", 00:17:56.135 "is_configured": true, 00:17:56.135 "data_offset": 0, 00:17:56.135 "data_size": 65536 00:17:56.135 } 00:17:56.135 ] 00:17:56.135 }' 00:17:56.135 18:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:56.135 18:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.703 18:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.703 18:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:56.962 18:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:56.962 18:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:56.962 [2024-07-25 18:44:57.515627] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:57.221 18:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:57.221 18:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:57.221 18:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:57.221 18:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:57.221 18:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:57.221 18:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:57.221 18:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:57.221 18:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:57.221 18:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:57.221 18:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:57.221 18:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.221 18:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.479 18:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:57.479 "name": "Existed_Raid", 00:17:57.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.479 "strip_size_kb": 64, 00:17:57.479 "state": "configuring", 00:17:57.479 "raid_level": "raid0", 00:17:57.479 "superblock": false, 00:17:57.479 "num_base_bdevs": 3, 00:17:57.479 "num_base_bdevs_discovered": 1, 00:17:57.479 "num_base_bdevs_operational": 3, 00:17:57.479 "base_bdevs_list": [ 00:17:57.479 { 00:17:57.479 "name": null, 00:17:57.479 "uuid": 
"a461df3f-ae78-4028-926c-2dbc5e203b2f", 00:17:57.479 "is_configured": false, 00:17:57.479 "data_offset": 0, 00:17:57.479 "data_size": 65536 00:17:57.479 }, 00:17:57.479 { 00:17:57.479 "name": null, 00:17:57.479 "uuid": "a36d8055-c232-42db-8e3a-1b8743e2916a", 00:17:57.479 "is_configured": false, 00:17:57.479 "data_offset": 0, 00:17:57.479 "data_size": 65536 00:17:57.479 }, 00:17:57.479 { 00:17:57.479 "name": "BaseBdev3", 00:17:57.479 "uuid": "086a67c1-7307-4b17-bfcd-4b24bcb9fdca", 00:17:57.479 "is_configured": true, 00:17:57.479 "data_offset": 0, 00:17:57.479 "data_size": 65536 00:17:57.479 } 00:17:57.479 ] 00:17:57.479 }' 00:17:57.479 18:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:57.479 18:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.045 18:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.045 18:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:58.304 18:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:58.304 18:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:58.563 [2024-07-25 18:44:58.890200] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:58.563 18:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:58.563 18:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:58.563 18:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:58.563 18:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:58.563 18:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:58.563 18:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:58.563 18:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:58.563 18:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:58.563 18:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:58.563 18:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:58.563 18:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.563 18:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.563 18:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:58.563 "name": "Existed_Raid", 00:17:58.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.563 "strip_size_kb": 64, 00:17:58.563 "state": "configuring", 00:17:58.563 "raid_level": "raid0", 00:17:58.563 "superblock": false, 00:17:58.563 "num_base_bdevs": 3, 00:17:58.563 "num_base_bdevs_discovered": 2, 00:17:58.563 "num_base_bdevs_operational": 3, 00:17:58.563 
"base_bdevs_list": [ 00:17:58.563 { 00:17:58.563 "name": null, 00:17:58.563 "uuid": "a461df3f-ae78-4028-926c-2dbc5e203b2f", 00:17:58.563 "is_configured": false, 00:17:58.563 "data_offset": 0, 00:17:58.563 "data_size": 65536 00:17:58.563 }, 00:17:58.563 { 00:17:58.563 "name": "BaseBdev2", 00:17:58.563 "uuid": "a36d8055-c232-42db-8e3a-1b8743e2916a", 00:17:58.563 "is_configured": true, 00:17:58.563 "data_offset": 0, 00:17:58.563 "data_size": 65536 00:17:58.563 }, 00:17:58.563 { 00:17:58.563 "name": "BaseBdev3", 00:17:58.563 "uuid": "086a67c1-7307-4b17-bfcd-4b24bcb9fdca", 00:17:58.563 "is_configured": true, 00:17:58.563 "data_offset": 0, 00:17:58.563 "data_size": 65536 00:17:58.563 } 00:17:58.563 ] 00:17:58.563 }' 00:17:58.563 18:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:58.563 18:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.130 18:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.130 18:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:59.389 18:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:59.389 18:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.389 18:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:59.647 18:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u a461df3f-ae78-4028-926c-2dbc5e203b2f 00:17:59.905 [2024-07-25 18:45:00.251503] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:59.905 [2024-07-25 18:45:00.251560] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:17:59.905 [2024-07-25 18:45:00.251569] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:59.905 [2024-07-25 18:45:00.251681] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:59.905 [2024-07-25 18:45:00.252005] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:17:59.905 [2024-07-25 18:45:00.252015] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:17:59.905 [2024-07-25 18:45:00.252263] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.905 NewBaseBdev 00:17:59.905 18:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:59.905 18:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:59.905 18:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:59.905 18:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:59.905 18:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:59.905 18:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:59.905 18:45:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:59.905 18:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:00.163 [ 00:18:00.163 { 00:18:00.163 "name": "NewBaseBdev", 00:18:00.163 "aliases": [ 00:18:00.163 "a461df3f-ae78-4028-926c-2dbc5e203b2f" 00:18:00.163 ], 00:18:00.163 "product_name": "Malloc disk", 00:18:00.163 "block_size": 512, 00:18:00.163 "num_blocks": 65536, 00:18:00.163 "uuid": "a461df3f-ae78-4028-926c-2dbc5e203b2f", 00:18:00.163 "assigned_rate_limits": { 00:18:00.163 "rw_ios_per_sec": 0, 00:18:00.163 "rw_mbytes_per_sec": 0, 00:18:00.163 "r_mbytes_per_sec": 0, 00:18:00.163 "w_mbytes_per_sec": 0 00:18:00.163 }, 00:18:00.163 "claimed": true, 00:18:00.163 "claim_type": "exclusive_write", 00:18:00.163 "zoned": false, 00:18:00.163 "supported_io_types": { 00:18:00.163 "read": true, 00:18:00.163 "write": true, 00:18:00.163 "unmap": true, 00:18:00.163 "flush": true, 00:18:00.163 "reset": true, 00:18:00.163 "nvme_admin": false, 00:18:00.163 "nvme_io": false, 00:18:00.163 "nvme_io_md": false, 00:18:00.163 "write_zeroes": true, 00:18:00.163 "zcopy": true, 00:18:00.163 "get_zone_info": false, 00:18:00.163 "zone_management": false, 00:18:00.163 "zone_append": false, 00:18:00.163 "compare": false, 00:18:00.163 "compare_and_write": false, 00:18:00.163 "abort": true, 00:18:00.163 "seek_hole": false, 00:18:00.163 "seek_data": false, 00:18:00.163 "copy": true, 00:18:00.163 "nvme_iov_md": false 00:18:00.163 }, 00:18:00.163 "memory_domains": [ 00:18:00.163 { 00:18:00.163 "dma_device_id": "system", 00:18:00.163 "dma_device_type": 1 00:18:00.163 }, 00:18:00.163 { 00:18:00.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.163 "dma_device_type": 2 00:18:00.163 } 00:18:00.163 ], 00:18:00.163 "driver_specific": {} 00:18:00.163 } 00:18:00.163 ] 00:18:00.163 18:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:00.163 18:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:18:00.163 18:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:00.163 18:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:00.163 18:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:00.163 18:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:00.163 18:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:00.163 18:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:00.163 18:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:00.163 18:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:00.163 18:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:00.163 18:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.163 18:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:18:00.420 18:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:00.420 "name": "Existed_Raid", 00:18:00.420 "uuid": "313653a2-93d3-4e65-8cfe-a4870143a351", 00:18:00.420 "strip_size_kb": 64, 00:18:00.420 "state": "online", 00:18:00.420 "raid_level": "raid0", 00:18:00.420 "superblock": false, 00:18:00.420 "num_base_bdevs": 3, 00:18:00.420 "num_base_bdevs_discovered": 3, 00:18:00.420 "num_base_bdevs_operational": 3, 00:18:00.420 "base_bdevs_list": [ 00:18:00.420 { 00:18:00.420 "name": "NewBaseBdev", 00:18:00.420 "uuid": "a461df3f-ae78-4028-926c-2dbc5e203b2f", 00:18:00.420 "is_configured": true, 00:18:00.420 "data_offset": 0, 00:18:00.420 "data_size": 65536 00:18:00.420 }, 00:18:00.420 { 00:18:00.420 "name": "BaseBdev2", 00:18:00.420 "uuid": "a36d8055-c232-42db-8e3a-1b8743e2916a", 00:18:00.420 "is_configured": true, 00:18:00.420 "data_offset": 0, 00:18:00.420 "data_size": 65536 00:18:00.420 }, 00:18:00.420 { 00:18:00.420 "name": "BaseBdev3", 00:18:00.420 "uuid": "086a67c1-7307-4b17-bfcd-4b24bcb9fdca", 00:18:00.420 "is_configured": true, 00:18:00.420 "data_offset": 0, 00:18:00.420 "data_size": 65536 00:18:00.420 } 00:18:00.420 ] 00:18:00.420 }' 00:18:00.420 18:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:00.420 18:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.985 18:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:00.985 18:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:00.985 18:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:00.985 18:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:00.985 18:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:00.985 18:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:00.985 18:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:00.985 18:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:01.244 [2024-07-25 18:45:01.620090] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.244 18:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:01.244 "name": "Existed_Raid", 00:18:01.244 "aliases": [ 00:18:01.244 "313653a2-93d3-4e65-8cfe-a4870143a351" 00:18:01.244 ], 00:18:01.244 "product_name": "Raid Volume", 00:18:01.244 "block_size": 512, 00:18:01.244 "num_blocks": 196608, 00:18:01.244 "uuid": "313653a2-93d3-4e65-8cfe-a4870143a351", 00:18:01.244 "assigned_rate_limits": { 00:18:01.244 "rw_ios_per_sec": 0, 00:18:01.244 "rw_mbytes_per_sec": 0, 00:18:01.244 "r_mbytes_per_sec": 0, 00:18:01.244 "w_mbytes_per_sec": 0 00:18:01.244 }, 00:18:01.244 "claimed": false, 00:18:01.244 "zoned": false, 00:18:01.244 "supported_io_types": { 00:18:01.244 "read": true, 00:18:01.244 "write": true, 00:18:01.244 "unmap": true, 00:18:01.244 "flush": true, 00:18:01.244 "reset": true, 00:18:01.244 "nvme_admin": false, 00:18:01.244 "nvme_io": false, 00:18:01.244 "nvme_io_md": false, 00:18:01.244 "write_zeroes": true, 00:18:01.244 "zcopy": false, 00:18:01.244 "get_zone_info": false, 
00:18:01.244 "zone_management": false, 00:18:01.244 "zone_append": false, 00:18:01.244 "compare": false, 00:18:01.244 "compare_and_write": false, 00:18:01.244 "abort": false, 00:18:01.244 "seek_hole": false, 00:18:01.244 "seek_data": false, 00:18:01.244 "copy": false, 00:18:01.244 "nvme_iov_md": false 00:18:01.244 }, 00:18:01.244 "memory_domains": [ 00:18:01.244 { 00:18:01.244 "dma_device_id": "system", 00:18:01.244 "dma_device_type": 1 00:18:01.244 }, 00:18:01.244 { 00:18:01.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.244 "dma_device_type": 2 00:18:01.244 }, 00:18:01.244 { 00:18:01.244 "dma_device_id": "system", 00:18:01.244 "dma_device_type": 1 00:18:01.244 }, 00:18:01.244 { 00:18:01.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.244 "dma_device_type": 2 00:18:01.244 }, 00:18:01.244 { 00:18:01.244 "dma_device_id": "system", 00:18:01.244 "dma_device_type": 1 00:18:01.244 }, 00:18:01.244 { 00:18:01.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.244 "dma_device_type": 2 00:18:01.244 } 00:18:01.244 ], 00:18:01.244 "driver_specific": { 00:18:01.244 "raid": { 00:18:01.244 "uuid": "313653a2-93d3-4e65-8cfe-a4870143a351", 00:18:01.244 "strip_size_kb": 64, 00:18:01.244 "state": "online", 00:18:01.244 "raid_level": "raid0", 00:18:01.244 "superblock": false, 00:18:01.244 "num_base_bdevs": 3, 00:18:01.244 "num_base_bdevs_discovered": 3, 00:18:01.244 "num_base_bdevs_operational": 3, 00:18:01.244 "base_bdevs_list": [ 00:18:01.244 { 00:18:01.244 "name": "NewBaseBdev", 00:18:01.244 "uuid": "a461df3f-ae78-4028-926c-2dbc5e203b2f", 00:18:01.244 "is_configured": true, 00:18:01.244 "data_offset": 0, 00:18:01.244 "data_size": 65536 00:18:01.244 }, 00:18:01.244 { 00:18:01.244 "name": "BaseBdev2", 00:18:01.244 "uuid": "a36d8055-c232-42db-8e3a-1b8743e2916a", 00:18:01.244 "is_configured": true, 00:18:01.244 "data_offset": 0, 00:18:01.244 "data_size": 65536 00:18:01.244 }, 00:18:01.244 { 00:18:01.244 "name": "BaseBdev3", 00:18:01.244 "uuid": "086a67c1-7307-4b17-bfcd-4b24bcb9fdca", 00:18:01.244 "is_configured": true, 00:18:01.244 "data_offset": 0, 00:18:01.244 "data_size": 65536 00:18:01.244 } 00:18:01.244 ] 00:18:01.244 } 00:18:01.244 } 00:18:01.244 }' 00:18:01.244 18:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:01.244 18:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:01.244 BaseBdev2 00:18:01.244 BaseBdev3' 00:18:01.244 18:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:01.244 18:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:01.244 18:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:01.503 18:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:01.503 "name": "NewBaseBdev", 00:18:01.503 "aliases": [ 00:18:01.503 "a461df3f-ae78-4028-926c-2dbc5e203b2f" 00:18:01.503 ], 00:18:01.503 "product_name": "Malloc disk", 00:18:01.503 "block_size": 512, 00:18:01.503 "num_blocks": 65536, 00:18:01.503 "uuid": "a461df3f-ae78-4028-926c-2dbc5e203b2f", 00:18:01.503 "assigned_rate_limits": { 00:18:01.503 "rw_ios_per_sec": 0, 00:18:01.503 "rw_mbytes_per_sec": 0, 00:18:01.503 "r_mbytes_per_sec": 0, 00:18:01.503 "w_mbytes_per_sec": 0 00:18:01.503 }, 00:18:01.503 "claimed": 
true, 00:18:01.503 "claim_type": "exclusive_write", 00:18:01.503 "zoned": false, 00:18:01.503 "supported_io_types": { 00:18:01.503 "read": true, 00:18:01.503 "write": true, 00:18:01.503 "unmap": true, 00:18:01.503 "flush": true, 00:18:01.503 "reset": true, 00:18:01.503 "nvme_admin": false, 00:18:01.503 "nvme_io": false, 00:18:01.503 "nvme_io_md": false, 00:18:01.503 "write_zeroes": true, 00:18:01.503 "zcopy": true, 00:18:01.503 "get_zone_info": false, 00:18:01.503 "zone_management": false, 00:18:01.503 "zone_append": false, 00:18:01.503 "compare": false, 00:18:01.503 "compare_and_write": false, 00:18:01.503 "abort": true, 00:18:01.503 "seek_hole": false, 00:18:01.503 "seek_data": false, 00:18:01.503 "copy": true, 00:18:01.503 "nvme_iov_md": false 00:18:01.503 }, 00:18:01.503 "memory_domains": [ 00:18:01.503 { 00:18:01.503 "dma_device_id": "system", 00:18:01.503 "dma_device_type": 1 00:18:01.503 }, 00:18:01.503 { 00:18:01.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.503 "dma_device_type": 2 00:18:01.503 } 00:18:01.503 ], 00:18:01.503 "driver_specific": {} 00:18:01.503 }' 00:18:01.503 18:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.503 18:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.503 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:01.503 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.503 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.762 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:01.762 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.762 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.762 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:01.762 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:01.762 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:01.762 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:01.762 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:01.762 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:01.762 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:02.022 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:02.022 "name": "BaseBdev2", 00:18:02.022 "aliases": [ 00:18:02.022 "a36d8055-c232-42db-8e3a-1b8743e2916a" 00:18:02.022 ], 00:18:02.022 "product_name": "Malloc disk", 00:18:02.022 "block_size": 512, 00:18:02.022 "num_blocks": 65536, 00:18:02.022 "uuid": "a36d8055-c232-42db-8e3a-1b8743e2916a", 00:18:02.022 "assigned_rate_limits": { 00:18:02.022 "rw_ios_per_sec": 0, 00:18:02.022 "rw_mbytes_per_sec": 0, 00:18:02.022 "r_mbytes_per_sec": 0, 00:18:02.022 "w_mbytes_per_sec": 0 00:18:02.022 }, 00:18:02.022 "claimed": true, 00:18:02.022 "claim_type": "exclusive_write", 00:18:02.022 "zoned": false, 00:18:02.022 "supported_io_types": { 00:18:02.022 "read": true, 00:18:02.022 "write": true, 00:18:02.022 "unmap": true, 
00:18:02.022 "flush": true, 00:18:02.022 "reset": true, 00:18:02.022 "nvme_admin": false, 00:18:02.022 "nvme_io": false, 00:18:02.023 "nvme_io_md": false, 00:18:02.023 "write_zeroes": true, 00:18:02.023 "zcopy": true, 00:18:02.023 "get_zone_info": false, 00:18:02.023 "zone_management": false, 00:18:02.023 "zone_append": false, 00:18:02.023 "compare": false, 00:18:02.023 "compare_and_write": false, 00:18:02.023 "abort": true, 00:18:02.023 "seek_hole": false, 00:18:02.023 "seek_data": false, 00:18:02.023 "copy": true, 00:18:02.023 "nvme_iov_md": false 00:18:02.023 }, 00:18:02.023 "memory_domains": [ 00:18:02.023 { 00:18:02.023 "dma_device_id": "system", 00:18:02.023 "dma_device_type": 1 00:18:02.023 }, 00:18:02.023 { 00:18:02.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.023 "dma_device_type": 2 00:18:02.023 } 00:18:02.023 ], 00:18:02.023 "driver_specific": {} 00:18:02.023 }' 00:18:02.023 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:02.023 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:02.023 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:02.023 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:02.023 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:02.281 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:02.282 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:02.282 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:02.282 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:02.282 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:02.282 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:02.282 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:02.282 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:02.282 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:02.282 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:02.540 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:02.540 "name": "BaseBdev3", 00:18:02.540 "aliases": [ 00:18:02.540 "086a67c1-7307-4b17-bfcd-4b24bcb9fdca" 00:18:02.540 ], 00:18:02.540 "product_name": "Malloc disk", 00:18:02.540 "block_size": 512, 00:18:02.540 "num_blocks": 65536, 00:18:02.540 "uuid": "086a67c1-7307-4b17-bfcd-4b24bcb9fdca", 00:18:02.540 "assigned_rate_limits": { 00:18:02.540 "rw_ios_per_sec": 0, 00:18:02.540 "rw_mbytes_per_sec": 0, 00:18:02.540 "r_mbytes_per_sec": 0, 00:18:02.540 "w_mbytes_per_sec": 0 00:18:02.540 }, 00:18:02.540 "claimed": true, 00:18:02.540 "claim_type": "exclusive_write", 00:18:02.540 "zoned": false, 00:18:02.540 "supported_io_types": { 00:18:02.540 "read": true, 00:18:02.540 "write": true, 00:18:02.540 "unmap": true, 00:18:02.540 "flush": true, 00:18:02.540 "reset": true, 00:18:02.540 "nvme_admin": false, 00:18:02.540 "nvme_io": false, 00:18:02.540 "nvme_io_md": false, 00:18:02.540 "write_zeroes": true, 
00:18:02.540 "zcopy": true, 00:18:02.540 "get_zone_info": false, 00:18:02.540 "zone_management": false, 00:18:02.540 "zone_append": false, 00:18:02.540 "compare": false, 00:18:02.540 "compare_and_write": false, 00:18:02.540 "abort": true, 00:18:02.540 "seek_hole": false, 00:18:02.540 "seek_data": false, 00:18:02.540 "copy": true, 00:18:02.540 "nvme_iov_md": false 00:18:02.540 }, 00:18:02.540 "memory_domains": [ 00:18:02.540 { 00:18:02.540 "dma_device_id": "system", 00:18:02.540 "dma_device_type": 1 00:18:02.540 }, 00:18:02.540 { 00:18:02.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.540 "dma_device_type": 2 00:18:02.540 } 00:18:02.540 ], 00:18:02.540 "driver_specific": {} 00:18:02.540 }' 00:18:02.540 18:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:02.540 18:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:02.540 18:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:02.540 18:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:02.828 18:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:02.828 18:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:02.828 18:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:02.828 18:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:02.828 18:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:02.829 18:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:02.829 18:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:02.829 18:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:02.829 18:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:03.145 [2024-07-25 18:45:03.588189] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:03.145 [2024-07-25 18:45:03.588226] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.145 [2024-07-25 18:45:03.588307] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.145 [2024-07-25 18:45:03.588371] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.145 [2024-07-25 18:45:03.588380] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:18:03.145 18:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 124749 00:18:03.145 18:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 124749 ']' 00:18:03.145 18:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 124749 00:18:03.145 18:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:18:03.145 18:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:03.145 18:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 124749 00:18:03.145 18:45:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:03.145 18:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:03.145 18:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 124749' 00:18:03.145 killing process with pid 124749 00:18:03.145 18:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 124749 00:18:03.145 [2024-07-25 18:45:03.629991] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:03.145 18:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 124749 00:18:03.402 [2024-07-25 18:45:03.879552] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:04.780 ************************************ 00:18:04.780 END TEST raid_state_function_test 00:18:04.780 ************************************ 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:18:04.780 00:18:04.780 real 0m27.536s 00:18:04.780 user 0m49.229s 00:18:04.780 sys 0m4.679s 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.780 18:45:05 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:18:04.780 18:45:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:04.780 18:45:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:04.780 18:45:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:04.780 ************************************ 00:18:04.780 START TEST raid_state_function_test_sb 00:18:04.780 ************************************ 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:04.780 18:45:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=125703 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 125703' 00:18:04.780 Process raid pid: 125703 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 125703 /var/tmp/spdk-raid.sock 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 125703 ']' 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:04.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:04.780 18:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.780 [2024-07-25 18:45:05.235523] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:18:04.780 [2024-07-25 18:45:05.235776] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.039 [2024-07-25 18:45:05.420156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.297 [2024-07-25 18:45:05.621242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.297 [2024-07-25 18:45:05.816611] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.866 18:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:05.866 18:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:18:05.866 18:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:05.866 [2024-07-25 18:45:06.368412] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:05.866 [2024-07-25 18:45:06.368526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:05.866 [2024-07-25 18:45:06.368551] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:05.866 [2024-07-25 18:45:06.368600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:05.866 [2024-07-25 18:45:06.368613] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:05.866 [2024-07-25 18:45:06.368654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:05.866 18:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:05.866 18:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:05.866 18:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:05.866 18:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:05.866 18:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:05.866 18:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:05.866 18:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:05.866 18:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:05.866 18:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:05.866 18:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:05.866 18:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.866 18:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.125 18:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:06.125 "name": "Existed_Raid", 00:18:06.125 "uuid": 
"467b2503-978e-46bb-a8ef-cb1d28edb5d6", 00:18:06.125 "strip_size_kb": 64, 00:18:06.125 "state": "configuring", 00:18:06.125 "raid_level": "raid0", 00:18:06.125 "superblock": true, 00:18:06.125 "num_base_bdevs": 3, 00:18:06.125 "num_base_bdevs_discovered": 0, 00:18:06.125 "num_base_bdevs_operational": 3, 00:18:06.125 "base_bdevs_list": [ 00:18:06.125 { 00:18:06.125 "name": "BaseBdev1", 00:18:06.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.125 "is_configured": false, 00:18:06.125 "data_offset": 0, 00:18:06.125 "data_size": 0 00:18:06.125 }, 00:18:06.125 { 00:18:06.125 "name": "BaseBdev2", 00:18:06.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.125 "is_configured": false, 00:18:06.125 "data_offset": 0, 00:18:06.125 "data_size": 0 00:18:06.125 }, 00:18:06.125 { 00:18:06.125 "name": "BaseBdev3", 00:18:06.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.125 "is_configured": false, 00:18:06.125 "data_offset": 0, 00:18:06.125 "data_size": 0 00:18:06.125 } 00:18:06.125 ] 00:18:06.125 }' 00:18:06.125 18:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:06.125 18:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.693 18:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:06.952 [2024-07-25 18:45:07.448478] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:06.952 [2024-07-25 18:45:07.448532] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:18:06.952 18:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:07.211 [2024-07-25 18:45:07.652565] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:07.211 [2024-07-25 18:45:07.652659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:07.211 [2024-07-25 18:45:07.652677] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:07.211 [2024-07-25 18:45:07.652706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:07.211 [2024-07-25 18:45:07.652717] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:07.211 [2024-07-25 18:45:07.652754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:07.211 18:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:07.471 [2024-07-25 18:45:07.877407] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:07.471 BaseBdev1 00:18:07.471 18:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:07.471 18:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:07.471 18:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:07.471 18:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 
00:18:07.471 18:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:07.471 18:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:07.471 18:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:07.730 18:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:07.990 [ 00:18:07.990 { 00:18:07.990 "name": "BaseBdev1", 00:18:07.990 "aliases": [ 00:18:07.990 "deec55f4-284e-488f-b90e-6dd454fd7972" 00:18:07.990 ], 00:18:07.990 "product_name": "Malloc disk", 00:18:07.990 "block_size": 512, 00:18:07.990 "num_blocks": 65536, 00:18:07.990 "uuid": "deec55f4-284e-488f-b90e-6dd454fd7972", 00:18:07.990 "assigned_rate_limits": { 00:18:07.990 "rw_ios_per_sec": 0, 00:18:07.990 "rw_mbytes_per_sec": 0, 00:18:07.990 "r_mbytes_per_sec": 0, 00:18:07.990 "w_mbytes_per_sec": 0 00:18:07.990 }, 00:18:07.990 "claimed": true, 00:18:07.990 "claim_type": "exclusive_write", 00:18:07.990 "zoned": false, 00:18:07.990 "supported_io_types": { 00:18:07.990 "read": true, 00:18:07.990 "write": true, 00:18:07.990 "unmap": true, 00:18:07.990 "flush": true, 00:18:07.990 "reset": true, 00:18:07.990 "nvme_admin": false, 00:18:07.990 "nvme_io": false, 00:18:07.990 "nvme_io_md": false, 00:18:07.990 "write_zeroes": true, 00:18:07.990 "zcopy": true, 00:18:07.990 "get_zone_info": false, 00:18:07.990 "zone_management": false, 00:18:07.990 "zone_append": false, 00:18:07.990 "compare": false, 00:18:07.990 "compare_and_write": false, 00:18:07.990 "abort": true, 00:18:07.990 "seek_hole": false, 00:18:07.990 "seek_data": false, 00:18:07.990 "copy": true, 00:18:07.990 "nvme_iov_md": false 00:18:07.990 }, 00:18:07.990 "memory_domains": [ 00:18:07.990 { 00:18:07.990 "dma_device_id": "system", 00:18:07.990 "dma_device_type": 1 00:18:07.990 }, 00:18:07.990 { 00:18:07.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.990 "dma_device_type": 2 00:18:07.990 } 00:18:07.990 ], 00:18:07.990 "driver_specific": {} 00:18:07.990 } 00:18:07.990 ] 00:18:07.990 18:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:07.990 18:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:07.990 18:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:07.990 18:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:07.990 18:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:07.990 18:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:07.990 18:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:07.990 18:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:07.990 18:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:07.990 18:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:07.990 18:45:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:18:07.990 18:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.990 18:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.990 18:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:07.990 "name": "Existed_Raid", 00:18:07.990 "uuid": "931e9084-e61b-4c64-9c84-5785a267fb9d", 00:18:07.990 "strip_size_kb": 64, 00:18:07.990 "state": "configuring", 00:18:07.990 "raid_level": "raid0", 00:18:07.990 "superblock": true, 00:18:07.990 "num_base_bdevs": 3, 00:18:07.990 "num_base_bdevs_discovered": 1, 00:18:07.990 "num_base_bdevs_operational": 3, 00:18:07.990 "base_bdevs_list": [ 00:18:07.990 { 00:18:07.990 "name": "BaseBdev1", 00:18:07.990 "uuid": "deec55f4-284e-488f-b90e-6dd454fd7972", 00:18:07.990 "is_configured": true, 00:18:07.990 "data_offset": 2048, 00:18:07.990 "data_size": 63488 00:18:07.990 }, 00:18:07.990 { 00:18:07.990 "name": "BaseBdev2", 00:18:07.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.990 "is_configured": false, 00:18:07.990 "data_offset": 0, 00:18:07.990 "data_size": 0 00:18:07.990 }, 00:18:07.990 { 00:18:07.990 "name": "BaseBdev3", 00:18:07.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.990 "is_configured": false, 00:18:07.990 "data_offset": 0, 00:18:07.990 "data_size": 0 00:18:07.990 } 00:18:07.990 ] 00:18:07.990 }' 00:18:07.990 18:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:07.990 18:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.558 18:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:08.817 [2024-07-25 18:45:09.385766] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:08.817 [2024-07-25 18:45:09.385856] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:18:09.076 18:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:09.076 [2024-07-25 18:45:09.617902] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:09.076 [2024-07-25 18:45:09.620289] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:09.076 [2024-07-25 18:45:09.620390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:09.076 [2024-07-25 18:45:09.620410] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:09.076 [2024-07-25 18:45:09.620491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:09.076 18:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:09.076 18:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:09.076 18:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:09.076 18:45:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:09.076 18:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:09.076 18:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:09.076 18:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:09.076 18:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:09.076 18:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:09.076 18:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:09.076 18:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:09.076 18:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:09.076 18:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.076 18:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.335 18:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:09.335 "name": "Existed_Raid", 00:18:09.335 "uuid": "8fd225ca-9594-41f9-b56d-c6b57742c2f0", 00:18:09.335 "strip_size_kb": 64, 00:18:09.335 "state": "configuring", 00:18:09.335 "raid_level": "raid0", 00:18:09.335 "superblock": true, 00:18:09.335 "num_base_bdevs": 3, 00:18:09.335 "num_base_bdevs_discovered": 1, 00:18:09.335 "num_base_bdevs_operational": 3, 00:18:09.335 "base_bdevs_list": [ 00:18:09.335 { 00:18:09.335 "name": "BaseBdev1", 00:18:09.335 "uuid": "deec55f4-284e-488f-b90e-6dd454fd7972", 00:18:09.335 "is_configured": true, 00:18:09.335 "data_offset": 2048, 00:18:09.335 "data_size": 63488 00:18:09.335 }, 00:18:09.335 { 00:18:09.335 "name": "BaseBdev2", 00:18:09.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.335 "is_configured": false, 00:18:09.335 "data_offset": 0, 00:18:09.335 "data_size": 0 00:18:09.335 }, 00:18:09.335 { 00:18:09.335 "name": "BaseBdev3", 00:18:09.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.335 "is_configured": false, 00:18:09.335 "data_offset": 0, 00:18:09.335 "data_size": 0 00:18:09.335 } 00:18:09.335 ] 00:18:09.335 }' 00:18:09.335 18:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:09.335 18:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.903 18:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:10.161 [2024-07-25 18:45:10.648478] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:10.161 BaseBdev2 00:18:10.161 18:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:10.161 18:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:10.161 18:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:10.161 18:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # 
local i 00:18:10.161 18:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:10.161 18:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:10.162 18:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:10.420 18:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:10.679 [ 00:18:10.679 { 00:18:10.679 "name": "BaseBdev2", 00:18:10.679 "aliases": [ 00:18:10.679 "a989cb9d-87d6-40d1-a5e3-7b5347e92f3a" 00:18:10.679 ], 00:18:10.679 "product_name": "Malloc disk", 00:18:10.679 "block_size": 512, 00:18:10.679 "num_blocks": 65536, 00:18:10.679 "uuid": "a989cb9d-87d6-40d1-a5e3-7b5347e92f3a", 00:18:10.679 "assigned_rate_limits": { 00:18:10.679 "rw_ios_per_sec": 0, 00:18:10.679 "rw_mbytes_per_sec": 0, 00:18:10.679 "r_mbytes_per_sec": 0, 00:18:10.679 "w_mbytes_per_sec": 0 00:18:10.679 }, 00:18:10.679 "claimed": true, 00:18:10.679 "claim_type": "exclusive_write", 00:18:10.679 "zoned": false, 00:18:10.679 "supported_io_types": { 00:18:10.679 "read": true, 00:18:10.679 "write": true, 00:18:10.679 "unmap": true, 00:18:10.679 "flush": true, 00:18:10.679 "reset": true, 00:18:10.679 "nvme_admin": false, 00:18:10.679 "nvme_io": false, 00:18:10.679 "nvme_io_md": false, 00:18:10.679 "write_zeroes": true, 00:18:10.679 "zcopy": true, 00:18:10.679 "get_zone_info": false, 00:18:10.679 "zone_management": false, 00:18:10.679 "zone_append": false, 00:18:10.679 "compare": false, 00:18:10.679 "compare_and_write": false, 00:18:10.679 "abort": true, 00:18:10.679 "seek_hole": false, 00:18:10.679 "seek_data": false, 00:18:10.679 "copy": true, 00:18:10.679 "nvme_iov_md": false 00:18:10.679 }, 00:18:10.679 "memory_domains": [ 00:18:10.679 { 00:18:10.679 "dma_device_id": "system", 00:18:10.679 "dma_device_type": 1 00:18:10.679 }, 00:18:10.679 { 00:18:10.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.679 "dma_device_type": 2 00:18:10.679 } 00:18:10.679 ], 00:18:10.679 "driver_specific": {} 00:18:10.679 } 00:18:10.679 ] 00:18:10.679 18:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:10.679 18:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:10.679 18:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:10.679 18:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:10.679 18:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:10.679 18:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:10.679 18:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:10.679 18:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:10.679 18:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:10.679 18:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:10.679 18:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 
-- # local num_base_bdevs 00:18:10.679 18:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:10.679 18:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:10.679 18:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.679 18:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.938 18:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:10.938 "name": "Existed_Raid", 00:18:10.938 "uuid": "8fd225ca-9594-41f9-b56d-c6b57742c2f0", 00:18:10.938 "strip_size_kb": 64, 00:18:10.938 "state": "configuring", 00:18:10.938 "raid_level": "raid0", 00:18:10.938 "superblock": true, 00:18:10.938 "num_base_bdevs": 3, 00:18:10.938 "num_base_bdevs_discovered": 2, 00:18:10.938 "num_base_bdevs_operational": 3, 00:18:10.938 "base_bdevs_list": [ 00:18:10.938 { 00:18:10.938 "name": "BaseBdev1", 00:18:10.938 "uuid": "deec55f4-284e-488f-b90e-6dd454fd7972", 00:18:10.938 "is_configured": true, 00:18:10.938 "data_offset": 2048, 00:18:10.938 "data_size": 63488 00:18:10.938 }, 00:18:10.938 { 00:18:10.938 "name": "BaseBdev2", 00:18:10.938 "uuid": "a989cb9d-87d6-40d1-a5e3-7b5347e92f3a", 00:18:10.938 "is_configured": true, 00:18:10.938 "data_offset": 2048, 00:18:10.938 "data_size": 63488 00:18:10.938 }, 00:18:10.938 { 00:18:10.938 "name": "BaseBdev3", 00:18:10.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.938 "is_configured": false, 00:18:10.938 "data_offset": 0, 00:18:10.938 "data_size": 0 00:18:10.938 } 00:18:10.938 ] 00:18:10.938 }' 00:18:10.938 18:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:10.938 18:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.506 18:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:11.765 [2024-07-25 18:45:12.274728] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:11.765 [2024-07-25 18:45:12.275008] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:18:11.765 [2024-07-25 18:45:12.275021] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:11.765 [2024-07-25 18:45:12.275165] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:18:11.765 [2024-07-25 18:45:12.275500] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:18:11.765 [2024-07-25 18:45:12.275519] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:18:11.765 [2024-07-25 18:45:12.275650] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.765 BaseBdev3 00:18:11.765 18:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:11.765 18:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:11.765 18:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:11.765 18:45:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:18:11.765 18:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:11.765 18:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:11.765 18:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:12.024 18:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:12.283 [ 00:18:12.283 { 00:18:12.283 "name": "BaseBdev3", 00:18:12.283 "aliases": [ 00:18:12.283 "12578ab2-3d37-47c0-8678-8d0173625457" 00:18:12.283 ], 00:18:12.283 "product_name": "Malloc disk", 00:18:12.283 "block_size": 512, 00:18:12.283 "num_blocks": 65536, 00:18:12.283 "uuid": "12578ab2-3d37-47c0-8678-8d0173625457", 00:18:12.283 "assigned_rate_limits": { 00:18:12.283 "rw_ios_per_sec": 0, 00:18:12.283 "rw_mbytes_per_sec": 0, 00:18:12.283 "r_mbytes_per_sec": 0, 00:18:12.283 "w_mbytes_per_sec": 0 00:18:12.283 }, 00:18:12.283 "claimed": true, 00:18:12.283 "claim_type": "exclusive_write", 00:18:12.283 "zoned": false, 00:18:12.283 "supported_io_types": { 00:18:12.283 "read": true, 00:18:12.283 "write": true, 00:18:12.283 "unmap": true, 00:18:12.283 "flush": true, 00:18:12.283 "reset": true, 00:18:12.283 "nvme_admin": false, 00:18:12.283 "nvme_io": false, 00:18:12.283 "nvme_io_md": false, 00:18:12.283 "write_zeroes": true, 00:18:12.283 "zcopy": true, 00:18:12.283 "get_zone_info": false, 00:18:12.283 "zone_management": false, 00:18:12.283 "zone_append": false, 00:18:12.283 "compare": false, 00:18:12.283 "compare_and_write": false, 00:18:12.283 "abort": true, 00:18:12.283 "seek_hole": false, 00:18:12.283 "seek_data": false, 00:18:12.283 "copy": true, 00:18:12.283 "nvme_iov_md": false 00:18:12.283 }, 00:18:12.283 "memory_domains": [ 00:18:12.283 { 00:18:12.283 "dma_device_id": "system", 00:18:12.283 "dma_device_type": 1 00:18:12.283 }, 00:18:12.283 { 00:18:12.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.283 "dma_device_type": 2 00:18:12.283 } 00:18:12.283 ], 00:18:12.283 "driver_specific": {} 00:18:12.283 } 00:18:12.283 ] 00:18:12.283 18:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:12.283 18:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:12.283 18:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:12.283 18:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:18:12.283 18:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:12.283 18:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:12.283 18:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:12.283 18:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:12.283 18:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:12.283 18:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:12.283 18:45:12 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:12.283 18:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:12.283 18:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:12.283 18:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.283 18:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.542 18:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:12.543 "name": "Existed_Raid", 00:18:12.543 "uuid": "8fd225ca-9594-41f9-b56d-c6b57742c2f0", 00:18:12.543 "strip_size_kb": 64, 00:18:12.543 "state": "online", 00:18:12.543 "raid_level": "raid0", 00:18:12.543 "superblock": true, 00:18:12.543 "num_base_bdevs": 3, 00:18:12.543 "num_base_bdevs_discovered": 3, 00:18:12.543 "num_base_bdevs_operational": 3, 00:18:12.543 "base_bdevs_list": [ 00:18:12.543 { 00:18:12.543 "name": "BaseBdev1", 00:18:12.543 "uuid": "deec55f4-284e-488f-b90e-6dd454fd7972", 00:18:12.543 "is_configured": true, 00:18:12.543 "data_offset": 2048, 00:18:12.543 "data_size": 63488 00:18:12.543 }, 00:18:12.543 { 00:18:12.543 "name": "BaseBdev2", 00:18:12.543 "uuid": "a989cb9d-87d6-40d1-a5e3-7b5347e92f3a", 00:18:12.543 "is_configured": true, 00:18:12.543 "data_offset": 2048, 00:18:12.543 "data_size": 63488 00:18:12.543 }, 00:18:12.543 { 00:18:12.543 "name": "BaseBdev3", 00:18:12.543 "uuid": "12578ab2-3d37-47c0-8678-8d0173625457", 00:18:12.543 "is_configured": true, 00:18:12.543 "data_offset": 2048, 00:18:12.543 "data_size": 63488 00:18:12.543 } 00:18:12.543 ] 00:18:12.543 }' 00:18:12.543 18:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:12.543 18:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.109 18:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:13.109 18:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:13.109 18:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:13.109 18:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:13.109 18:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:13.109 18:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:13.109 18:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:13.109 18:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:13.367 [2024-07-25 18:45:13.790470] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:13.367 18:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:13.367 "name": "Existed_Raid", 00:18:13.367 "aliases": [ 00:18:13.367 "8fd225ca-9594-41f9-b56d-c6b57742c2f0" 00:18:13.367 ], 00:18:13.367 "product_name": "Raid Volume", 00:18:13.367 "block_size": 512, 00:18:13.367 "num_blocks": 190464, 00:18:13.367 "uuid": "8fd225ca-9594-41f9-b56d-c6b57742c2f0", 00:18:13.367 
"assigned_rate_limits": { 00:18:13.367 "rw_ios_per_sec": 0, 00:18:13.367 "rw_mbytes_per_sec": 0, 00:18:13.367 "r_mbytes_per_sec": 0, 00:18:13.367 "w_mbytes_per_sec": 0 00:18:13.367 }, 00:18:13.367 "claimed": false, 00:18:13.367 "zoned": false, 00:18:13.367 "supported_io_types": { 00:18:13.367 "read": true, 00:18:13.367 "write": true, 00:18:13.367 "unmap": true, 00:18:13.367 "flush": true, 00:18:13.367 "reset": true, 00:18:13.367 "nvme_admin": false, 00:18:13.367 "nvme_io": false, 00:18:13.367 "nvme_io_md": false, 00:18:13.367 "write_zeroes": true, 00:18:13.367 "zcopy": false, 00:18:13.368 "get_zone_info": false, 00:18:13.368 "zone_management": false, 00:18:13.368 "zone_append": false, 00:18:13.368 "compare": false, 00:18:13.368 "compare_and_write": false, 00:18:13.368 "abort": false, 00:18:13.368 "seek_hole": false, 00:18:13.368 "seek_data": false, 00:18:13.368 "copy": false, 00:18:13.368 "nvme_iov_md": false 00:18:13.368 }, 00:18:13.368 "memory_domains": [ 00:18:13.368 { 00:18:13.368 "dma_device_id": "system", 00:18:13.368 "dma_device_type": 1 00:18:13.368 }, 00:18:13.368 { 00:18:13.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.368 "dma_device_type": 2 00:18:13.368 }, 00:18:13.368 { 00:18:13.368 "dma_device_id": "system", 00:18:13.368 "dma_device_type": 1 00:18:13.368 }, 00:18:13.368 { 00:18:13.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.368 "dma_device_type": 2 00:18:13.368 }, 00:18:13.368 { 00:18:13.368 "dma_device_id": "system", 00:18:13.368 "dma_device_type": 1 00:18:13.368 }, 00:18:13.368 { 00:18:13.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.368 "dma_device_type": 2 00:18:13.368 } 00:18:13.368 ], 00:18:13.368 "driver_specific": { 00:18:13.368 "raid": { 00:18:13.368 "uuid": "8fd225ca-9594-41f9-b56d-c6b57742c2f0", 00:18:13.368 "strip_size_kb": 64, 00:18:13.368 "state": "online", 00:18:13.368 "raid_level": "raid0", 00:18:13.368 "superblock": true, 00:18:13.368 "num_base_bdevs": 3, 00:18:13.368 "num_base_bdevs_discovered": 3, 00:18:13.368 "num_base_bdevs_operational": 3, 00:18:13.368 "base_bdevs_list": [ 00:18:13.368 { 00:18:13.368 "name": "BaseBdev1", 00:18:13.368 "uuid": "deec55f4-284e-488f-b90e-6dd454fd7972", 00:18:13.368 "is_configured": true, 00:18:13.368 "data_offset": 2048, 00:18:13.368 "data_size": 63488 00:18:13.368 }, 00:18:13.368 { 00:18:13.368 "name": "BaseBdev2", 00:18:13.368 "uuid": "a989cb9d-87d6-40d1-a5e3-7b5347e92f3a", 00:18:13.368 "is_configured": true, 00:18:13.368 "data_offset": 2048, 00:18:13.368 "data_size": 63488 00:18:13.368 }, 00:18:13.368 { 00:18:13.368 "name": "BaseBdev3", 00:18:13.368 "uuid": "12578ab2-3d37-47c0-8678-8d0173625457", 00:18:13.368 "is_configured": true, 00:18:13.368 "data_offset": 2048, 00:18:13.368 "data_size": 63488 00:18:13.368 } 00:18:13.368 ] 00:18:13.368 } 00:18:13.368 } 00:18:13.368 }' 00:18:13.368 18:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:13.368 18:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:13.368 BaseBdev2 00:18:13.368 BaseBdev3' 00:18:13.368 18:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:13.368 18:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:13.368 18:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- 
# jq '.[]' 00:18:13.627 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:13.627 "name": "BaseBdev1", 00:18:13.627 "aliases": [ 00:18:13.627 "deec55f4-284e-488f-b90e-6dd454fd7972" 00:18:13.627 ], 00:18:13.627 "product_name": "Malloc disk", 00:18:13.627 "block_size": 512, 00:18:13.627 "num_blocks": 65536, 00:18:13.627 "uuid": "deec55f4-284e-488f-b90e-6dd454fd7972", 00:18:13.627 "assigned_rate_limits": { 00:18:13.627 "rw_ios_per_sec": 0, 00:18:13.627 "rw_mbytes_per_sec": 0, 00:18:13.627 "r_mbytes_per_sec": 0, 00:18:13.627 "w_mbytes_per_sec": 0 00:18:13.627 }, 00:18:13.627 "claimed": true, 00:18:13.627 "claim_type": "exclusive_write", 00:18:13.627 "zoned": false, 00:18:13.627 "supported_io_types": { 00:18:13.627 "read": true, 00:18:13.627 "write": true, 00:18:13.627 "unmap": true, 00:18:13.627 "flush": true, 00:18:13.627 "reset": true, 00:18:13.627 "nvme_admin": false, 00:18:13.627 "nvme_io": false, 00:18:13.627 "nvme_io_md": false, 00:18:13.627 "write_zeroes": true, 00:18:13.627 "zcopy": true, 00:18:13.627 "get_zone_info": false, 00:18:13.627 "zone_management": false, 00:18:13.627 "zone_append": false, 00:18:13.627 "compare": false, 00:18:13.627 "compare_and_write": false, 00:18:13.627 "abort": true, 00:18:13.627 "seek_hole": false, 00:18:13.627 "seek_data": false, 00:18:13.627 "copy": true, 00:18:13.627 "nvme_iov_md": false 00:18:13.627 }, 00:18:13.627 "memory_domains": [ 00:18:13.627 { 00:18:13.627 "dma_device_id": "system", 00:18:13.627 "dma_device_type": 1 00:18:13.627 }, 00:18:13.627 { 00:18:13.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.627 "dma_device_type": 2 00:18:13.627 } 00:18:13.627 ], 00:18:13.627 "driver_specific": {} 00:18:13.627 }' 00:18:13.627 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:13.627 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:13.627 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:13.627 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:13.627 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:13.627 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:13.886 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:13.886 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:13.886 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:13.886 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:13.886 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:13.886 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:13.886 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:13.886 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:13.886 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:14.144 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:14.144 "name": "BaseBdev2", 
00:18:14.144 "aliases": [ 00:18:14.144 "a989cb9d-87d6-40d1-a5e3-7b5347e92f3a" 00:18:14.144 ], 00:18:14.144 "product_name": "Malloc disk", 00:18:14.144 "block_size": 512, 00:18:14.144 "num_blocks": 65536, 00:18:14.144 "uuid": "a989cb9d-87d6-40d1-a5e3-7b5347e92f3a", 00:18:14.144 "assigned_rate_limits": { 00:18:14.144 "rw_ios_per_sec": 0, 00:18:14.144 "rw_mbytes_per_sec": 0, 00:18:14.144 "r_mbytes_per_sec": 0, 00:18:14.144 "w_mbytes_per_sec": 0 00:18:14.144 }, 00:18:14.144 "claimed": true, 00:18:14.144 "claim_type": "exclusive_write", 00:18:14.144 "zoned": false, 00:18:14.144 "supported_io_types": { 00:18:14.144 "read": true, 00:18:14.144 "write": true, 00:18:14.144 "unmap": true, 00:18:14.144 "flush": true, 00:18:14.144 "reset": true, 00:18:14.144 "nvme_admin": false, 00:18:14.144 "nvme_io": false, 00:18:14.144 "nvme_io_md": false, 00:18:14.144 "write_zeroes": true, 00:18:14.144 "zcopy": true, 00:18:14.144 "get_zone_info": false, 00:18:14.144 "zone_management": false, 00:18:14.144 "zone_append": false, 00:18:14.144 "compare": false, 00:18:14.144 "compare_and_write": false, 00:18:14.144 "abort": true, 00:18:14.144 "seek_hole": false, 00:18:14.144 "seek_data": false, 00:18:14.144 "copy": true, 00:18:14.144 "nvme_iov_md": false 00:18:14.144 }, 00:18:14.144 "memory_domains": [ 00:18:14.144 { 00:18:14.144 "dma_device_id": "system", 00:18:14.144 "dma_device_type": 1 00:18:14.144 }, 00:18:14.144 { 00:18:14.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.144 "dma_device_type": 2 00:18:14.144 } 00:18:14.144 ], 00:18:14.144 "driver_specific": {} 00:18:14.144 }' 00:18:14.144 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:14.144 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:14.403 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:14.403 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:14.403 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:14.403 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:14.403 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:14.403 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:14.403 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:14.403 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:14.403 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:14.403 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:14.403 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:14.403 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:14.403 18:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:14.662 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:14.663 "name": "BaseBdev3", 00:18:14.663 "aliases": [ 00:18:14.663 "12578ab2-3d37-47c0-8678-8d0173625457" 00:18:14.663 ], 00:18:14.663 "product_name": "Malloc disk", 00:18:14.663 
"block_size": 512, 00:18:14.663 "num_blocks": 65536, 00:18:14.663 "uuid": "12578ab2-3d37-47c0-8678-8d0173625457", 00:18:14.663 "assigned_rate_limits": { 00:18:14.663 "rw_ios_per_sec": 0, 00:18:14.663 "rw_mbytes_per_sec": 0, 00:18:14.663 "r_mbytes_per_sec": 0, 00:18:14.663 "w_mbytes_per_sec": 0 00:18:14.663 }, 00:18:14.663 "claimed": true, 00:18:14.663 "claim_type": "exclusive_write", 00:18:14.663 "zoned": false, 00:18:14.663 "supported_io_types": { 00:18:14.663 "read": true, 00:18:14.663 "write": true, 00:18:14.663 "unmap": true, 00:18:14.663 "flush": true, 00:18:14.663 "reset": true, 00:18:14.663 "nvme_admin": false, 00:18:14.663 "nvme_io": false, 00:18:14.663 "nvme_io_md": false, 00:18:14.663 "write_zeroes": true, 00:18:14.663 "zcopy": true, 00:18:14.663 "get_zone_info": false, 00:18:14.663 "zone_management": false, 00:18:14.663 "zone_append": false, 00:18:14.663 "compare": false, 00:18:14.663 "compare_and_write": false, 00:18:14.663 "abort": true, 00:18:14.663 "seek_hole": false, 00:18:14.663 "seek_data": false, 00:18:14.663 "copy": true, 00:18:14.663 "nvme_iov_md": false 00:18:14.663 }, 00:18:14.663 "memory_domains": [ 00:18:14.663 { 00:18:14.663 "dma_device_id": "system", 00:18:14.663 "dma_device_type": 1 00:18:14.663 }, 00:18:14.663 { 00:18:14.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.663 "dma_device_type": 2 00:18:14.663 } 00:18:14.663 ], 00:18:14.663 "driver_specific": {} 00:18:14.663 }' 00:18:14.663 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:14.663 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:14.663 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:14.663 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:14.921 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:14.921 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:14.921 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:14.921 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:14.921 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:14.921 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:14.921 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:14.921 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:14.921 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:15.180 [2024-07-25 18:45:15.690566] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:15.180 [2024-07-25 18:45:15.690607] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:15.180 [2024-07-25 18:45:15.690661] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.438 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:15.438 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:18:15.438 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 
-- # case $1 in 00:18:15.438 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:18:15.438 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:15.438 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:18:15.438 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:15.438 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:15.438 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:15.438 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:15.438 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:15.438 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:15.438 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:15.438 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:15.438 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:15.438 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.438 18:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.696 18:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:15.696 "name": "Existed_Raid", 00:18:15.696 "uuid": "8fd225ca-9594-41f9-b56d-c6b57742c2f0", 00:18:15.696 "strip_size_kb": 64, 00:18:15.696 "state": "offline", 00:18:15.696 "raid_level": "raid0", 00:18:15.696 "superblock": true, 00:18:15.697 "num_base_bdevs": 3, 00:18:15.697 "num_base_bdevs_discovered": 2, 00:18:15.697 "num_base_bdevs_operational": 2, 00:18:15.697 "base_bdevs_list": [ 00:18:15.697 { 00:18:15.697 "name": null, 00:18:15.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.697 "is_configured": false, 00:18:15.697 "data_offset": 2048, 00:18:15.697 "data_size": 63488 00:18:15.697 }, 00:18:15.697 { 00:18:15.697 "name": "BaseBdev2", 00:18:15.697 "uuid": "a989cb9d-87d6-40d1-a5e3-7b5347e92f3a", 00:18:15.697 "is_configured": true, 00:18:15.697 "data_offset": 2048, 00:18:15.697 "data_size": 63488 00:18:15.697 }, 00:18:15.697 { 00:18:15.697 "name": "BaseBdev3", 00:18:15.697 "uuid": "12578ab2-3d37-47c0-8678-8d0173625457", 00:18:15.697 "is_configured": true, 00:18:15.697 "data_offset": 2048, 00:18:15.697 "data_size": 63488 00:18:15.697 } 00:18:15.697 ] 00:18:15.697 }' 00:18:15.697 18:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:15.697 18:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.263 18:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:16.263 18:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:16.263 18:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:18:16.263 18:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:16.521 18:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:16.521 18:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:16.521 18:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:16.779 [2024-07-25 18:45:17.164674] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:16.779 18:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:16.779 18:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:16.779 18:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.779 18:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:17.037 18:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:17.037 18:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:17.037 18:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:17.295 [2024-07-25 18:45:17.765575] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:17.295 [2024-07-25 18:45:17.765664] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:18:17.295 18:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:17.295 18:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:17.295 18:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.295 18:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:17.553 18:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:17.553 18:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:17.553 18:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:18:17.553 18:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:17.553 18:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:17.553 18:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:17.811 BaseBdev2 00:18:17.811 18:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:17.811 18:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:17.811 18:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:17.811 18:45:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:18:17.811 18:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:17.811 18:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:17.811 18:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:18.070 18:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:18.330 [ 00:18:18.330 { 00:18:18.330 "name": "BaseBdev2", 00:18:18.330 "aliases": [ 00:18:18.330 "06d15394-74bb-4391-8d2e-ab0e6607aa18" 00:18:18.330 ], 00:18:18.330 "product_name": "Malloc disk", 00:18:18.330 "block_size": 512, 00:18:18.330 "num_blocks": 65536, 00:18:18.330 "uuid": "06d15394-74bb-4391-8d2e-ab0e6607aa18", 00:18:18.330 "assigned_rate_limits": { 00:18:18.330 "rw_ios_per_sec": 0, 00:18:18.330 "rw_mbytes_per_sec": 0, 00:18:18.330 "r_mbytes_per_sec": 0, 00:18:18.330 "w_mbytes_per_sec": 0 00:18:18.330 }, 00:18:18.330 "claimed": false, 00:18:18.330 "zoned": false, 00:18:18.330 "supported_io_types": { 00:18:18.330 "read": true, 00:18:18.330 "write": true, 00:18:18.330 "unmap": true, 00:18:18.330 "flush": true, 00:18:18.330 "reset": true, 00:18:18.330 "nvme_admin": false, 00:18:18.330 "nvme_io": false, 00:18:18.330 "nvme_io_md": false, 00:18:18.330 "write_zeroes": true, 00:18:18.330 "zcopy": true, 00:18:18.330 "get_zone_info": false, 00:18:18.330 "zone_management": false, 00:18:18.330 "zone_append": false, 00:18:18.330 "compare": false, 00:18:18.330 "compare_and_write": false, 00:18:18.330 "abort": true, 00:18:18.330 "seek_hole": false, 00:18:18.330 "seek_data": false, 00:18:18.330 "copy": true, 00:18:18.330 "nvme_iov_md": false 00:18:18.330 }, 00:18:18.330 "memory_domains": [ 00:18:18.330 { 00:18:18.330 "dma_device_id": "system", 00:18:18.330 "dma_device_type": 1 00:18:18.330 }, 00:18:18.330 { 00:18:18.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.330 "dma_device_type": 2 00:18:18.330 } 00:18:18.330 ], 00:18:18.330 "driver_specific": {} 00:18:18.330 } 00:18:18.330 ] 00:18:18.330 18:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:18.330 18:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:18.330 18:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:18.330 18:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:18.588 BaseBdev3 00:18:18.588 18:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:18.588 18:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:18.588 18:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:18.588 18:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:18.588 18:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:18.588 18:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:18.588 18:45:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:18.588 18:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:18.847 [ 00:18:18.847 { 00:18:18.847 "name": "BaseBdev3", 00:18:18.847 "aliases": [ 00:18:18.847 "023ac0b2-2037-4aa3-b973-7610cbd8186c" 00:18:18.847 ], 00:18:18.847 "product_name": "Malloc disk", 00:18:18.847 "block_size": 512, 00:18:18.847 "num_blocks": 65536, 00:18:18.847 "uuid": "023ac0b2-2037-4aa3-b973-7610cbd8186c", 00:18:18.847 "assigned_rate_limits": { 00:18:18.847 "rw_ios_per_sec": 0, 00:18:18.847 "rw_mbytes_per_sec": 0, 00:18:18.847 "r_mbytes_per_sec": 0, 00:18:18.847 "w_mbytes_per_sec": 0 00:18:18.847 }, 00:18:18.847 "claimed": false, 00:18:18.847 "zoned": false, 00:18:18.847 "supported_io_types": { 00:18:18.847 "read": true, 00:18:18.847 "write": true, 00:18:18.847 "unmap": true, 00:18:18.847 "flush": true, 00:18:18.847 "reset": true, 00:18:18.847 "nvme_admin": false, 00:18:18.847 "nvme_io": false, 00:18:18.847 "nvme_io_md": false, 00:18:18.847 "write_zeroes": true, 00:18:18.847 "zcopy": true, 00:18:18.847 "get_zone_info": false, 00:18:18.847 "zone_management": false, 00:18:18.847 "zone_append": false, 00:18:18.847 "compare": false, 00:18:18.847 "compare_and_write": false, 00:18:18.847 "abort": true, 00:18:18.847 "seek_hole": false, 00:18:18.847 "seek_data": false, 00:18:18.847 "copy": true, 00:18:18.847 "nvme_iov_md": false 00:18:18.847 }, 00:18:18.847 "memory_domains": [ 00:18:18.847 { 00:18:18.847 "dma_device_id": "system", 00:18:18.847 "dma_device_type": 1 00:18:18.847 }, 00:18:18.847 { 00:18:18.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.847 "dma_device_type": 2 00:18:18.847 } 00:18:18.847 ], 00:18:18.847 "driver_specific": {} 00:18:18.847 } 00:18:18.847 ] 00:18:18.847 18:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:18.847 18:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:18.847 18:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:18.847 18:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:19.106 [2024-07-25 18:45:19.525839] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:19.106 [2024-07-25 18:45:19.525944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:19.106 [2024-07-25 18:45:19.525997] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:19.107 [2024-07-25 18:45:19.528369] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:19.107 18:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:19.107 18:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:19.107 18:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:19.107 18:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- 
# local raid_level=raid0 00:18:19.107 18:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:19.107 18:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:19.107 18:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:19.107 18:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:19.107 18:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:19.107 18:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:19.107 18:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.107 18:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.366 18:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:19.366 "name": "Existed_Raid", 00:18:19.366 "uuid": "61a9b78d-a82d-4944-81e0-55cb8e320dd3", 00:18:19.366 "strip_size_kb": 64, 00:18:19.366 "state": "configuring", 00:18:19.366 "raid_level": "raid0", 00:18:19.366 "superblock": true, 00:18:19.366 "num_base_bdevs": 3, 00:18:19.366 "num_base_bdevs_discovered": 2, 00:18:19.366 "num_base_bdevs_operational": 3, 00:18:19.366 "base_bdevs_list": [ 00:18:19.366 { 00:18:19.366 "name": "BaseBdev1", 00:18:19.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.366 "is_configured": false, 00:18:19.366 "data_offset": 0, 00:18:19.366 "data_size": 0 00:18:19.366 }, 00:18:19.366 { 00:18:19.366 "name": "BaseBdev2", 00:18:19.366 "uuid": "06d15394-74bb-4391-8d2e-ab0e6607aa18", 00:18:19.366 "is_configured": true, 00:18:19.366 "data_offset": 2048, 00:18:19.366 "data_size": 63488 00:18:19.366 }, 00:18:19.366 { 00:18:19.366 "name": "BaseBdev3", 00:18:19.366 "uuid": "023ac0b2-2037-4aa3-b973-7610cbd8186c", 00:18:19.366 "is_configured": true, 00:18:19.366 "data_offset": 2048, 00:18:19.366 "data_size": 63488 00:18:19.366 } 00:18:19.366 ] 00:18:19.366 }' 00:18:19.366 18:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:19.366 18:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.934 18:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:20.193 [2024-07-25 18:45:20.586028] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:20.193 18:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:20.193 18:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:20.193 18:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:20.193 18:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:20.193 18:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:20.193 18:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:20.193 18:45:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:20.193 18:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:20.194 18:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:20.194 18:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:20.194 18:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.194 18:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.453 18:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:20.453 "name": "Existed_Raid", 00:18:20.453 "uuid": "61a9b78d-a82d-4944-81e0-55cb8e320dd3", 00:18:20.453 "strip_size_kb": 64, 00:18:20.453 "state": "configuring", 00:18:20.453 "raid_level": "raid0", 00:18:20.453 "superblock": true, 00:18:20.453 "num_base_bdevs": 3, 00:18:20.453 "num_base_bdevs_discovered": 1, 00:18:20.453 "num_base_bdevs_operational": 3, 00:18:20.453 "base_bdevs_list": [ 00:18:20.453 { 00:18:20.453 "name": "BaseBdev1", 00:18:20.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.453 "is_configured": false, 00:18:20.453 "data_offset": 0, 00:18:20.453 "data_size": 0 00:18:20.453 }, 00:18:20.453 { 00:18:20.453 "name": null, 00:18:20.453 "uuid": "06d15394-74bb-4391-8d2e-ab0e6607aa18", 00:18:20.453 "is_configured": false, 00:18:20.453 "data_offset": 2048, 00:18:20.453 "data_size": 63488 00:18:20.453 }, 00:18:20.453 { 00:18:20.453 "name": "BaseBdev3", 00:18:20.453 "uuid": "023ac0b2-2037-4aa3-b973-7610cbd8186c", 00:18:20.453 "is_configured": true, 00:18:20.453 "data_offset": 2048, 00:18:20.453 "data_size": 63488 00:18:20.453 } 00:18:20.453 ] 00:18:20.453 }' 00:18:20.453 18:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:20.453 18:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.020 18:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.020 18:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:21.279 18:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:21.279 18:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:21.537 [2024-07-25 18:45:21.995380] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:21.537 BaseBdev1 00:18:21.537 18:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:21.537 18:45:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:21.537 18:45:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:21.537 18:45:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:21.537 18:45:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:21.537 18:45:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:21.537 18:45:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:21.795 18:45:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:21.795 [ 00:18:21.795 { 00:18:21.795 "name": "BaseBdev1", 00:18:21.795 "aliases": [ 00:18:21.795 "c751c750-fe7b-420c-b845-21808467e2b0" 00:18:21.795 ], 00:18:21.795 "product_name": "Malloc disk", 00:18:21.795 "block_size": 512, 00:18:21.795 "num_blocks": 65536, 00:18:21.795 "uuid": "c751c750-fe7b-420c-b845-21808467e2b0", 00:18:21.795 "assigned_rate_limits": { 00:18:21.795 "rw_ios_per_sec": 0, 00:18:21.795 "rw_mbytes_per_sec": 0, 00:18:21.795 "r_mbytes_per_sec": 0, 00:18:21.795 "w_mbytes_per_sec": 0 00:18:21.795 }, 00:18:21.795 "claimed": true, 00:18:21.795 "claim_type": "exclusive_write", 00:18:21.795 "zoned": false, 00:18:21.795 "supported_io_types": { 00:18:21.795 "read": true, 00:18:21.795 "write": true, 00:18:21.795 "unmap": true, 00:18:21.795 "flush": true, 00:18:21.795 "reset": true, 00:18:21.795 "nvme_admin": false, 00:18:21.795 "nvme_io": false, 00:18:21.795 "nvme_io_md": false, 00:18:21.795 "write_zeroes": true, 00:18:21.795 "zcopy": true, 00:18:21.795 "get_zone_info": false, 00:18:21.795 "zone_management": false, 00:18:21.795 "zone_append": false, 00:18:21.795 "compare": false, 00:18:21.795 "compare_and_write": false, 00:18:21.795 "abort": true, 00:18:21.795 "seek_hole": false, 00:18:21.795 "seek_data": false, 00:18:21.795 "copy": true, 00:18:21.795 "nvme_iov_md": false 00:18:21.795 }, 00:18:21.795 "memory_domains": [ 00:18:21.795 { 00:18:21.795 "dma_device_id": "system", 00:18:21.795 "dma_device_type": 1 00:18:21.795 }, 00:18:21.795 { 00:18:21.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.795 "dma_device_type": 2 00:18:21.795 } 00:18:21.795 ], 00:18:21.795 "driver_specific": {} 00:18:21.795 } 00:18:21.795 ] 00:18:22.054 18:45:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:22.054 18:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:22.054 18:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:22.054 18:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:22.054 18:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:22.054 18:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:22.054 18:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:22.054 18:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:22.054 18:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:22.054 18:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:22.054 18:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:22.054 18:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.054 18:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.313 18:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:22.313 "name": "Existed_Raid", 00:18:22.313 "uuid": "61a9b78d-a82d-4944-81e0-55cb8e320dd3", 00:18:22.313 "strip_size_kb": 64, 00:18:22.313 "state": "configuring", 00:18:22.313 "raid_level": "raid0", 00:18:22.313 "superblock": true, 00:18:22.313 "num_base_bdevs": 3, 00:18:22.313 "num_base_bdevs_discovered": 2, 00:18:22.313 "num_base_bdevs_operational": 3, 00:18:22.313 "base_bdevs_list": [ 00:18:22.313 { 00:18:22.313 "name": "BaseBdev1", 00:18:22.313 "uuid": "c751c750-fe7b-420c-b845-21808467e2b0", 00:18:22.313 "is_configured": true, 00:18:22.313 "data_offset": 2048, 00:18:22.313 "data_size": 63488 00:18:22.313 }, 00:18:22.313 { 00:18:22.313 "name": null, 00:18:22.313 "uuid": "06d15394-74bb-4391-8d2e-ab0e6607aa18", 00:18:22.313 "is_configured": false, 00:18:22.313 "data_offset": 2048, 00:18:22.313 "data_size": 63488 00:18:22.313 }, 00:18:22.313 { 00:18:22.313 "name": "BaseBdev3", 00:18:22.313 "uuid": "023ac0b2-2037-4aa3-b973-7610cbd8186c", 00:18:22.313 "is_configured": true, 00:18:22.313 "data_offset": 2048, 00:18:22.313 "data_size": 63488 00:18:22.313 } 00:18:22.313 ] 00:18:22.313 }' 00:18:22.313 18:45:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:22.313 18:45:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.571 18:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.571 18:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:22.828 18:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:22.828 18:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:23.085 [2024-07-25 18:45:23.556283] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:23.085 18:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:23.085 18:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:23.085 18:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:23.085 18:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:23.085 18:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:23.085 18:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:23.085 18:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:23.085 18:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:23.085 18:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:23.085 18:45:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:18:23.085 18:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.085 18:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.344 18:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:23.344 "name": "Existed_Raid", 00:18:23.344 "uuid": "61a9b78d-a82d-4944-81e0-55cb8e320dd3", 00:18:23.344 "strip_size_kb": 64, 00:18:23.344 "state": "configuring", 00:18:23.344 "raid_level": "raid0", 00:18:23.344 "superblock": true, 00:18:23.344 "num_base_bdevs": 3, 00:18:23.344 "num_base_bdevs_discovered": 1, 00:18:23.344 "num_base_bdevs_operational": 3, 00:18:23.344 "base_bdevs_list": [ 00:18:23.344 { 00:18:23.344 "name": "BaseBdev1", 00:18:23.344 "uuid": "c751c750-fe7b-420c-b845-21808467e2b0", 00:18:23.344 "is_configured": true, 00:18:23.344 "data_offset": 2048, 00:18:23.344 "data_size": 63488 00:18:23.344 }, 00:18:23.344 { 00:18:23.344 "name": null, 00:18:23.344 "uuid": "06d15394-74bb-4391-8d2e-ab0e6607aa18", 00:18:23.344 "is_configured": false, 00:18:23.344 "data_offset": 2048, 00:18:23.344 "data_size": 63488 00:18:23.344 }, 00:18:23.344 { 00:18:23.344 "name": null, 00:18:23.344 "uuid": "023ac0b2-2037-4aa3-b973-7610cbd8186c", 00:18:23.344 "is_configured": false, 00:18:23.344 "data_offset": 2048, 00:18:23.344 "data_size": 63488 00:18:23.344 } 00:18:23.344 ] 00:18:23.344 }' 00:18:23.344 18:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:23.344 18:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.912 18:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.912 18:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:24.170 18:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:24.170 18:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:24.427 [2024-07-25 18:45:24.888548] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:24.427 18:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:24.427 18:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:24.427 18:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:24.428 18:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:24.428 18:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:24.428 18:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:24.428 18:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:24.428 18:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:24.428 18:45:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:24.428 18:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:24.428 18:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.428 18:45:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.686 18:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:24.686 "name": "Existed_Raid", 00:18:24.686 "uuid": "61a9b78d-a82d-4944-81e0-55cb8e320dd3", 00:18:24.686 "strip_size_kb": 64, 00:18:24.686 "state": "configuring", 00:18:24.686 "raid_level": "raid0", 00:18:24.686 "superblock": true, 00:18:24.686 "num_base_bdevs": 3, 00:18:24.686 "num_base_bdevs_discovered": 2, 00:18:24.686 "num_base_bdevs_operational": 3, 00:18:24.686 "base_bdevs_list": [ 00:18:24.686 { 00:18:24.686 "name": "BaseBdev1", 00:18:24.686 "uuid": "c751c750-fe7b-420c-b845-21808467e2b0", 00:18:24.686 "is_configured": true, 00:18:24.686 "data_offset": 2048, 00:18:24.686 "data_size": 63488 00:18:24.686 }, 00:18:24.686 { 00:18:24.686 "name": null, 00:18:24.686 "uuid": "06d15394-74bb-4391-8d2e-ab0e6607aa18", 00:18:24.686 "is_configured": false, 00:18:24.686 "data_offset": 2048, 00:18:24.686 "data_size": 63488 00:18:24.686 }, 00:18:24.686 { 00:18:24.686 "name": "BaseBdev3", 00:18:24.686 "uuid": "023ac0b2-2037-4aa3-b973-7610cbd8186c", 00:18:24.686 "is_configured": true, 00:18:24.686 "data_offset": 2048, 00:18:24.686 "data_size": 63488 00:18:24.686 } 00:18:24.686 ] 00:18:24.686 }' 00:18:24.686 18:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:24.686 18:45:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.253 18:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:25.253 18:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.511 18:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:25.511 18:45:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:25.511 [2024-07-25 18:45:26.048813] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:25.770 18:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:25.770 18:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:25.770 18:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:25.770 18:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:25.770 18:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:25.770 18:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:25.770 18:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:25.770 18:45:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:25.770 18:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:25.770 18:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:25.770 18:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.770 18:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.770 18:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:25.770 "name": "Existed_Raid", 00:18:25.770 "uuid": "61a9b78d-a82d-4944-81e0-55cb8e320dd3", 00:18:25.770 "strip_size_kb": 64, 00:18:25.770 "state": "configuring", 00:18:25.770 "raid_level": "raid0", 00:18:25.770 "superblock": true, 00:18:25.770 "num_base_bdevs": 3, 00:18:25.770 "num_base_bdevs_discovered": 1, 00:18:25.770 "num_base_bdevs_operational": 3, 00:18:25.770 "base_bdevs_list": [ 00:18:25.770 { 00:18:25.770 "name": null, 00:18:25.770 "uuid": "c751c750-fe7b-420c-b845-21808467e2b0", 00:18:25.770 "is_configured": false, 00:18:25.770 "data_offset": 2048, 00:18:25.770 "data_size": 63488 00:18:25.770 }, 00:18:25.770 { 00:18:25.770 "name": null, 00:18:25.770 "uuid": "06d15394-74bb-4391-8d2e-ab0e6607aa18", 00:18:25.770 "is_configured": false, 00:18:25.770 "data_offset": 2048, 00:18:25.770 "data_size": 63488 00:18:25.770 }, 00:18:25.770 { 00:18:25.770 "name": "BaseBdev3", 00:18:25.770 "uuid": "023ac0b2-2037-4aa3-b973-7610cbd8186c", 00:18:25.770 "is_configured": true, 00:18:25.770 "data_offset": 2048, 00:18:25.770 "data_size": 63488 00:18:25.770 } 00:18:25.770 ] 00:18:25.770 }' 00:18:25.770 18:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:25.770 18:45:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.706 18:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.706 18:45:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:26.706 18:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:26.706 18:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:26.706 [2024-07-25 18:45:27.280145] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:26.965 18:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:26.965 18:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:26.965 18:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:26.965 18:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:26.965 18:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:26.965 18:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:18:26.965 18:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:26.965 18:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:26.965 18:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:26.965 18:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:26.965 18:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.965 18:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.224 18:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:27.224 "name": "Existed_Raid", 00:18:27.224 "uuid": "61a9b78d-a82d-4944-81e0-55cb8e320dd3", 00:18:27.224 "strip_size_kb": 64, 00:18:27.224 "state": "configuring", 00:18:27.224 "raid_level": "raid0", 00:18:27.224 "superblock": true, 00:18:27.224 "num_base_bdevs": 3, 00:18:27.224 "num_base_bdevs_discovered": 2, 00:18:27.224 "num_base_bdevs_operational": 3, 00:18:27.224 "base_bdevs_list": [ 00:18:27.224 { 00:18:27.224 "name": null, 00:18:27.224 "uuid": "c751c750-fe7b-420c-b845-21808467e2b0", 00:18:27.224 "is_configured": false, 00:18:27.224 "data_offset": 2048, 00:18:27.224 "data_size": 63488 00:18:27.224 }, 00:18:27.224 { 00:18:27.224 "name": "BaseBdev2", 00:18:27.224 "uuid": "06d15394-74bb-4391-8d2e-ab0e6607aa18", 00:18:27.224 "is_configured": true, 00:18:27.224 "data_offset": 2048, 00:18:27.224 "data_size": 63488 00:18:27.224 }, 00:18:27.224 { 00:18:27.224 "name": "BaseBdev3", 00:18:27.224 "uuid": "023ac0b2-2037-4aa3-b973-7610cbd8186c", 00:18:27.224 "is_configured": true, 00:18:27.224 "data_offset": 2048, 00:18:27.224 "data_size": 63488 00:18:27.224 } 00:18:27.224 ] 00:18:27.224 }' 00:18:27.224 18:45:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:27.224 18:45:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.791 18:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.791 18:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:27.792 18:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:27.792 18:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.792 18:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:28.050 18:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u c751c750-fe7b-420c-b845-21808467e2b0 00:18:28.308 [2024-07-25 18:45:28.742048] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:28.308 [2024-07-25 18:45:28.742264] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:18:28.308 [2024-07-25 18:45:28.742276] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 190464, blocklen 512 00:18:28.308 [2024-07-25 18:45:28.742379] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:28.308 [2024-07-25 18:45:28.742689] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:18:28.308 [2024-07-25 18:45:28.742700] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:18:28.308 [2024-07-25 18:45:28.742837] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.308 NewBaseBdev 00:18:28.308 18:45:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:28.308 18:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:18:28.308 18:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:28.308 18:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:28.308 18:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:28.308 18:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:28.308 18:45:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:28.566 18:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:28.825 [ 00:18:28.825 { 00:18:28.825 "name": "NewBaseBdev", 00:18:28.825 "aliases": [ 00:18:28.825 "c751c750-fe7b-420c-b845-21808467e2b0" 00:18:28.825 ], 00:18:28.825 "product_name": "Malloc disk", 00:18:28.825 "block_size": 512, 00:18:28.825 "num_blocks": 65536, 00:18:28.825 "uuid": "c751c750-fe7b-420c-b845-21808467e2b0", 00:18:28.825 "assigned_rate_limits": { 00:18:28.825 "rw_ios_per_sec": 0, 00:18:28.825 "rw_mbytes_per_sec": 0, 00:18:28.825 "r_mbytes_per_sec": 0, 00:18:28.825 "w_mbytes_per_sec": 0 00:18:28.825 }, 00:18:28.825 "claimed": true, 00:18:28.825 "claim_type": "exclusive_write", 00:18:28.825 "zoned": false, 00:18:28.825 "supported_io_types": { 00:18:28.825 "read": true, 00:18:28.825 "write": true, 00:18:28.825 "unmap": true, 00:18:28.825 "flush": true, 00:18:28.825 "reset": true, 00:18:28.825 "nvme_admin": false, 00:18:28.825 "nvme_io": false, 00:18:28.825 "nvme_io_md": false, 00:18:28.825 "write_zeroes": true, 00:18:28.825 "zcopy": true, 00:18:28.825 "get_zone_info": false, 00:18:28.825 "zone_management": false, 00:18:28.825 "zone_append": false, 00:18:28.825 "compare": false, 00:18:28.825 "compare_and_write": false, 00:18:28.825 "abort": true, 00:18:28.825 "seek_hole": false, 00:18:28.825 "seek_data": false, 00:18:28.825 "copy": true, 00:18:28.825 "nvme_iov_md": false 00:18:28.825 }, 00:18:28.825 "memory_domains": [ 00:18:28.825 { 00:18:28.825 "dma_device_id": "system", 00:18:28.825 "dma_device_type": 1 00:18:28.825 }, 00:18:28.825 { 00:18:28.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.825 "dma_device_type": 2 00:18:28.825 } 00:18:28.825 ], 00:18:28.825 "driver_specific": {} 00:18:28.825 } 00:18:28.825 ] 00:18:28.825 18:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:28.825 18:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:18:28.825 18:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:28.825 18:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:28.826 18:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:28.826 18:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:28.826 18:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:28.826 18:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:28.826 18:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:28.826 18:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:28.826 18:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:28.826 18:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.826 18:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.084 18:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:29.084 "name": "Existed_Raid", 00:18:29.084 "uuid": "61a9b78d-a82d-4944-81e0-55cb8e320dd3", 00:18:29.084 "strip_size_kb": 64, 00:18:29.084 "state": "online", 00:18:29.084 "raid_level": "raid0", 00:18:29.084 "superblock": true, 00:18:29.084 "num_base_bdevs": 3, 00:18:29.084 "num_base_bdevs_discovered": 3, 00:18:29.084 "num_base_bdevs_operational": 3, 00:18:29.084 "base_bdevs_list": [ 00:18:29.084 { 00:18:29.084 "name": "NewBaseBdev", 00:18:29.084 "uuid": "c751c750-fe7b-420c-b845-21808467e2b0", 00:18:29.084 "is_configured": true, 00:18:29.084 "data_offset": 2048, 00:18:29.084 "data_size": 63488 00:18:29.084 }, 00:18:29.084 { 00:18:29.084 "name": "BaseBdev2", 00:18:29.084 "uuid": "06d15394-74bb-4391-8d2e-ab0e6607aa18", 00:18:29.084 "is_configured": true, 00:18:29.084 "data_offset": 2048, 00:18:29.084 "data_size": 63488 00:18:29.084 }, 00:18:29.084 { 00:18:29.084 "name": "BaseBdev3", 00:18:29.084 "uuid": "023ac0b2-2037-4aa3-b973-7610cbd8186c", 00:18:29.084 "is_configured": true, 00:18:29.084 "data_offset": 2048, 00:18:29.084 "data_size": 63488 00:18:29.084 } 00:18:29.084 ] 00:18:29.084 }' 00:18:29.084 18:45:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:29.084 18:45:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.652 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:29.652 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:29.652 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:29.652 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:29.652 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:29.652 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:29.652 18:45:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:29.652 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:29.911 [2024-07-25 18:45:30.274663] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:29.911 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:29.911 "name": "Existed_Raid", 00:18:29.911 "aliases": [ 00:18:29.911 "61a9b78d-a82d-4944-81e0-55cb8e320dd3" 00:18:29.911 ], 00:18:29.911 "product_name": "Raid Volume", 00:18:29.911 "block_size": 512, 00:18:29.911 "num_blocks": 190464, 00:18:29.911 "uuid": "61a9b78d-a82d-4944-81e0-55cb8e320dd3", 00:18:29.911 "assigned_rate_limits": { 00:18:29.911 "rw_ios_per_sec": 0, 00:18:29.911 "rw_mbytes_per_sec": 0, 00:18:29.911 "r_mbytes_per_sec": 0, 00:18:29.911 "w_mbytes_per_sec": 0 00:18:29.911 }, 00:18:29.911 "claimed": false, 00:18:29.911 "zoned": false, 00:18:29.911 "supported_io_types": { 00:18:29.911 "read": true, 00:18:29.911 "write": true, 00:18:29.911 "unmap": true, 00:18:29.911 "flush": true, 00:18:29.911 "reset": true, 00:18:29.911 "nvme_admin": false, 00:18:29.911 "nvme_io": false, 00:18:29.911 "nvme_io_md": false, 00:18:29.911 "write_zeroes": true, 00:18:29.911 "zcopy": false, 00:18:29.911 "get_zone_info": false, 00:18:29.911 "zone_management": false, 00:18:29.911 "zone_append": false, 00:18:29.911 "compare": false, 00:18:29.911 "compare_and_write": false, 00:18:29.911 "abort": false, 00:18:29.911 "seek_hole": false, 00:18:29.911 "seek_data": false, 00:18:29.911 "copy": false, 00:18:29.911 "nvme_iov_md": false 00:18:29.911 }, 00:18:29.911 "memory_domains": [ 00:18:29.911 { 00:18:29.911 "dma_device_id": "system", 00:18:29.911 "dma_device_type": 1 00:18:29.911 }, 00:18:29.911 { 00:18:29.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.911 "dma_device_type": 2 00:18:29.911 }, 00:18:29.911 { 00:18:29.911 "dma_device_id": "system", 00:18:29.911 "dma_device_type": 1 00:18:29.911 }, 00:18:29.911 { 00:18:29.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.911 "dma_device_type": 2 00:18:29.911 }, 00:18:29.911 { 00:18:29.911 "dma_device_id": "system", 00:18:29.911 "dma_device_type": 1 00:18:29.911 }, 00:18:29.911 { 00:18:29.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.911 "dma_device_type": 2 00:18:29.911 } 00:18:29.911 ], 00:18:29.911 "driver_specific": { 00:18:29.911 "raid": { 00:18:29.911 "uuid": "61a9b78d-a82d-4944-81e0-55cb8e320dd3", 00:18:29.911 "strip_size_kb": 64, 00:18:29.911 "state": "online", 00:18:29.911 "raid_level": "raid0", 00:18:29.911 "superblock": true, 00:18:29.911 "num_base_bdevs": 3, 00:18:29.911 "num_base_bdevs_discovered": 3, 00:18:29.911 "num_base_bdevs_operational": 3, 00:18:29.911 "base_bdevs_list": [ 00:18:29.911 { 00:18:29.911 "name": "NewBaseBdev", 00:18:29.911 "uuid": "c751c750-fe7b-420c-b845-21808467e2b0", 00:18:29.911 "is_configured": true, 00:18:29.911 "data_offset": 2048, 00:18:29.911 "data_size": 63488 00:18:29.911 }, 00:18:29.911 { 00:18:29.911 "name": "BaseBdev2", 00:18:29.911 "uuid": "06d15394-74bb-4391-8d2e-ab0e6607aa18", 00:18:29.911 "is_configured": true, 00:18:29.911 "data_offset": 2048, 00:18:29.911 "data_size": 63488 00:18:29.911 }, 00:18:29.911 { 00:18:29.911 "name": "BaseBdev3", 00:18:29.911 "uuid": "023ac0b2-2037-4aa3-b973-7610cbd8186c", 00:18:29.911 "is_configured": true, 00:18:29.911 "data_offset": 2048, 00:18:29.911 "data_size": 
63488 00:18:29.911 } 00:18:29.911 ] 00:18:29.911 } 00:18:29.911 } 00:18:29.911 }' 00:18:29.911 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:29.911 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:29.911 BaseBdev2 00:18:29.911 BaseBdev3' 00:18:29.911 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:29.911 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:29.911 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:30.170 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:30.170 "name": "NewBaseBdev", 00:18:30.170 "aliases": [ 00:18:30.170 "c751c750-fe7b-420c-b845-21808467e2b0" 00:18:30.170 ], 00:18:30.170 "product_name": "Malloc disk", 00:18:30.170 "block_size": 512, 00:18:30.170 "num_blocks": 65536, 00:18:30.170 "uuid": "c751c750-fe7b-420c-b845-21808467e2b0", 00:18:30.170 "assigned_rate_limits": { 00:18:30.170 "rw_ios_per_sec": 0, 00:18:30.170 "rw_mbytes_per_sec": 0, 00:18:30.170 "r_mbytes_per_sec": 0, 00:18:30.170 "w_mbytes_per_sec": 0 00:18:30.170 }, 00:18:30.170 "claimed": true, 00:18:30.170 "claim_type": "exclusive_write", 00:18:30.170 "zoned": false, 00:18:30.170 "supported_io_types": { 00:18:30.170 "read": true, 00:18:30.170 "write": true, 00:18:30.170 "unmap": true, 00:18:30.170 "flush": true, 00:18:30.170 "reset": true, 00:18:30.170 "nvme_admin": false, 00:18:30.170 "nvme_io": false, 00:18:30.170 "nvme_io_md": false, 00:18:30.170 "write_zeroes": true, 00:18:30.170 "zcopy": true, 00:18:30.170 "get_zone_info": false, 00:18:30.170 "zone_management": false, 00:18:30.170 "zone_append": false, 00:18:30.170 "compare": false, 00:18:30.170 "compare_and_write": false, 00:18:30.170 "abort": true, 00:18:30.170 "seek_hole": false, 00:18:30.170 "seek_data": false, 00:18:30.170 "copy": true, 00:18:30.170 "nvme_iov_md": false 00:18:30.170 }, 00:18:30.170 "memory_domains": [ 00:18:30.170 { 00:18:30.170 "dma_device_id": "system", 00:18:30.170 "dma_device_type": 1 00:18:30.170 }, 00:18:30.170 { 00:18:30.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.170 "dma_device_type": 2 00:18:30.170 } 00:18:30.170 ], 00:18:30.170 "driver_specific": {} 00:18:30.170 }' 00:18:30.170 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:30.170 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:30.170 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:30.170 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:30.170 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:30.170 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:30.170 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:30.170 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:30.170 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:30.429 18:45:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:30.430 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:30.430 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:30.430 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:30.430 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:30.430 18:45:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:30.688 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:30.688 "name": "BaseBdev2", 00:18:30.688 "aliases": [ 00:18:30.688 "06d15394-74bb-4391-8d2e-ab0e6607aa18" 00:18:30.688 ], 00:18:30.688 "product_name": "Malloc disk", 00:18:30.688 "block_size": 512, 00:18:30.688 "num_blocks": 65536, 00:18:30.688 "uuid": "06d15394-74bb-4391-8d2e-ab0e6607aa18", 00:18:30.688 "assigned_rate_limits": { 00:18:30.688 "rw_ios_per_sec": 0, 00:18:30.688 "rw_mbytes_per_sec": 0, 00:18:30.688 "r_mbytes_per_sec": 0, 00:18:30.688 "w_mbytes_per_sec": 0 00:18:30.688 }, 00:18:30.688 "claimed": true, 00:18:30.688 "claim_type": "exclusive_write", 00:18:30.688 "zoned": false, 00:18:30.688 "supported_io_types": { 00:18:30.688 "read": true, 00:18:30.688 "write": true, 00:18:30.688 "unmap": true, 00:18:30.688 "flush": true, 00:18:30.688 "reset": true, 00:18:30.688 "nvme_admin": false, 00:18:30.688 "nvme_io": false, 00:18:30.688 "nvme_io_md": false, 00:18:30.688 "write_zeroes": true, 00:18:30.688 "zcopy": true, 00:18:30.688 "get_zone_info": false, 00:18:30.688 "zone_management": false, 00:18:30.688 "zone_append": false, 00:18:30.688 "compare": false, 00:18:30.688 "compare_and_write": false, 00:18:30.688 "abort": true, 00:18:30.688 "seek_hole": false, 00:18:30.688 "seek_data": false, 00:18:30.688 "copy": true, 00:18:30.688 "nvme_iov_md": false 00:18:30.688 }, 00:18:30.688 "memory_domains": [ 00:18:30.688 { 00:18:30.688 "dma_device_id": "system", 00:18:30.688 "dma_device_type": 1 00:18:30.688 }, 00:18:30.688 { 00:18:30.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.688 "dma_device_type": 2 00:18:30.688 } 00:18:30.688 ], 00:18:30.688 "driver_specific": {} 00:18:30.688 }' 00:18:30.688 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:30.688 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:30.688 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:30.688 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:30.688 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:30.688 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:30.688 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:30.688 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:30.947 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:30.947 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:30.947 18:45:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:30.947 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:30.947 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:30.947 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:30.947 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:31.206 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:31.206 "name": "BaseBdev3", 00:18:31.206 "aliases": [ 00:18:31.206 "023ac0b2-2037-4aa3-b973-7610cbd8186c" 00:18:31.206 ], 00:18:31.206 "product_name": "Malloc disk", 00:18:31.206 "block_size": 512, 00:18:31.206 "num_blocks": 65536, 00:18:31.206 "uuid": "023ac0b2-2037-4aa3-b973-7610cbd8186c", 00:18:31.206 "assigned_rate_limits": { 00:18:31.206 "rw_ios_per_sec": 0, 00:18:31.206 "rw_mbytes_per_sec": 0, 00:18:31.206 "r_mbytes_per_sec": 0, 00:18:31.206 "w_mbytes_per_sec": 0 00:18:31.206 }, 00:18:31.206 "claimed": true, 00:18:31.206 "claim_type": "exclusive_write", 00:18:31.206 "zoned": false, 00:18:31.206 "supported_io_types": { 00:18:31.206 "read": true, 00:18:31.206 "write": true, 00:18:31.206 "unmap": true, 00:18:31.206 "flush": true, 00:18:31.206 "reset": true, 00:18:31.206 "nvme_admin": false, 00:18:31.206 "nvme_io": false, 00:18:31.206 "nvme_io_md": false, 00:18:31.206 "write_zeroes": true, 00:18:31.206 "zcopy": true, 00:18:31.206 "get_zone_info": false, 00:18:31.206 "zone_management": false, 00:18:31.206 "zone_append": false, 00:18:31.206 "compare": false, 00:18:31.206 "compare_and_write": false, 00:18:31.206 "abort": true, 00:18:31.206 "seek_hole": false, 00:18:31.206 "seek_data": false, 00:18:31.206 "copy": true, 00:18:31.206 "nvme_iov_md": false 00:18:31.206 }, 00:18:31.206 "memory_domains": [ 00:18:31.206 { 00:18:31.206 "dma_device_id": "system", 00:18:31.206 "dma_device_type": 1 00:18:31.206 }, 00:18:31.206 { 00:18:31.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.206 "dma_device_type": 2 00:18:31.206 } 00:18:31.206 ], 00:18:31.206 "driver_specific": {} 00:18:31.206 }' 00:18:31.206 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:31.206 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:31.206 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:31.206 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:31.206 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:31.206 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:31.206 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:31.206 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:31.465 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:31.465 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:31.465 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:31.465 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:18:31.465 18:45:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:31.723 [2024-07-25 18:45:32.178779] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:31.724 [2024-07-25 18:45:32.178812] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:31.724 [2024-07-25 18:45:32.178892] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.724 [2024-07-25 18:45:32.178969] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:31.724 [2024-07-25 18:45:32.178978] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:18:31.724 18:45:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 125703 00:18:31.724 18:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 125703 ']' 00:18:31.724 18:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 125703 00:18:31.724 18:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:18:31.724 18:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:31.724 18:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 125703 00:18:31.724 killing process with pid 125703 00:18:31.724 18:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:31.724 18:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:31.724 18:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 125703' 00:18:31.724 18:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 125703 00:18:31.724 [2024-07-25 18:45:32.222769] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:31.724 18:45:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 125703 00:18:31.982 [2024-07-25 18:45:32.476814] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:33.388 ************************************ 00:18:33.388 END TEST raid_state_function_test_sb 00:18:33.388 ************************************ 00:18:33.388 18:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:33.388 00:18:33.388 real 0m28.521s 00:18:33.388 user 0m51.043s 00:18:33.388 sys 0m4.845s 00:18:33.388 18:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:33.388 18:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.388 18:45:33 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:18:33.388 18:45:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:33.388 18:45:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:33.388 18:45:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:33.388 ************************************ 00:18:33.388 START TEST raid_superblock_test 00:18:33.388 ************************************ 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=126674 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 126674 /var/tmp/spdk-raid.sock 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 126674 ']' 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:33.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:33.388 18:45:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.388 [2024-07-25 18:45:33.834044] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:18:33.388 [2024-07-25 18:45:33.834279] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126674 ] 00:18:33.658 [2024-07-25 18:45:34.022845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.916 [2024-07-25 18:45:34.253405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.916 [2024-07-25 18:45:34.439302] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.174 18:45:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:34.174 18:45:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:18:34.174 18:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:18:34.174 18:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:34.174 18:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:18:34.174 18:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:18:34.175 18:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:34.175 18:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:34.175 18:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:18:34.175 18:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:34.175 18:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:34.433 malloc1 00:18:34.433 18:45:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:34.691 [2024-07-25 18:45:35.159029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:34.691 [2024-07-25 18:45:35.159145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.691 [2024-07-25 18:45:35.159185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:18:34.691 [2024-07-25 18:45:35.159207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.691 [2024-07-25 18:45:35.161813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.691 [2024-07-25 18:45:35.161862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:34.691 pt1 00:18:34.691 18:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:18:34.691 18:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:34.691 18:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:18:34.691 18:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:18:34.691 18:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:34.691 18:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:18:34.691 18:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:18:34.691 18:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:34.691 18:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:34.948 malloc2 00:18:34.949 18:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:35.207 [2024-07-25 18:45:35.618482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:35.207 [2024-07-25 18:45:35.618603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.207 [2024-07-25 18:45:35.618641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:18:35.207 [2024-07-25 18:45:35.618663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.207 [2024-07-25 18:45:35.621248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.207 [2024-07-25 18:45:35.621298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:35.207 pt2 00:18:35.207 18:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:18:35.207 18:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:35.207 18:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:18:35.207 18:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:18:35.207 18:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:35.207 18:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:35.207 18:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:18:35.207 18:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:35.207 18:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:35.465 malloc3 00:18:35.465 18:45:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:35.465 [2024-07-25 18:45:36.017084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:35.465 [2024-07-25 18:45:36.017193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.465 [2024-07-25 18:45:36.017230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:35.465 [2024-07-25 18:45:36.017257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.465 [2024-07-25 18:45:36.019829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.465 [2024-07-25 18:45:36.019882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:35.465 pt3 00:18:35.465 
18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:18:35.465 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:35.465 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:18:35.723 [2024-07-25 18:45:36.193163] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:35.723 [2024-07-25 18:45:36.195335] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:35.723 [2024-07-25 18:45:36.195405] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:35.723 [2024-07-25 18:45:36.195556] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:18:35.723 [2024-07-25 18:45:36.195565] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:35.723 [2024-07-25 18:45:36.195692] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:35.723 [2024-07-25 18:45:36.196034] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:18:35.723 [2024-07-25 18:45:36.196045] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:18:35.723 [2024-07-25 18:45:36.196204] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.723 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:35.723 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:35.723 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:35.723 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:35.723 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:35.723 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:35.723 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:35.723 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:35.723 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:35.723 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:35.723 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.723 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.980 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:35.980 "name": "raid_bdev1", 00:18:35.980 "uuid": "2de6fcfa-bfc4-43ad-805a-1cec6b002d6e", 00:18:35.980 "strip_size_kb": 64, 00:18:35.980 "state": "online", 00:18:35.980 "raid_level": "raid0", 00:18:35.980 "superblock": true, 00:18:35.980 "num_base_bdevs": 3, 00:18:35.980 "num_base_bdevs_discovered": 3, 00:18:35.980 "num_base_bdevs_operational": 3, 00:18:35.980 "base_bdevs_list": [ 00:18:35.980 { 00:18:35.980 "name": "pt1", 00:18:35.980 "uuid": "00000000-0000-0000-0000-000000000001", 
00:18:35.980 "is_configured": true, 00:18:35.980 "data_offset": 2048, 00:18:35.980 "data_size": 63488 00:18:35.980 }, 00:18:35.980 { 00:18:35.980 "name": "pt2", 00:18:35.980 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:35.980 "is_configured": true, 00:18:35.980 "data_offset": 2048, 00:18:35.980 "data_size": 63488 00:18:35.980 }, 00:18:35.980 { 00:18:35.980 "name": "pt3", 00:18:35.980 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:35.980 "is_configured": true, 00:18:35.980 "data_offset": 2048, 00:18:35.980 "data_size": 63488 00:18:35.980 } 00:18:35.980 ] 00:18:35.980 }' 00:18:35.980 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:35.980 18:45:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.546 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:18:36.546 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:36.546 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:36.546 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:36.546 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:36.546 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:36.546 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:36.546 18:45:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:36.804 [2024-07-25 18:45:37.205521] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:36.804 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:36.804 "name": "raid_bdev1", 00:18:36.804 "aliases": [ 00:18:36.804 "2de6fcfa-bfc4-43ad-805a-1cec6b002d6e" 00:18:36.804 ], 00:18:36.804 "product_name": "Raid Volume", 00:18:36.804 "block_size": 512, 00:18:36.804 "num_blocks": 190464, 00:18:36.804 "uuid": "2de6fcfa-bfc4-43ad-805a-1cec6b002d6e", 00:18:36.804 "assigned_rate_limits": { 00:18:36.804 "rw_ios_per_sec": 0, 00:18:36.804 "rw_mbytes_per_sec": 0, 00:18:36.804 "r_mbytes_per_sec": 0, 00:18:36.804 "w_mbytes_per_sec": 0 00:18:36.805 }, 00:18:36.805 "claimed": false, 00:18:36.805 "zoned": false, 00:18:36.805 "supported_io_types": { 00:18:36.805 "read": true, 00:18:36.805 "write": true, 00:18:36.805 "unmap": true, 00:18:36.805 "flush": true, 00:18:36.805 "reset": true, 00:18:36.805 "nvme_admin": false, 00:18:36.805 "nvme_io": false, 00:18:36.805 "nvme_io_md": false, 00:18:36.805 "write_zeroes": true, 00:18:36.805 "zcopy": false, 00:18:36.805 "get_zone_info": false, 00:18:36.805 "zone_management": false, 00:18:36.805 "zone_append": false, 00:18:36.805 "compare": false, 00:18:36.805 "compare_and_write": false, 00:18:36.805 "abort": false, 00:18:36.805 "seek_hole": false, 00:18:36.805 "seek_data": false, 00:18:36.805 "copy": false, 00:18:36.805 "nvme_iov_md": false 00:18:36.805 }, 00:18:36.805 "memory_domains": [ 00:18:36.805 { 00:18:36.805 "dma_device_id": "system", 00:18:36.805 "dma_device_type": 1 00:18:36.805 }, 00:18:36.805 { 00:18:36.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.805 "dma_device_type": 2 00:18:36.805 }, 00:18:36.805 { 00:18:36.805 "dma_device_id": "system", 00:18:36.805 "dma_device_type": 1 00:18:36.805 }, 
00:18:36.805 { 00:18:36.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.805 "dma_device_type": 2 00:18:36.805 }, 00:18:36.805 { 00:18:36.805 "dma_device_id": "system", 00:18:36.805 "dma_device_type": 1 00:18:36.805 }, 00:18:36.805 { 00:18:36.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.805 "dma_device_type": 2 00:18:36.805 } 00:18:36.805 ], 00:18:36.805 "driver_specific": { 00:18:36.805 "raid": { 00:18:36.805 "uuid": "2de6fcfa-bfc4-43ad-805a-1cec6b002d6e", 00:18:36.805 "strip_size_kb": 64, 00:18:36.805 "state": "online", 00:18:36.805 "raid_level": "raid0", 00:18:36.805 "superblock": true, 00:18:36.805 "num_base_bdevs": 3, 00:18:36.805 "num_base_bdevs_discovered": 3, 00:18:36.805 "num_base_bdevs_operational": 3, 00:18:36.805 "base_bdevs_list": [ 00:18:36.805 { 00:18:36.805 "name": "pt1", 00:18:36.805 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:36.805 "is_configured": true, 00:18:36.805 "data_offset": 2048, 00:18:36.805 "data_size": 63488 00:18:36.805 }, 00:18:36.805 { 00:18:36.805 "name": "pt2", 00:18:36.805 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:36.805 "is_configured": true, 00:18:36.805 "data_offset": 2048, 00:18:36.805 "data_size": 63488 00:18:36.805 }, 00:18:36.805 { 00:18:36.805 "name": "pt3", 00:18:36.805 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:36.805 "is_configured": true, 00:18:36.805 "data_offset": 2048, 00:18:36.805 "data_size": 63488 00:18:36.805 } 00:18:36.805 ] 00:18:36.805 } 00:18:36.805 } 00:18:36.805 }' 00:18:36.805 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:36.805 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:36.805 pt2 00:18:36.805 pt3' 00:18:36.805 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:36.805 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:36.805 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:37.064 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:37.064 "name": "pt1", 00:18:37.064 "aliases": [ 00:18:37.064 "00000000-0000-0000-0000-000000000001" 00:18:37.064 ], 00:18:37.064 "product_name": "passthru", 00:18:37.064 "block_size": 512, 00:18:37.064 "num_blocks": 65536, 00:18:37.064 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:37.064 "assigned_rate_limits": { 00:18:37.064 "rw_ios_per_sec": 0, 00:18:37.064 "rw_mbytes_per_sec": 0, 00:18:37.064 "r_mbytes_per_sec": 0, 00:18:37.064 "w_mbytes_per_sec": 0 00:18:37.064 }, 00:18:37.064 "claimed": true, 00:18:37.064 "claim_type": "exclusive_write", 00:18:37.064 "zoned": false, 00:18:37.064 "supported_io_types": { 00:18:37.064 "read": true, 00:18:37.064 "write": true, 00:18:37.064 "unmap": true, 00:18:37.064 "flush": true, 00:18:37.064 "reset": true, 00:18:37.064 "nvme_admin": false, 00:18:37.064 "nvme_io": false, 00:18:37.064 "nvme_io_md": false, 00:18:37.064 "write_zeroes": true, 00:18:37.064 "zcopy": true, 00:18:37.064 "get_zone_info": false, 00:18:37.064 "zone_management": false, 00:18:37.064 "zone_append": false, 00:18:37.064 "compare": false, 00:18:37.064 "compare_and_write": false, 00:18:37.064 "abort": true, 00:18:37.064 "seek_hole": false, 00:18:37.064 "seek_data": false, 00:18:37.064 "copy": true, 00:18:37.064 "nvme_iov_md": false 
00:18:37.064 }, 00:18:37.064 "memory_domains": [ 00:18:37.064 { 00:18:37.064 "dma_device_id": "system", 00:18:37.064 "dma_device_type": 1 00:18:37.064 }, 00:18:37.064 { 00:18:37.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.064 "dma_device_type": 2 00:18:37.064 } 00:18:37.064 ], 00:18:37.064 "driver_specific": { 00:18:37.064 "passthru": { 00:18:37.064 "name": "pt1", 00:18:37.064 "base_bdev_name": "malloc1" 00:18:37.064 } 00:18:37.064 } 00:18:37.064 }' 00:18:37.064 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:37.064 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:37.064 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:37.064 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:37.064 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:37.323 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:37.323 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:37.323 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:37.323 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:37.323 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:37.323 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:37.323 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:37.323 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:37.323 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:37.323 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:37.582 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:37.582 "name": "pt2", 00:18:37.582 "aliases": [ 00:18:37.582 "00000000-0000-0000-0000-000000000002" 00:18:37.582 ], 00:18:37.582 "product_name": "passthru", 00:18:37.582 "block_size": 512, 00:18:37.582 "num_blocks": 65536, 00:18:37.582 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:37.582 "assigned_rate_limits": { 00:18:37.582 "rw_ios_per_sec": 0, 00:18:37.582 "rw_mbytes_per_sec": 0, 00:18:37.582 "r_mbytes_per_sec": 0, 00:18:37.582 "w_mbytes_per_sec": 0 00:18:37.582 }, 00:18:37.582 "claimed": true, 00:18:37.582 "claim_type": "exclusive_write", 00:18:37.582 "zoned": false, 00:18:37.582 "supported_io_types": { 00:18:37.582 "read": true, 00:18:37.582 "write": true, 00:18:37.582 "unmap": true, 00:18:37.582 "flush": true, 00:18:37.582 "reset": true, 00:18:37.582 "nvme_admin": false, 00:18:37.582 "nvme_io": false, 00:18:37.582 "nvme_io_md": false, 00:18:37.582 "write_zeroes": true, 00:18:37.582 "zcopy": true, 00:18:37.582 "get_zone_info": false, 00:18:37.582 "zone_management": false, 00:18:37.582 "zone_append": false, 00:18:37.582 "compare": false, 00:18:37.582 "compare_and_write": false, 00:18:37.582 "abort": true, 00:18:37.582 "seek_hole": false, 00:18:37.582 "seek_data": false, 00:18:37.582 "copy": true, 00:18:37.582 "nvme_iov_md": false 00:18:37.582 }, 00:18:37.582 "memory_domains": [ 00:18:37.582 { 00:18:37.582 "dma_device_id": "system", 00:18:37.582 "dma_device_type": 1 00:18:37.582 }, 
00:18:37.582 { 00:18:37.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.582 "dma_device_type": 2 00:18:37.582 } 00:18:37.582 ], 00:18:37.582 "driver_specific": { 00:18:37.582 "passthru": { 00:18:37.582 "name": "pt2", 00:18:37.582 "base_bdev_name": "malloc2" 00:18:37.582 } 00:18:37.582 } 00:18:37.582 }' 00:18:37.582 18:45:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:37.582 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:37.582 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:37.582 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:37.582 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:37.582 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:37.582 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:37.841 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:37.841 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:37.841 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:37.841 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:37.841 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:37.841 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:37.841 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:37.841 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:38.099 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:38.099 "name": "pt3", 00:18:38.099 "aliases": [ 00:18:38.099 "00000000-0000-0000-0000-000000000003" 00:18:38.099 ], 00:18:38.099 "product_name": "passthru", 00:18:38.099 "block_size": 512, 00:18:38.099 "num_blocks": 65536, 00:18:38.099 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:38.099 "assigned_rate_limits": { 00:18:38.099 "rw_ios_per_sec": 0, 00:18:38.099 "rw_mbytes_per_sec": 0, 00:18:38.100 "r_mbytes_per_sec": 0, 00:18:38.100 "w_mbytes_per_sec": 0 00:18:38.100 }, 00:18:38.100 "claimed": true, 00:18:38.100 "claim_type": "exclusive_write", 00:18:38.100 "zoned": false, 00:18:38.100 "supported_io_types": { 00:18:38.100 "read": true, 00:18:38.100 "write": true, 00:18:38.100 "unmap": true, 00:18:38.100 "flush": true, 00:18:38.100 "reset": true, 00:18:38.100 "nvme_admin": false, 00:18:38.100 "nvme_io": false, 00:18:38.100 "nvme_io_md": false, 00:18:38.100 "write_zeroes": true, 00:18:38.100 "zcopy": true, 00:18:38.100 "get_zone_info": false, 00:18:38.100 "zone_management": false, 00:18:38.100 "zone_append": false, 00:18:38.100 "compare": false, 00:18:38.100 "compare_and_write": false, 00:18:38.100 "abort": true, 00:18:38.100 "seek_hole": false, 00:18:38.100 "seek_data": false, 00:18:38.100 "copy": true, 00:18:38.100 "nvme_iov_md": false 00:18:38.100 }, 00:18:38.100 "memory_domains": [ 00:18:38.100 { 00:18:38.100 "dma_device_id": "system", 00:18:38.100 "dma_device_type": 1 00:18:38.100 }, 00:18:38.100 { 00:18:38.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.100 "dma_device_type": 2 00:18:38.100 } 00:18:38.100 ], 00:18:38.100 
"driver_specific": { 00:18:38.100 "passthru": { 00:18:38.100 "name": "pt3", 00:18:38.100 "base_bdev_name": "malloc3" 00:18:38.100 } 00:18:38.100 } 00:18:38.100 }' 00:18:38.100 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:38.359 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:38.359 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:38.359 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:38.359 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:38.359 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:38.359 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:38.359 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:38.359 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:38.359 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:38.617 18:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:38.618 18:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:38.618 18:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:18:38.618 18:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:38.618 [2024-07-25 18:45:39.189852] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:38.877 18:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=2de6fcfa-bfc4-43ad-805a-1cec6b002d6e 00:18:38.877 18:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 2de6fcfa-bfc4-43ad-805a-1cec6b002d6e ']' 00:18:38.877 18:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:38.877 [2024-07-25 18:45:39.373593] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.877 [2024-07-25 18:45:39.373619] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.877 [2024-07-25 18:45:39.373735] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.877 [2024-07-25 18:45:39.373838] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.877 [2024-07-25 18:45:39.373850] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:18:38.877 18:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.877 18:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:18:39.135 18:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:18:39.135 18:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:18:39.135 18:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.135 18:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:39.393 18:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.393 18:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:39.652 18:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.652 18:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:39.911 18:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:39.911 18:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:39.911 18:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:18:39.911 18:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:39.911 18:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:18:39.911 18:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:39.911 18:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:40.170 18:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.170 18:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:40.170 18:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.170 18:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:40.170 18:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.170 18:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:40.170 18:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:40.170 18:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:40.170 [2024-07-25 18:45:40.737780] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:40.170 [2024-07-25 18:45:40.740044] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:40.170 [2024-07-25 18:45:40.740124] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:40.170 [2024-07-25 18:45:40.740177] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:40.170 [2024-07-25 
18:45:40.740264] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:40.170 [2024-07-25 18:45:40.740295] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:40.170 [2024-07-25 18:45:40.740325] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:40.170 [2024-07-25 18:45:40.740334] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state configuring 00:18:40.170 request: 00:18:40.170 { 00:18:40.170 "name": "raid_bdev1", 00:18:40.170 "raid_level": "raid0", 00:18:40.170 "base_bdevs": [ 00:18:40.170 "malloc1", 00:18:40.170 "malloc2", 00:18:40.170 "malloc3" 00:18:40.170 ], 00:18:40.170 "strip_size_kb": 64, 00:18:40.170 "superblock": false, 00:18:40.170 "method": "bdev_raid_create", 00:18:40.170 "req_id": 1 00:18:40.170 } 00:18:40.170 Got JSON-RPC error response 00:18:40.170 response: 00:18:40.170 { 00:18:40.170 "code": -17, 00:18:40.170 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:40.170 } 00:18:40.429 18:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:18:40.429 18:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:40.429 18:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:40.429 18:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:40.429 18:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.429 18:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:18:40.429 18:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:18:40.429 18:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:18:40.429 18:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:40.688 [2024-07-25 18:45:41.149744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:40.688 [2024-07-25 18:45:41.149860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.688 [2024-07-25 18:45:41.149898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:40.688 [2024-07-25 18:45:41.149919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.688 [2024-07-25 18:45:41.152544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.688 [2024-07-25 18:45:41.152592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:40.688 [2024-07-25 18:45:41.152726] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:40.688 [2024-07-25 18:45:41.152784] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:40.688 pt1 00:18:40.688 18:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:18:40.688 18:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:40.688 18:45:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:40.688 18:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:40.688 18:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:40.688 18:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:40.688 18:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:40.688 18:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:40.688 18:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:40.688 18:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:40.688 18:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.688 18:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.947 18:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:40.947 "name": "raid_bdev1", 00:18:40.947 "uuid": "2de6fcfa-bfc4-43ad-805a-1cec6b002d6e", 00:18:40.947 "strip_size_kb": 64, 00:18:40.947 "state": "configuring", 00:18:40.947 "raid_level": "raid0", 00:18:40.947 "superblock": true, 00:18:40.947 "num_base_bdevs": 3, 00:18:40.947 "num_base_bdevs_discovered": 1, 00:18:40.947 "num_base_bdevs_operational": 3, 00:18:40.947 "base_bdevs_list": [ 00:18:40.947 { 00:18:40.947 "name": "pt1", 00:18:40.947 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:40.947 "is_configured": true, 00:18:40.947 "data_offset": 2048, 00:18:40.947 "data_size": 63488 00:18:40.947 }, 00:18:40.947 { 00:18:40.947 "name": null, 00:18:40.947 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.947 "is_configured": false, 00:18:40.947 "data_offset": 2048, 00:18:40.947 "data_size": 63488 00:18:40.947 }, 00:18:40.947 { 00:18:40.947 "name": null, 00:18:40.947 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:40.947 "is_configured": false, 00:18:40.947 "data_offset": 2048, 00:18:40.947 "data_size": 63488 00:18:40.947 } 00:18:40.947 ] 00:18:40.947 }' 00:18:40.947 18:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:40.947 18:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.514 18:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:18:41.514 18:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:41.773 [2024-07-25 18:45:42.129956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:41.773 [2024-07-25 18:45:42.130058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.773 [2024-07-25 18:45:42.130097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:41.773 [2024-07-25 18:45:42.130120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.773 [2024-07-25 18:45:42.130657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.773 [2024-07-25 18:45:42.130698] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:18:41.773 [2024-07-25 18:45:42.130810] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:41.773 [2024-07-25 18:45:42.130839] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:41.773 pt2 00:18:41.773 18:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:41.773 [2024-07-25 18:45:42.309978] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:41.773 18:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:18:41.773 18:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:41.773 18:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:41.773 18:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:41.773 18:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:41.773 18:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:41.773 18:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:41.774 18:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:41.774 18:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:41.774 18:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:41.774 18:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.774 18:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.033 18:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:42.033 "name": "raid_bdev1", 00:18:42.033 "uuid": "2de6fcfa-bfc4-43ad-805a-1cec6b002d6e", 00:18:42.033 "strip_size_kb": 64, 00:18:42.033 "state": "configuring", 00:18:42.033 "raid_level": "raid0", 00:18:42.033 "superblock": true, 00:18:42.033 "num_base_bdevs": 3, 00:18:42.033 "num_base_bdevs_discovered": 1, 00:18:42.033 "num_base_bdevs_operational": 3, 00:18:42.033 "base_bdevs_list": [ 00:18:42.033 { 00:18:42.033 "name": "pt1", 00:18:42.033 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:42.033 "is_configured": true, 00:18:42.033 "data_offset": 2048, 00:18:42.033 "data_size": 63488 00:18:42.033 }, 00:18:42.033 { 00:18:42.033 "name": null, 00:18:42.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:42.033 "is_configured": false, 00:18:42.033 "data_offset": 2048, 00:18:42.033 "data_size": 63488 00:18:42.033 }, 00:18:42.033 { 00:18:42.033 "name": null, 00:18:42.033 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:42.033 "is_configured": false, 00:18:42.033 "data_offset": 2048, 00:18:42.033 "data_size": 63488 00:18:42.033 } 00:18:42.033 ] 00:18:42.033 }' 00:18:42.033 18:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:42.033 18:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.599 18:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:18:42.599 18:45:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:18:42.599 18:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:42.858 [2024-07-25 18:45:43.326144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:42.858 [2024-07-25 18:45:43.326247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.858 [2024-07-25 18:45:43.326284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:42.858 [2024-07-25 18:45:43.326312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.858 [2024-07-25 18:45:43.326853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.858 [2024-07-25 18:45:43.326898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:42.858 [2024-07-25 18:45:43.327026] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:42.858 [2024-07-25 18:45:43.327050] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:42.858 pt2 00:18:42.858 18:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:18:42.858 18:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:18:42.858 18:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:43.117 [2024-07-25 18:45:43.510144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:43.117 [2024-07-25 18:45:43.510202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.117 [2024-07-25 18:45:43.510228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:43.117 [2024-07-25 18:45:43.510253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.117 [2024-07-25 18:45:43.510690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.117 [2024-07-25 18:45:43.510725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:43.117 [2024-07-25 18:45:43.510829] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:43.117 [2024-07-25 18:45:43.510846] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:43.117 [2024-07-25 18:45:43.510941] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:18:43.117 [2024-07-25 18:45:43.510949] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:43.117 [2024-07-25 18:45:43.511026] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:43.117 [2024-07-25 18:45:43.511326] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:18:43.117 [2024-07-25 18:45:43.511345] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:18:43.117 [2024-07-25 18:45:43.511474] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.117 pt3 00:18:43.117 18:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( 
i++ )) 00:18:43.117 18:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:18:43.117 18:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:43.117 18:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:43.117 18:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:43.117 18:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:43.117 18:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:43.117 18:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:43.117 18:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:43.117 18:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:43.117 18:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:43.117 18:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:43.117 18:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.117 18:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.375 18:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:43.375 "name": "raid_bdev1", 00:18:43.375 "uuid": "2de6fcfa-bfc4-43ad-805a-1cec6b002d6e", 00:18:43.375 "strip_size_kb": 64, 00:18:43.375 "state": "online", 00:18:43.375 "raid_level": "raid0", 00:18:43.375 "superblock": true, 00:18:43.375 "num_base_bdevs": 3, 00:18:43.375 "num_base_bdevs_discovered": 3, 00:18:43.375 "num_base_bdevs_operational": 3, 00:18:43.375 "base_bdevs_list": [ 00:18:43.375 { 00:18:43.375 "name": "pt1", 00:18:43.375 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:43.375 "is_configured": true, 00:18:43.375 "data_offset": 2048, 00:18:43.375 "data_size": 63488 00:18:43.375 }, 00:18:43.375 { 00:18:43.375 "name": "pt2", 00:18:43.375 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:43.375 "is_configured": true, 00:18:43.375 "data_offset": 2048, 00:18:43.375 "data_size": 63488 00:18:43.375 }, 00:18:43.375 { 00:18:43.375 "name": "pt3", 00:18:43.375 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:43.375 "is_configured": true, 00:18:43.375 "data_offset": 2048, 00:18:43.375 "data_size": 63488 00:18:43.375 } 00:18:43.375 ] 00:18:43.375 }' 00:18:43.375 18:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:43.375 18:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.942 18:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:18:43.942 18:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:43.942 18:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:43.942 18:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:43.942 18:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:43.942 18:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 
00:18:43.942 18:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:43.942 18:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:44.200 [2024-07-25 18:45:44.534766] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.200 18:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:44.200 "name": "raid_bdev1", 00:18:44.200 "aliases": [ 00:18:44.200 "2de6fcfa-bfc4-43ad-805a-1cec6b002d6e" 00:18:44.200 ], 00:18:44.200 "product_name": "Raid Volume", 00:18:44.200 "block_size": 512, 00:18:44.200 "num_blocks": 190464, 00:18:44.200 "uuid": "2de6fcfa-bfc4-43ad-805a-1cec6b002d6e", 00:18:44.200 "assigned_rate_limits": { 00:18:44.200 "rw_ios_per_sec": 0, 00:18:44.200 "rw_mbytes_per_sec": 0, 00:18:44.200 "r_mbytes_per_sec": 0, 00:18:44.200 "w_mbytes_per_sec": 0 00:18:44.200 }, 00:18:44.200 "claimed": false, 00:18:44.200 "zoned": false, 00:18:44.200 "supported_io_types": { 00:18:44.200 "read": true, 00:18:44.200 "write": true, 00:18:44.200 "unmap": true, 00:18:44.200 "flush": true, 00:18:44.200 "reset": true, 00:18:44.200 "nvme_admin": false, 00:18:44.200 "nvme_io": false, 00:18:44.200 "nvme_io_md": false, 00:18:44.200 "write_zeroes": true, 00:18:44.200 "zcopy": false, 00:18:44.200 "get_zone_info": false, 00:18:44.200 "zone_management": false, 00:18:44.200 "zone_append": false, 00:18:44.200 "compare": false, 00:18:44.200 "compare_and_write": false, 00:18:44.200 "abort": false, 00:18:44.200 "seek_hole": false, 00:18:44.200 "seek_data": false, 00:18:44.200 "copy": false, 00:18:44.200 "nvme_iov_md": false 00:18:44.200 }, 00:18:44.200 "memory_domains": [ 00:18:44.200 { 00:18:44.201 "dma_device_id": "system", 00:18:44.201 "dma_device_type": 1 00:18:44.201 }, 00:18:44.201 { 00:18:44.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.201 "dma_device_type": 2 00:18:44.201 }, 00:18:44.201 { 00:18:44.201 "dma_device_id": "system", 00:18:44.201 "dma_device_type": 1 00:18:44.201 }, 00:18:44.201 { 00:18:44.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.201 "dma_device_type": 2 00:18:44.201 }, 00:18:44.201 { 00:18:44.201 "dma_device_id": "system", 00:18:44.201 "dma_device_type": 1 00:18:44.201 }, 00:18:44.201 { 00:18:44.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.201 "dma_device_type": 2 00:18:44.201 } 00:18:44.201 ], 00:18:44.201 "driver_specific": { 00:18:44.201 "raid": { 00:18:44.201 "uuid": "2de6fcfa-bfc4-43ad-805a-1cec6b002d6e", 00:18:44.201 "strip_size_kb": 64, 00:18:44.201 "state": "online", 00:18:44.201 "raid_level": "raid0", 00:18:44.201 "superblock": true, 00:18:44.201 "num_base_bdevs": 3, 00:18:44.201 "num_base_bdevs_discovered": 3, 00:18:44.201 "num_base_bdevs_operational": 3, 00:18:44.201 "base_bdevs_list": [ 00:18:44.201 { 00:18:44.201 "name": "pt1", 00:18:44.201 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:44.201 "is_configured": true, 00:18:44.201 "data_offset": 2048, 00:18:44.201 "data_size": 63488 00:18:44.201 }, 00:18:44.201 { 00:18:44.201 "name": "pt2", 00:18:44.201 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:44.201 "is_configured": true, 00:18:44.201 "data_offset": 2048, 00:18:44.201 "data_size": 63488 00:18:44.201 }, 00:18:44.201 { 00:18:44.201 "name": "pt3", 00:18:44.201 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:44.201 "is_configured": true, 00:18:44.201 "data_offset": 2048, 00:18:44.201 "data_size": 63488 00:18:44.201 } 
00:18:44.201 ] 00:18:44.201 } 00:18:44.201 } 00:18:44.201 }' 00:18:44.201 18:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:44.201 18:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:44.201 pt2 00:18:44.201 pt3' 00:18:44.201 18:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:44.201 18:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:44.201 18:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:44.459 18:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:44.459 "name": "pt1", 00:18:44.459 "aliases": [ 00:18:44.459 "00000000-0000-0000-0000-000000000001" 00:18:44.459 ], 00:18:44.459 "product_name": "passthru", 00:18:44.459 "block_size": 512, 00:18:44.459 "num_blocks": 65536, 00:18:44.459 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:44.459 "assigned_rate_limits": { 00:18:44.459 "rw_ios_per_sec": 0, 00:18:44.459 "rw_mbytes_per_sec": 0, 00:18:44.459 "r_mbytes_per_sec": 0, 00:18:44.459 "w_mbytes_per_sec": 0 00:18:44.459 }, 00:18:44.459 "claimed": true, 00:18:44.459 "claim_type": "exclusive_write", 00:18:44.459 "zoned": false, 00:18:44.459 "supported_io_types": { 00:18:44.459 "read": true, 00:18:44.459 "write": true, 00:18:44.459 "unmap": true, 00:18:44.459 "flush": true, 00:18:44.459 "reset": true, 00:18:44.459 "nvme_admin": false, 00:18:44.459 "nvme_io": false, 00:18:44.459 "nvme_io_md": false, 00:18:44.459 "write_zeroes": true, 00:18:44.459 "zcopy": true, 00:18:44.459 "get_zone_info": false, 00:18:44.459 "zone_management": false, 00:18:44.459 "zone_append": false, 00:18:44.459 "compare": false, 00:18:44.459 "compare_and_write": false, 00:18:44.459 "abort": true, 00:18:44.459 "seek_hole": false, 00:18:44.459 "seek_data": false, 00:18:44.459 "copy": true, 00:18:44.459 "nvme_iov_md": false 00:18:44.459 }, 00:18:44.459 "memory_domains": [ 00:18:44.459 { 00:18:44.459 "dma_device_id": "system", 00:18:44.459 "dma_device_type": 1 00:18:44.459 }, 00:18:44.459 { 00:18:44.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.459 "dma_device_type": 2 00:18:44.459 } 00:18:44.459 ], 00:18:44.459 "driver_specific": { 00:18:44.459 "passthru": { 00:18:44.459 "name": "pt1", 00:18:44.459 "base_bdev_name": "malloc1" 00:18:44.459 } 00:18:44.459 } 00:18:44.460 }' 00:18:44.460 18:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:44.460 18:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:44.460 18:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:44.460 18:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:44.460 18:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:44.460 18:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:44.460 18:45:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:44.460 18:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:44.718 18:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:44.718 18:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:18:44.718 18:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:44.718 18:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:44.718 18:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:44.718 18:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:44.718 18:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:44.976 18:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:44.976 "name": "pt2", 00:18:44.976 "aliases": [ 00:18:44.977 "00000000-0000-0000-0000-000000000002" 00:18:44.977 ], 00:18:44.977 "product_name": "passthru", 00:18:44.977 "block_size": 512, 00:18:44.977 "num_blocks": 65536, 00:18:44.977 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:44.977 "assigned_rate_limits": { 00:18:44.977 "rw_ios_per_sec": 0, 00:18:44.977 "rw_mbytes_per_sec": 0, 00:18:44.977 "r_mbytes_per_sec": 0, 00:18:44.977 "w_mbytes_per_sec": 0 00:18:44.977 }, 00:18:44.977 "claimed": true, 00:18:44.977 "claim_type": "exclusive_write", 00:18:44.977 "zoned": false, 00:18:44.977 "supported_io_types": { 00:18:44.977 "read": true, 00:18:44.977 "write": true, 00:18:44.977 "unmap": true, 00:18:44.977 "flush": true, 00:18:44.977 "reset": true, 00:18:44.977 "nvme_admin": false, 00:18:44.977 "nvme_io": false, 00:18:44.977 "nvme_io_md": false, 00:18:44.977 "write_zeroes": true, 00:18:44.977 "zcopy": true, 00:18:44.977 "get_zone_info": false, 00:18:44.977 "zone_management": false, 00:18:44.977 "zone_append": false, 00:18:44.977 "compare": false, 00:18:44.977 "compare_and_write": false, 00:18:44.977 "abort": true, 00:18:44.977 "seek_hole": false, 00:18:44.977 "seek_data": false, 00:18:44.977 "copy": true, 00:18:44.977 "nvme_iov_md": false 00:18:44.977 }, 00:18:44.977 "memory_domains": [ 00:18:44.977 { 00:18:44.977 "dma_device_id": "system", 00:18:44.977 "dma_device_type": 1 00:18:44.977 }, 00:18:44.977 { 00:18:44.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.977 "dma_device_type": 2 00:18:44.977 } 00:18:44.977 ], 00:18:44.977 "driver_specific": { 00:18:44.977 "passthru": { 00:18:44.977 "name": "pt2", 00:18:44.977 "base_bdev_name": "malloc2" 00:18:44.977 } 00:18:44.977 } 00:18:44.977 }' 00:18:44.977 18:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:44.977 18:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:44.977 18:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:44.977 18:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:44.977 18:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:45.235 18:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:45.235 18:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:45.235 18:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:45.235 18:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:45.235 18:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:45.235 18:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:45.235 18:45:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:45.235 18:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:45.235 18:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:45.235 18:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:45.494 18:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:45.494 "name": "pt3", 00:18:45.494 "aliases": [ 00:18:45.494 "00000000-0000-0000-0000-000000000003" 00:18:45.494 ], 00:18:45.494 "product_name": "passthru", 00:18:45.494 "block_size": 512, 00:18:45.494 "num_blocks": 65536, 00:18:45.494 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:45.494 "assigned_rate_limits": { 00:18:45.494 "rw_ios_per_sec": 0, 00:18:45.494 "rw_mbytes_per_sec": 0, 00:18:45.494 "r_mbytes_per_sec": 0, 00:18:45.494 "w_mbytes_per_sec": 0 00:18:45.494 }, 00:18:45.494 "claimed": true, 00:18:45.494 "claim_type": "exclusive_write", 00:18:45.494 "zoned": false, 00:18:45.494 "supported_io_types": { 00:18:45.494 "read": true, 00:18:45.494 "write": true, 00:18:45.494 "unmap": true, 00:18:45.494 "flush": true, 00:18:45.494 "reset": true, 00:18:45.494 "nvme_admin": false, 00:18:45.494 "nvme_io": false, 00:18:45.494 "nvme_io_md": false, 00:18:45.494 "write_zeroes": true, 00:18:45.494 "zcopy": true, 00:18:45.494 "get_zone_info": false, 00:18:45.494 "zone_management": false, 00:18:45.494 "zone_append": false, 00:18:45.494 "compare": false, 00:18:45.494 "compare_and_write": false, 00:18:45.494 "abort": true, 00:18:45.494 "seek_hole": false, 00:18:45.494 "seek_data": false, 00:18:45.494 "copy": true, 00:18:45.494 "nvme_iov_md": false 00:18:45.494 }, 00:18:45.494 "memory_domains": [ 00:18:45.494 { 00:18:45.494 "dma_device_id": "system", 00:18:45.494 "dma_device_type": 1 00:18:45.494 }, 00:18:45.494 { 00:18:45.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.494 "dma_device_type": 2 00:18:45.494 } 00:18:45.494 ], 00:18:45.494 "driver_specific": { 00:18:45.494 "passthru": { 00:18:45.494 "name": "pt3", 00:18:45.494 "base_bdev_name": "malloc3" 00:18:45.494 } 00:18:45.494 } 00:18:45.494 }' 00:18:45.494 18:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:45.494 18:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:45.753 18:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:45.753 18:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:45.753 18:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:45.753 18:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:45.753 18:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:45.753 18:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:45.753 18:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:45.753 18:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:45.753 18:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:46.012 18:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:46.012 18:45:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:46.012 18:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:18:46.270 [2024-07-25 18:45:46.621145] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:46.270 18:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 2de6fcfa-bfc4-43ad-805a-1cec6b002d6e '!=' 2de6fcfa-bfc4-43ad-805a-1cec6b002d6e ']' 00:18:46.270 18:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:18:46.270 18:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:46.270 18:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:46.270 18:45:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 126674 00:18:46.270 18:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 126674 ']' 00:18:46.270 18:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 126674 00:18:46.270 18:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:18:46.270 18:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:46.270 18:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 126674 00:18:46.270 18:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:46.270 18:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:46.270 18:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 126674' 00:18:46.270 killing process with pid 126674 00:18:46.270 18:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 126674 00:18:46.270 [2024-07-25 18:45:46.676586] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:46.270 [2024-07-25 18:45:46.676663] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:46.270 [2024-07-25 18:45:46.676739] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:46.270 [2024-07-25 18:45:46.676752] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:18:46.270 18:45:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 126674 00:18:46.526 [2024-07-25 18:45:46.931294] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:47.900 ************************************ 00:18:47.900 END TEST raid_superblock_test 00:18:47.900 ************************************ 00:18:47.900 18:45:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:18:47.900 00:18:47.900 real 0m14.368s 00:18:47.900 user 0m24.692s 00:18:47.900 sys 0m2.485s 00:18:47.900 18:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:47.900 18:45:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.900 18:45:48 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:18:47.900 18:45:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:47.900 18:45:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:47.900 18:45:48 
bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:47.900 ************************************ 00:18:47.900 START TEST raid_read_error_test 00:18:47.900 ************************************ 00:18:47.900 18:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:18:47.900 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:18:47.900 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:18:47.900 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:18:47.900 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:18:47.900 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:47.900 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev1 00:18:47.900 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:47.900 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:47.900 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev2 00:18:47.900 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:47.900 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:47.900 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev3 00:18:47.900 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:47.900 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:47.900 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:47.900 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:18:47.900 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:18:47.900 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:18:47.900 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:18:47.900 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:18:47.901 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:18:47.901 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:18:47.901 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:18:47.901 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:18:47.901 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:18:47.901 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.siR7MkafP8 00:18:47.901 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=127148 00:18:47.901 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 127148 /var/tmp/spdk-raid.sock 00:18:47.901 18:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 127148 ']' 00:18:47.901 18:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:47.901 18:45:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:18:47.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:47.901 18:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:47.901 18:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:47.901 18:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:47.901 18:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.901 [2024-07-25 18:45:48.270878] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:47.901 [2024-07-25 18:45:48.271049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127148 ] 00:18:47.901 [2024-07-25 18:45:48.435679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.159 [2024-07-25 18:45:48.690835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.418 [2024-07-25 18:45:48.949392] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:48.679 18:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:48.679 18:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:18:48.679 18:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:48.679 18:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:48.986 BaseBdev1_malloc 00:18:48.986 18:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:49.261 true 00:18:49.261 18:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:49.520 [2024-07-25 18:45:49.893941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:49.520 [2024-07-25 18:45:49.894076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.520 [2024-07-25 18:45:49.894117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:18:49.520 [2024-07-25 18:45:49.894144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.520 [2024-07-25 18:45:49.896887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.520 [2024-07-25 18:45:49.896940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:49.520 BaseBdev1 00:18:49.520 18:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:49.520 18:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:49.778 BaseBdev2_malloc 00:18:49.778 18:45:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:50.037 true 00:18:50.037 18:45:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:50.037 [2024-07-25 18:45:50.584432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:50.037 [2024-07-25 18:45:50.584557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.037 [2024-07-25 18:45:50.584599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:50.037 [2024-07-25 18:45:50.584629] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.037 [2024-07-25 18:45:50.587831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.037 [2024-07-25 18:45:50.587901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:50.037 BaseBdev2 00:18:50.037 18:45:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:50.037 18:45:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:50.295 BaseBdev3_malloc 00:18:50.295 18:45:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:50.554 true 00:18:50.554 18:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:50.813 [2024-07-25 18:45:51.283800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:50.813 [2024-07-25 18:45:51.283920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.813 [2024-07-25 18:45:51.283962] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:18:50.813 [2024-07-25 18:45:51.283991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.813 [2024-07-25 18:45:51.286768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.813 [2024-07-25 18:45:51.286842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:50.813 BaseBdev3 00:18:50.813 18:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:18:51.071 [2024-07-25 18:45:51.463887] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:51.071 [2024-07-25 18:45:51.466231] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:51.071 [2024-07-25 18:45:51.466330] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:51.071 [2024-07-25 18:45:51.466526] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:18:51.071 [2024-07-25 
18:45:51.466535] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:51.071 [2024-07-25 18:45:51.466691] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:51.071 [2024-07-25 18:45:51.467076] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:18:51.071 [2024-07-25 18:45:51.467095] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013480 00:18:51.071 [2024-07-25 18:45:51.467276] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.071 18:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:51.071 18:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:51.071 18:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:51.071 18:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:51.071 18:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:51.071 18:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:51.071 18:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:51.071 18:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:51.071 18:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:51.071 18:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:51.071 18:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.071 18:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.330 18:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:51.330 "name": "raid_bdev1", 00:18:51.330 "uuid": "20c0012d-b7fd-4cdd-9cb5-5214584a8a3d", 00:18:51.330 "strip_size_kb": 64, 00:18:51.330 "state": "online", 00:18:51.330 "raid_level": "raid0", 00:18:51.330 "superblock": true, 00:18:51.330 "num_base_bdevs": 3, 00:18:51.330 "num_base_bdevs_discovered": 3, 00:18:51.330 "num_base_bdevs_operational": 3, 00:18:51.330 "base_bdevs_list": [ 00:18:51.330 { 00:18:51.330 "name": "BaseBdev1", 00:18:51.330 "uuid": "313fd20f-a1b1-5fe9-94f0-5b0011e66724", 00:18:51.330 "is_configured": true, 00:18:51.330 "data_offset": 2048, 00:18:51.330 "data_size": 63488 00:18:51.330 }, 00:18:51.330 { 00:18:51.330 "name": "BaseBdev2", 00:18:51.330 "uuid": "11ab763d-2f3a-54c7-9c61-7b06c78dbb41", 00:18:51.330 "is_configured": true, 00:18:51.330 "data_offset": 2048, 00:18:51.330 "data_size": 63488 00:18:51.330 }, 00:18:51.330 { 00:18:51.330 "name": "BaseBdev3", 00:18:51.330 "uuid": "95ac24f0-70f4-50af-b793-ccc17c7c8036", 00:18:51.330 "is_configured": true, 00:18:51.330 "data_offset": 2048, 00:18:51.330 "data_size": 63488 00:18:51.330 } 00:18:51.330 ] 00:18:51.330 }' 00:18:51.330 18:45:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:51.330 18:45:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.897 18:45:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:18:51.897 18:45:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:51.897 [2024-07-25 18:45:52.345507] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:52.832 18:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:53.090 18:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:18:53.090 18:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:53.090 18:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:18:53.090 18:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:53.090 18:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:53.090 18:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:53.090 18:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:53.090 18:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:53.090 18:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:53.090 18:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:53.090 18:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:53.090 18:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:53.090 18:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:53.090 18:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.090 18:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.349 18:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:53.349 "name": "raid_bdev1", 00:18:53.349 "uuid": "20c0012d-b7fd-4cdd-9cb5-5214584a8a3d", 00:18:53.349 "strip_size_kb": 64, 00:18:53.349 "state": "online", 00:18:53.349 "raid_level": "raid0", 00:18:53.349 "superblock": true, 00:18:53.349 "num_base_bdevs": 3, 00:18:53.349 "num_base_bdevs_discovered": 3, 00:18:53.349 "num_base_bdevs_operational": 3, 00:18:53.349 "base_bdevs_list": [ 00:18:53.349 { 00:18:53.349 "name": "BaseBdev1", 00:18:53.349 "uuid": "313fd20f-a1b1-5fe9-94f0-5b0011e66724", 00:18:53.349 "is_configured": true, 00:18:53.349 "data_offset": 2048, 00:18:53.349 "data_size": 63488 00:18:53.349 }, 00:18:53.349 { 00:18:53.349 "name": "BaseBdev2", 00:18:53.349 "uuid": "11ab763d-2f3a-54c7-9c61-7b06c78dbb41", 00:18:53.349 "is_configured": true, 00:18:53.349 "data_offset": 2048, 00:18:53.349 "data_size": 63488 00:18:53.349 }, 00:18:53.349 { 00:18:53.349 "name": "BaseBdev3", 00:18:53.349 "uuid": "95ac24f0-70f4-50af-b793-ccc17c7c8036", 00:18:53.349 "is_configured": true, 00:18:53.349 "data_offset": 2048, 00:18:53.349 "data_size": 63488 00:18:53.349 } 00:18:53.349 ] 00:18:53.349 }' 00:18:53.349 18:45:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:18:53.349 18:45:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.916 18:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:53.916 [2024-07-25 18:45:54.486152] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:53.916 [2024-07-25 18:45:54.486196] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:53.916 [2024-07-25 18:45:54.488666] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:53.916 [2024-07-25 18:45:54.488712] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.916 [2024-07-25 18:45:54.488749] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:53.916 [2024-07-25 18:45:54.488758] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name raid_bdev1, state offline 00:18:53.916 0 00:18:54.175 18:45:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 127148 00:18:54.175 18:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 127148 ']' 00:18:54.175 18:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 127148 00:18:54.175 18:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:18:54.175 18:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:54.175 18:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 127148 00:18:54.175 18:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:54.175 18:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:54.175 18:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 127148' 00:18:54.175 killing process with pid 127148 00:18:54.175 18:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 127148 00:18:54.175 18:45:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 127148 00:18:54.175 [2024-07-25 18:45:54.531741] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:54.434 [2024-07-25 18:45:54.780217] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:55.812 18:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:18:55.812 18:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.siR7MkafP8 00:18:55.812 18:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:18:55.812 18:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:18:55.812 18:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:18:55.812 18:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:55.812 18:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:55.812 18:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:18:55.812 00:18:55.812 real 0m8.129s 00:18:55.812 user 0m11.655s 00:18:55.812 sys 0m1.261s 00:18:55.812 18:45:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:18:55.812 18:45:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.812 ************************************ 00:18:55.812 END TEST raid_read_error_test 00:18:55.812 ************************************ 00:18:55.812 18:45:56 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:18:55.812 18:45:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:55.812 18:45:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:55.812 18:45:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:56.071 ************************************ 00:18:56.071 START TEST raid_write_error_test 00:18:56.071 ************************************ 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev1 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev2 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev3 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:18:56.071 18:45:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.dkcLZvAVPW 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=127355 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 127355 /var/tmp/spdk-raid.sock 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 127355 ']' 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:56.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:56.071 18:45:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.071 [2024-07-25 18:45:56.499793] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:56.071 [2024-07-25 18:45:56.500028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127355 ] 00:18:56.330 [2024-07-25 18:45:56.685796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.589 [2024-07-25 18:45:56.945328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.847 [2024-07-25 18:45:57.217759] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:56.847 18:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:56.847 18:45:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:18:56.847 18:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:56.847 18:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:57.106 BaseBdev1_malloc 00:18:57.365 18:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:57.365 true 00:18:57.365 18:45:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:57.626 [2024-07-25 18:45:58.013657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:57.626 [2024-07-25 18:45:58.013797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.626 [2024-07-25 18:45:58.013842] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:18:57.626 [2024-07-25 
18:45:58.013865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.626 [2024-07-25 18:45:58.016633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.626 [2024-07-25 18:45:58.016703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:57.626 BaseBdev1 00:18:57.626 18:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:57.626 18:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:57.885 BaseBdev2_malloc 00:18:57.885 18:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:57.885 true 00:18:57.885 18:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:58.144 [2024-07-25 18:45:58.579167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:58.144 [2024-07-25 18:45:58.579290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.144 [2024-07-25 18:45:58.579334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:58.144 [2024-07-25 18:45:58.579356] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.144 [2024-07-25 18:45:58.581956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.144 [2024-07-25 18:45:58.582022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:58.144 BaseBdev2 00:18:58.144 18:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:58.144 18:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:58.403 BaseBdev3_malloc 00:18:58.403 18:45:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:58.661 true 00:18:58.661 18:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:58.661 [2024-07-25 18:45:59.198707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:58.661 [2024-07-25 18:45:59.198811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.661 [2024-07-25 18:45:59.198856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:18:58.661 [2024-07-25 18:45:59.198885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.661 [2024-07-25 18:45:59.201617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.661 [2024-07-25 18:45:59.201676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:58.661 BaseBdev3 00:18:58.661 18:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:18:58.920 [2024-07-25 18:45:59.374776] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:58.920 [2024-07-25 18:45:59.377110] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:58.920 [2024-07-25 18:45:59.377195] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:58.920 [2024-07-25 18:45:59.377394] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:18:58.920 [2024-07-25 18:45:59.377404] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:58.920 [2024-07-25 18:45:59.377564] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:58.920 [2024-07-25 18:45:59.378017] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:18:58.920 [2024-07-25 18:45:59.378036] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013480 00:18:58.920 [2024-07-25 18:45:59.378235] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.920 18:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:58.920 18:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:58.920 18:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:58.920 18:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:58.920 18:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:58.920 18:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:58.920 18:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:58.920 18:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:58.920 18:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:58.920 18:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:58.920 18:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.920 18:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.178 18:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:59.178 "name": "raid_bdev1", 00:18:59.178 "uuid": "fd1fbea6-4e8e-42ff-9895-0abe4699d696", 00:18:59.178 "strip_size_kb": 64, 00:18:59.178 "state": "online", 00:18:59.178 "raid_level": "raid0", 00:18:59.178 "superblock": true, 00:18:59.178 "num_base_bdevs": 3, 00:18:59.178 "num_base_bdevs_discovered": 3, 00:18:59.178 "num_base_bdevs_operational": 3, 00:18:59.178 "base_bdevs_list": [ 00:18:59.178 { 00:18:59.178 "name": "BaseBdev1", 00:18:59.178 "uuid": "72053069-56ca-57b4-a313-9ea1a4b638e8", 00:18:59.178 "is_configured": true, 00:18:59.178 "data_offset": 2048, 00:18:59.178 "data_size": 63488 00:18:59.178 }, 00:18:59.178 { 00:18:59.178 "name": "BaseBdev2", 00:18:59.178 "uuid": "b6ae072e-d8db-54ca-bbfa-3a2e3d0a63e5", 00:18:59.178 "is_configured": true, 
00:18:59.178 "data_offset": 2048, 00:18:59.178 "data_size": 63488 00:18:59.178 }, 00:18:59.178 { 00:18:59.178 "name": "BaseBdev3", 00:18:59.178 "uuid": "7b88f79d-8c3d-5682-bf5c-9b51a9e6ef40", 00:18:59.178 "is_configured": true, 00:18:59.178 "data_offset": 2048, 00:18:59.178 "data_size": 63488 00:18:59.178 } 00:18:59.178 ] 00:18:59.178 }' 00:18:59.179 18:45:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:59.179 18:45:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.746 18:46:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:18:59.746 18:46:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:59.746 [2024-07-25 18:46:00.304981] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:00.682 18:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:00.940 18:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:19:00.940 18:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:19:00.940 18:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:19:00.940 18:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:19:00.940 18:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:00.940 18:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:00.940 18:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:00.940 18:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:00.940 18:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:00.940 18:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:00.940 18:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:00.940 18:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:00.940 18:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:00.940 18:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.940 18:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.199 18:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:01.199 "name": "raid_bdev1", 00:19:01.199 "uuid": "fd1fbea6-4e8e-42ff-9895-0abe4699d696", 00:19:01.199 "strip_size_kb": 64, 00:19:01.199 "state": "online", 00:19:01.199 "raid_level": "raid0", 00:19:01.199 "superblock": true, 00:19:01.199 "num_base_bdevs": 3, 00:19:01.199 "num_base_bdevs_discovered": 3, 00:19:01.199 "num_base_bdevs_operational": 3, 00:19:01.199 "base_bdevs_list": [ 00:19:01.199 { 00:19:01.199 "name": "BaseBdev1", 00:19:01.199 "uuid": "72053069-56ca-57b4-a313-9ea1a4b638e8", 00:19:01.199 "is_configured": true, 
00:19:01.199 "data_offset": 2048, 00:19:01.199 "data_size": 63488 00:19:01.199 }, 00:19:01.199 { 00:19:01.199 "name": "BaseBdev2", 00:19:01.199 "uuid": "b6ae072e-d8db-54ca-bbfa-3a2e3d0a63e5", 00:19:01.199 "is_configured": true, 00:19:01.199 "data_offset": 2048, 00:19:01.199 "data_size": 63488 00:19:01.199 }, 00:19:01.199 { 00:19:01.199 "name": "BaseBdev3", 00:19:01.199 "uuid": "7b88f79d-8c3d-5682-bf5c-9b51a9e6ef40", 00:19:01.199 "is_configured": true, 00:19:01.199 "data_offset": 2048, 00:19:01.199 "data_size": 63488 00:19:01.199 } 00:19:01.199 ] 00:19:01.199 }' 00:19:01.199 18:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:01.199 18:46:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.135 18:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:02.135 [2024-07-25 18:46:02.638345] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:02.135 [2024-07-25 18:46:02.638392] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:02.135 [2024-07-25 18:46:02.640948] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:02.135 [2024-07-25 18:46:02.641000] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.135 [2024-07-25 18:46:02.641037] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:02.135 [2024-07-25 18:46:02.641045] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name raid_bdev1, state offline 00:19:02.135 0 00:19:02.135 18:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 127355 00:19:02.135 18:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 127355 ']' 00:19:02.135 18:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 127355 00:19:02.135 18:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:19:02.135 18:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:02.135 18:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 127355 00:19:02.135 killing process with pid 127355 00:19:02.135 18:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:02.135 18:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:02.135 18:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 127355' 00:19:02.135 18:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 127355 00:19:02.135 18:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 127355 00:19:02.135 [2024-07-25 18:46:02.685914] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:02.394 [2024-07-25 18:46:02.942663] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:04.327 18:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:19:04.327 18:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.dkcLZvAVPW 00:19:04.327 18:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 
00:19:04.327 ************************************ 00:19:04.327 END TEST raid_write_error_test 00:19:04.327 ************************************ 00:19:04.327 18:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.43 00:19:04.327 18:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:19:04.327 18:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:04.327 18:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:04.327 18:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.43 != \0\.\0\0 ]] 00:19:04.327 00:19:04.327 real 0m8.103s 00:19:04.327 user 0m11.503s 00:19:04.327 sys 0m1.272s 00:19:04.327 18:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:04.327 18:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.327 18:46:04 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:19:04.327 18:46:04 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:19:04.327 18:46:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:04.327 18:46:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:04.327 18:46:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:04.327 ************************************ 00:19:04.327 START TEST raid_state_function_test 00:19:04.327 ************************************ 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:04.327 18:46:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=127560 00:19:04.327 Process raid pid: 127560 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 127560' 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 127560 /var/tmp/spdk-raid.sock 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 127560 ']' 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:04.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:04.327 18:46:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.327 [2024-07-25 18:46:04.664727] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:19:04.327 [2024-07-25 18:46:04.664974] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.327 [2024-07-25 18:46:04.851839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.617 [2024-07-25 18:46:05.065770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.875 [2024-07-25 18:46:05.260352] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:05.134 18:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:05.134 18:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:19:05.134 18:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:05.392 [2024-07-25 18:46:05.776062] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:05.392 [2024-07-25 18:46:05.776175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:05.392 [2024-07-25 18:46:05.776187] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:05.392 [2024-07-25 18:46:05.776217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:05.392 [2024-07-25 18:46:05.776226] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:05.392 [2024-07-25 18:46:05.776244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:05.392 18:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:05.392 18:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:05.392 18:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:05.392 18:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:05.392 18:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:05.392 18:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:05.392 18:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:05.393 18:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:05.393 18:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:05.393 18:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:05.393 18:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.393 18:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.651 18:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:05.651 "name": "Existed_Raid", 00:19:05.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.651 
"strip_size_kb": 64, 00:19:05.651 "state": "configuring", 00:19:05.651 "raid_level": "concat", 00:19:05.651 "superblock": false, 00:19:05.651 "num_base_bdevs": 3, 00:19:05.651 "num_base_bdevs_discovered": 0, 00:19:05.651 "num_base_bdevs_operational": 3, 00:19:05.651 "base_bdevs_list": [ 00:19:05.651 { 00:19:05.651 "name": "BaseBdev1", 00:19:05.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.651 "is_configured": false, 00:19:05.651 "data_offset": 0, 00:19:05.651 "data_size": 0 00:19:05.651 }, 00:19:05.651 { 00:19:05.651 "name": "BaseBdev2", 00:19:05.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.651 "is_configured": false, 00:19:05.651 "data_offset": 0, 00:19:05.651 "data_size": 0 00:19:05.651 }, 00:19:05.651 { 00:19:05.651 "name": "BaseBdev3", 00:19:05.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.651 "is_configured": false, 00:19:05.651 "data_offset": 0, 00:19:05.651 "data_size": 0 00:19:05.651 } 00:19:05.651 ] 00:19:05.651 }' 00:19:05.651 18:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:05.651 18:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.219 18:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:06.219 [2024-07-25 18:46:06.760098] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:06.219 [2024-07-25 18:46:06.760139] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:19:06.219 18:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:06.477 [2024-07-25 18:46:06.940146] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:06.477 [2024-07-25 18:46:06.940228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:06.477 [2024-07-25 18:46:06.940237] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:06.477 [2024-07-25 18:46:06.940256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:06.477 [2024-07-25 18:46:06.940263] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:06.477 [2024-07-25 18:46:06.940293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:06.477 18:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:06.736 [2024-07-25 18:46:07.155090] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:06.736 BaseBdev1 00:19:06.736 18:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:06.736 18:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:06.736 18:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:06.736 18:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:06.736 18:46:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:06.736 18:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:06.736 18:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:06.994 18:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:07.254 [ 00:19:07.254 { 00:19:07.254 "name": "BaseBdev1", 00:19:07.254 "aliases": [ 00:19:07.254 "4f025b8b-b9d3-4dbc-984b-7dfc0c92d065" 00:19:07.254 ], 00:19:07.254 "product_name": "Malloc disk", 00:19:07.254 "block_size": 512, 00:19:07.254 "num_blocks": 65536, 00:19:07.254 "uuid": "4f025b8b-b9d3-4dbc-984b-7dfc0c92d065", 00:19:07.254 "assigned_rate_limits": { 00:19:07.254 "rw_ios_per_sec": 0, 00:19:07.254 "rw_mbytes_per_sec": 0, 00:19:07.254 "r_mbytes_per_sec": 0, 00:19:07.254 "w_mbytes_per_sec": 0 00:19:07.254 }, 00:19:07.254 "claimed": true, 00:19:07.254 "claim_type": "exclusive_write", 00:19:07.254 "zoned": false, 00:19:07.254 "supported_io_types": { 00:19:07.254 "read": true, 00:19:07.254 "write": true, 00:19:07.254 "unmap": true, 00:19:07.254 "flush": true, 00:19:07.254 "reset": true, 00:19:07.254 "nvme_admin": false, 00:19:07.254 "nvme_io": false, 00:19:07.254 "nvme_io_md": false, 00:19:07.254 "write_zeroes": true, 00:19:07.254 "zcopy": true, 00:19:07.254 "get_zone_info": false, 00:19:07.254 "zone_management": false, 00:19:07.254 "zone_append": false, 00:19:07.254 "compare": false, 00:19:07.254 "compare_and_write": false, 00:19:07.254 "abort": true, 00:19:07.254 "seek_hole": false, 00:19:07.254 "seek_data": false, 00:19:07.254 "copy": true, 00:19:07.254 "nvme_iov_md": false 00:19:07.254 }, 00:19:07.254 "memory_domains": [ 00:19:07.254 { 00:19:07.254 "dma_device_id": "system", 00:19:07.254 "dma_device_type": 1 00:19:07.254 }, 00:19:07.254 { 00:19:07.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.254 "dma_device_type": 2 00:19:07.254 } 00:19:07.254 ], 00:19:07.254 "driver_specific": {} 00:19:07.254 } 00:19:07.254 ] 00:19:07.254 18:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:07.254 18:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:07.254 18:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:07.254 18:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:07.254 18:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:07.254 18:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:07.254 18:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:07.254 18:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:07.254 18:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:07.254 18:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:07.254 18:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:07.254 18:46:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.254 18:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.513 18:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:07.513 "name": "Existed_Raid", 00:19:07.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.513 "strip_size_kb": 64, 00:19:07.513 "state": "configuring", 00:19:07.513 "raid_level": "concat", 00:19:07.513 "superblock": false, 00:19:07.513 "num_base_bdevs": 3, 00:19:07.513 "num_base_bdevs_discovered": 1, 00:19:07.513 "num_base_bdevs_operational": 3, 00:19:07.513 "base_bdevs_list": [ 00:19:07.513 { 00:19:07.513 "name": "BaseBdev1", 00:19:07.513 "uuid": "4f025b8b-b9d3-4dbc-984b-7dfc0c92d065", 00:19:07.513 "is_configured": true, 00:19:07.513 "data_offset": 0, 00:19:07.513 "data_size": 65536 00:19:07.513 }, 00:19:07.513 { 00:19:07.513 "name": "BaseBdev2", 00:19:07.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.513 "is_configured": false, 00:19:07.513 "data_offset": 0, 00:19:07.513 "data_size": 0 00:19:07.513 }, 00:19:07.513 { 00:19:07.513 "name": "BaseBdev3", 00:19:07.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.513 "is_configured": false, 00:19:07.513 "data_offset": 0, 00:19:07.513 "data_size": 0 00:19:07.513 } 00:19:07.513 ] 00:19:07.513 }' 00:19:07.513 18:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:07.513 18:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.080 18:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:08.338 [2024-07-25 18:46:08.659365] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:08.338 [2024-07-25 18:46:08.659573] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:19:08.338 18:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:08.338 [2024-07-25 18:46:08.915437] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:08.597 [2024-07-25 18:46:08.917858] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:08.597 [2024-07-25 18:46:08.918047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:08.597 [2024-07-25 18:46:08.918129] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:08.597 [2024-07-25 18:46:08.918206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:08.597 18:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:08.597 18:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:08.597 18:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:08.597 18:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:08.597 18:46:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:08.597 18:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:08.597 18:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:08.597 18:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:08.597 18:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:08.597 18:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:08.597 18:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:08.597 18:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:08.597 18:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.597 18:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.597 18:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:08.597 "name": "Existed_Raid", 00:19:08.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.597 "strip_size_kb": 64, 00:19:08.597 "state": "configuring", 00:19:08.597 "raid_level": "concat", 00:19:08.597 "superblock": false, 00:19:08.597 "num_base_bdevs": 3, 00:19:08.597 "num_base_bdevs_discovered": 1, 00:19:08.597 "num_base_bdevs_operational": 3, 00:19:08.597 "base_bdevs_list": [ 00:19:08.597 { 00:19:08.597 "name": "BaseBdev1", 00:19:08.597 "uuid": "4f025b8b-b9d3-4dbc-984b-7dfc0c92d065", 00:19:08.597 "is_configured": true, 00:19:08.597 "data_offset": 0, 00:19:08.597 "data_size": 65536 00:19:08.597 }, 00:19:08.597 { 00:19:08.597 "name": "BaseBdev2", 00:19:08.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.597 "is_configured": false, 00:19:08.597 "data_offset": 0, 00:19:08.597 "data_size": 0 00:19:08.597 }, 00:19:08.597 { 00:19:08.597 "name": "BaseBdev3", 00:19:08.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.597 "is_configured": false, 00:19:08.597 "data_offset": 0, 00:19:08.597 "data_size": 0 00:19:08.597 } 00:19:08.597 ] 00:19:08.597 }' 00:19:08.597 18:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:08.597 18:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.163 18:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:09.422 [2024-07-25 18:46:09.921203] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:09.422 BaseBdev2 00:19:09.422 18:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:09.422 18:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:09.422 18:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:09.422 18:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:09.422 18:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:09.422 18:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:19:09.422 18:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:09.680 18:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:09.939 [ 00:19:09.939 { 00:19:09.939 "name": "BaseBdev2", 00:19:09.939 "aliases": [ 00:19:09.939 "3259afd4-b29b-4a82-bdfa-cf9149f8bfff" 00:19:09.939 ], 00:19:09.939 "product_name": "Malloc disk", 00:19:09.939 "block_size": 512, 00:19:09.939 "num_blocks": 65536, 00:19:09.939 "uuid": "3259afd4-b29b-4a82-bdfa-cf9149f8bfff", 00:19:09.939 "assigned_rate_limits": { 00:19:09.939 "rw_ios_per_sec": 0, 00:19:09.939 "rw_mbytes_per_sec": 0, 00:19:09.939 "r_mbytes_per_sec": 0, 00:19:09.939 "w_mbytes_per_sec": 0 00:19:09.939 }, 00:19:09.939 "claimed": true, 00:19:09.939 "claim_type": "exclusive_write", 00:19:09.939 "zoned": false, 00:19:09.939 "supported_io_types": { 00:19:09.939 "read": true, 00:19:09.939 "write": true, 00:19:09.939 "unmap": true, 00:19:09.939 "flush": true, 00:19:09.939 "reset": true, 00:19:09.939 "nvme_admin": false, 00:19:09.939 "nvme_io": false, 00:19:09.939 "nvme_io_md": false, 00:19:09.939 "write_zeroes": true, 00:19:09.939 "zcopy": true, 00:19:09.939 "get_zone_info": false, 00:19:09.939 "zone_management": false, 00:19:09.939 "zone_append": false, 00:19:09.939 "compare": false, 00:19:09.939 "compare_and_write": false, 00:19:09.939 "abort": true, 00:19:09.939 "seek_hole": false, 00:19:09.939 "seek_data": false, 00:19:09.939 "copy": true, 00:19:09.939 "nvme_iov_md": false 00:19:09.939 }, 00:19:09.939 "memory_domains": [ 00:19:09.939 { 00:19:09.939 "dma_device_id": "system", 00:19:09.939 "dma_device_type": 1 00:19:09.939 }, 00:19:09.939 { 00:19:09.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.939 "dma_device_type": 2 00:19:09.939 } 00:19:09.939 ], 00:19:09.939 "driver_specific": {} 00:19:09.939 } 00:19:09.939 ] 00:19:09.939 18:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:09.939 18:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:09.939 18:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:09.939 18:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:09.939 18:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:09.939 18:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:09.939 18:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:09.940 18:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:09.940 18:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:09.940 18:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:09.940 18:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:09.940 18:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:09.940 18:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:09.940 
18:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.940 18:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.198 18:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:10.198 "name": "Existed_Raid", 00:19:10.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.198 "strip_size_kb": 64, 00:19:10.198 "state": "configuring", 00:19:10.198 "raid_level": "concat", 00:19:10.198 "superblock": false, 00:19:10.198 "num_base_bdevs": 3, 00:19:10.198 "num_base_bdevs_discovered": 2, 00:19:10.198 "num_base_bdevs_operational": 3, 00:19:10.198 "base_bdevs_list": [ 00:19:10.198 { 00:19:10.198 "name": "BaseBdev1", 00:19:10.198 "uuid": "4f025b8b-b9d3-4dbc-984b-7dfc0c92d065", 00:19:10.198 "is_configured": true, 00:19:10.198 "data_offset": 0, 00:19:10.198 "data_size": 65536 00:19:10.198 }, 00:19:10.198 { 00:19:10.198 "name": "BaseBdev2", 00:19:10.198 "uuid": "3259afd4-b29b-4a82-bdfa-cf9149f8bfff", 00:19:10.198 "is_configured": true, 00:19:10.198 "data_offset": 0, 00:19:10.198 "data_size": 65536 00:19:10.198 }, 00:19:10.198 { 00:19:10.198 "name": "BaseBdev3", 00:19:10.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.198 "is_configured": false, 00:19:10.198 "data_offset": 0, 00:19:10.198 "data_size": 0 00:19:10.198 } 00:19:10.198 ] 00:19:10.198 }' 00:19:10.198 18:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:10.198 18:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.133 18:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:11.133 [2024-07-25 18:46:11.622268] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:11.133 [2024-07-25 18:46:11.622317] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:19:11.133 [2024-07-25 18:46:11.622341] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:11.133 [2024-07-25 18:46:11.622468] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:19:11.133 [2024-07-25 18:46:11.622828] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:19:11.133 [2024-07-25 18:46:11.622846] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:19:11.133 [2024-07-25 18:46:11.623096] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.133 BaseBdev3 00:19:11.133 18:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:11.133 18:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:11.133 18:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:11.133 18:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:11.133 18:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:11.133 18:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:11.133 18:46:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:11.391 18:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:11.650 [ 00:19:11.650 { 00:19:11.650 "name": "BaseBdev3", 00:19:11.650 "aliases": [ 00:19:11.650 "b603f1b4-d3ce-4ec5-8413-f4836515efc0" 00:19:11.650 ], 00:19:11.650 "product_name": "Malloc disk", 00:19:11.650 "block_size": 512, 00:19:11.650 "num_blocks": 65536, 00:19:11.650 "uuid": "b603f1b4-d3ce-4ec5-8413-f4836515efc0", 00:19:11.650 "assigned_rate_limits": { 00:19:11.650 "rw_ios_per_sec": 0, 00:19:11.650 "rw_mbytes_per_sec": 0, 00:19:11.650 "r_mbytes_per_sec": 0, 00:19:11.650 "w_mbytes_per_sec": 0 00:19:11.650 }, 00:19:11.650 "claimed": true, 00:19:11.650 "claim_type": "exclusive_write", 00:19:11.650 "zoned": false, 00:19:11.650 "supported_io_types": { 00:19:11.650 "read": true, 00:19:11.650 "write": true, 00:19:11.650 "unmap": true, 00:19:11.650 "flush": true, 00:19:11.650 "reset": true, 00:19:11.650 "nvme_admin": false, 00:19:11.650 "nvme_io": false, 00:19:11.650 "nvme_io_md": false, 00:19:11.650 "write_zeroes": true, 00:19:11.650 "zcopy": true, 00:19:11.650 "get_zone_info": false, 00:19:11.650 "zone_management": false, 00:19:11.650 "zone_append": false, 00:19:11.650 "compare": false, 00:19:11.650 "compare_and_write": false, 00:19:11.650 "abort": true, 00:19:11.650 "seek_hole": false, 00:19:11.650 "seek_data": false, 00:19:11.650 "copy": true, 00:19:11.650 "nvme_iov_md": false 00:19:11.650 }, 00:19:11.650 "memory_domains": [ 00:19:11.650 { 00:19:11.650 "dma_device_id": "system", 00:19:11.650 "dma_device_type": 1 00:19:11.650 }, 00:19:11.650 { 00:19:11.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.650 "dma_device_type": 2 00:19:11.650 } 00:19:11.650 ], 00:19:11.650 "driver_specific": {} 00:19:11.650 } 00:19:11.650 ] 00:19:11.650 18:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:11.650 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:11.650 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:11.650 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:11.650 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:11.650 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:11.650 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:11.650 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:11.650 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:11.650 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:11.650 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:11.650 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:11.650 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:11.650 18:46:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.650 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.909 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:11.909 "name": "Existed_Raid", 00:19:11.909 "uuid": "c4509671-c584-4802-8d9c-bae568fe3bfa", 00:19:11.909 "strip_size_kb": 64, 00:19:11.909 "state": "online", 00:19:11.909 "raid_level": "concat", 00:19:11.909 "superblock": false, 00:19:11.909 "num_base_bdevs": 3, 00:19:11.909 "num_base_bdevs_discovered": 3, 00:19:11.909 "num_base_bdevs_operational": 3, 00:19:11.909 "base_bdevs_list": [ 00:19:11.909 { 00:19:11.909 "name": "BaseBdev1", 00:19:11.909 "uuid": "4f025b8b-b9d3-4dbc-984b-7dfc0c92d065", 00:19:11.909 "is_configured": true, 00:19:11.909 "data_offset": 0, 00:19:11.909 "data_size": 65536 00:19:11.909 }, 00:19:11.909 { 00:19:11.909 "name": "BaseBdev2", 00:19:11.909 "uuid": "3259afd4-b29b-4a82-bdfa-cf9149f8bfff", 00:19:11.909 "is_configured": true, 00:19:11.909 "data_offset": 0, 00:19:11.909 "data_size": 65536 00:19:11.909 }, 00:19:11.909 { 00:19:11.909 "name": "BaseBdev3", 00:19:11.909 "uuid": "b603f1b4-d3ce-4ec5-8413-f4836515efc0", 00:19:11.909 "is_configured": true, 00:19:11.909 "data_offset": 0, 00:19:11.909 "data_size": 65536 00:19:11.909 } 00:19:11.909 ] 00:19:11.909 }' 00:19:11.909 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:11.909 18:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.476 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:12.476 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:12.476 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:12.476 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:12.476 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:12.476 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:12.476 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:12.476 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:12.476 [2024-07-25 18:46:12.980241] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:12.476 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:12.476 "name": "Existed_Raid", 00:19:12.476 "aliases": [ 00:19:12.476 "c4509671-c584-4802-8d9c-bae568fe3bfa" 00:19:12.476 ], 00:19:12.476 "product_name": "Raid Volume", 00:19:12.476 "block_size": 512, 00:19:12.476 "num_blocks": 196608, 00:19:12.476 "uuid": "c4509671-c584-4802-8d9c-bae568fe3bfa", 00:19:12.476 "assigned_rate_limits": { 00:19:12.476 "rw_ios_per_sec": 0, 00:19:12.476 "rw_mbytes_per_sec": 0, 00:19:12.476 "r_mbytes_per_sec": 0, 00:19:12.476 "w_mbytes_per_sec": 0 00:19:12.476 }, 00:19:12.476 "claimed": false, 00:19:12.476 "zoned": false, 00:19:12.476 "supported_io_types": { 00:19:12.476 "read": true, 00:19:12.476 "write": true, 00:19:12.476 "unmap": true, 00:19:12.476 "flush": true, 
00:19:12.476 "reset": true, 00:19:12.476 "nvme_admin": false, 00:19:12.476 "nvme_io": false, 00:19:12.476 "nvme_io_md": false, 00:19:12.476 "write_zeroes": true, 00:19:12.476 "zcopy": false, 00:19:12.476 "get_zone_info": false, 00:19:12.476 "zone_management": false, 00:19:12.476 "zone_append": false, 00:19:12.476 "compare": false, 00:19:12.476 "compare_and_write": false, 00:19:12.476 "abort": false, 00:19:12.476 "seek_hole": false, 00:19:12.476 "seek_data": false, 00:19:12.476 "copy": false, 00:19:12.476 "nvme_iov_md": false 00:19:12.476 }, 00:19:12.476 "memory_domains": [ 00:19:12.476 { 00:19:12.476 "dma_device_id": "system", 00:19:12.476 "dma_device_type": 1 00:19:12.476 }, 00:19:12.476 { 00:19:12.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.476 "dma_device_type": 2 00:19:12.476 }, 00:19:12.476 { 00:19:12.476 "dma_device_id": "system", 00:19:12.476 "dma_device_type": 1 00:19:12.476 }, 00:19:12.476 { 00:19:12.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.476 "dma_device_type": 2 00:19:12.476 }, 00:19:12.476 { 00:19:12.476 "dma_device_id": "system", 00:19:12.476 "dma_device_type": 1 00:19:12.476 }, 00:19:12.476 { 00:19:12.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.476 "dma_device_type": 2 00:19:12.476 } 00:19:12.476 ], 00:19:12.476 "driver_specific": { 00:19:12.476 "raid": { 00:19:12.476 "uuid": "c4509671-c584-4802-8d9c-bae568fe3bfa", 00:19:12.476 "strip_size_kb": 64, 00:19:12.476 "state": "online", 00:19:12.476 "raid_level": "concat", 00:19:12.476 "superblock": false, 00:19:12.476 "num_base_bdevs": 3, 00:19:12.476 "num_base_bdevs_discovered": 3, 00:19:12.476 "num_base_bdevs_operational": 3, 00:19:12.476 "base_bdevs_list": [ 00:19:12.476 { 00:19:12.476 "name": "BaseBdev1", 00:19:12.476 "uuid": "4f025b8b-b9d3-4dbc-984b-7dfc0c92d065", 00:19:12.476 "is_configured": true, 00:19:12.476 "data_offset": 0, 00:19:12.476 "data_size": 65536 00:19:12.476 }, 00:19:12.476 { 00:19:12.476 "name": "BaseBdev2", 00:19:12.476 "uuid": "3259afd4-b29b-4a82-bdfa-cf9149f8bfff", 00:19:12.476 "is_configured": true, 00:19:12.476 "data_offset": 0, 00:19:12.476 "data_size": 65536 00:19:12.476 }, 00:19:12.476 { 00:19:12.476 "name": "BaseBdev3", 00:19:12.476 "uuid": "b603f1b4-d3ce-4ec5-8413-f4836515efc0", 00:19:12.476 "is_configured": true, 00:19:12.476 "data_offset": 0, 00:19:12.476 "data_size": 65536 00:19:12.476 } 00:19:12.476 ] 00:19:12.476 } 00:19:12.476 } 00:19:12.476 }' 00:19:12.476 18:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:12.476 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:12.476 BaseBdev2 00:19:12.476 BaseBdev3' 00:19:12.476 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:12.476 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:12.476 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:12.735 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:12.735 "name": "BaseBdev1", 00:19:12.735 "aliases": [ 00:19:12.735 "4f025b8b-b9d3-4dbc-984b-7dfc0c92d065" 00:19:12.735 ], 00:19:12.735 "product_name": "Malloc disk", 00:19:12.735 "block_size": 512, 00:19:12.735 "num_blocks": 65536, 00:19:12.735 "uuid": "4f025b8b-b9d3-4dbc-984b-7dfc0c92d065", 
00:19:12.735 "assigned_rate_limits": { 00:19:12.735 "rw_ios_per_sec": 0, 00:19:12.735 "rw_mbytes_per_sec": 0, 00:19:12.735 "r_mbytes_per_sec": 0, 00:19:12.735 "w_mbytes_per_sec": 0 00:19:12.735 }, 00:19:12.735 "claimed": true, 00:19:12.735 "claim_type": "exclusive_write", 00:19:12.735 "zoned": false, 00:19:12.735 "supported_io_types": { 00:19:12.735 "read": true, 00:19:12.735 "write": true, 00:19:12.735 "unmap": true, 00:19:12.735 "flush": true, 00:19:12.735 "reset": true, 00:19:12.735 "nvme_admin": false, 00:19:12.735 "nvme_io": false, 00:19:12.735 "nvme_io_md": false, 00:19:12.735 "write_zeroes": true, 00:19:12.735 "zcopy": true, 00:19:12.735 "get_zone_info": false, 00:19:12.735 "zone_management": false, 00:19:12.735 "zone_append": false, 00:19:12.735 "compare": false, 00:19:12.735 "compare_and_write": false, 00:19:12.735 "abort": true, 00:19:12.735 "seek_hole": false, 00:19:12.735 "seek_data": false, 00:19:12.735 "copy": true, 00:19:12.735 "nvme_iov_md": false 00:19:12.735 }, 00:19:12.735 "memory_domains": [ 00:19:12.735 { 00:19:12.735 "dma_device_id": "system", 00:19:12.735 "dma_device_type": 1 00:19:12.735 }, 00:19:12.735 { 00:19:12.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.735 "dma_device_type": 2 00:19:12.735 } 00:19:12.735 ], 00:19:12.735 "driver_specific": {} 00:19:12.735 }' 00:19:12.735 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:12.735 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:13.000 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:13.000 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:13.000 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:13.000 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:13.000 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:13.000 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:13.000 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:13.000 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:13.000 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:13.000 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:13.000 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:13.000 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:13.000 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:13.568 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:13.568 "name": "BaseBdev2", 00:19:13.568 "aliases": [ 00:19:13.568 "3259afd4-b29b-4a82-bdfa-cf9149f8bfff" 00:19:13.568 ], 00:19:13.568 "product_name": "Malloc disk", 00:19:13.568 "block_size": 512, 00:19:13.568 "num_blocks": 65536, 00:19:13.568 "uuid": "3259afd4-b29b-4a82-bdfa-cf9149f8bfff", 00:19:13.568 "assigned_rate_limits": { 00:19:13.568 "rw_ios_per_sec": 0, 00:19:13.568 "rw_mbytes_per_sec": 0, 00:19:13.568 "r_mbytes_per_sec": 0, 00:19:13.568 "w_mbytes_per_sec": 0 00:19:13.568 }, 
00:19:13.568 "claimed": true, 00:19:13.568 "claim_type": "exclusive_write", 00:19:13.568 "zoned": false, 00:19:13.568 "supported_io_types": { 00:19:13.568 "read": true, 00:19:13.568 "write": true, 00:19:13.568 "unmap": true, 00:19:13.568 "flush": true, 00:19:13.568 "reset": true, 00:19:13.568 "nvme_admin": false, 00:19:13.568 "nvme_io": false, 00:19:13.568 "nvme_io_md": false, 00:19:13.568 "write_zeroes": true, 00:19:13.568 "zcopy": true, 00:19:13.568 "get_zone_info": false, 00:19:13.568 "zone_management": false, 00:19:13.568 "zone_append": false, 00:19:13.568 "compare": false, 00:19:13.568 "compare_and_write": false, 00:19:13.568 "abort": true, 00:19:13.568 "seek_hole": false, 00:19:13.568 "seek_data": false, 00:19:13.568 "copy": true, 00:19:13.568 "nvme_iov_md": false 00:19:13.568 }, 00:19:13.568 "memory_domains": [ 00:19:13.568 { 00:19:13.568 "dma_device_id": "system", 00:19:13.568 "dma_device_type": 1 00:19:13.568 }, 00:19:13.568 { 00:19:13.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.568 "dma_device_type": 2 00:19:13.568 } 00:19:13.568 ], 00:19:13.568 "driver_specific": {} 00:19:13.568 }' 00:19:13.568 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:13.568 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:13.568 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:13.568 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:13.568 18:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:13.568 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:13.568 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:13.568 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:13.568 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:13.568 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:13.827 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:13.827 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:13.827 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:13.827 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:13.827 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:14.086 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:14.086 "name": "BaseBdev3", 00:19:14.086 "aliases": [ 00:19:14.086 "b603f1b4-d3ce-4ec5-8413-f4836515efc0" 00:19:14.086 ], 00:19:14.086 "product_name": "Malloc disk", 00:19:14.086 "block_size": 512, 00:19:14.086 "num_blocks": 65536, 00:19:14.086 "uuid": "b603f1b4-d3ce-4ec5-8413-f4836515efc0", 00:19:14.086 "assigned_rate_limits": { 00:19:14.086 "rw_ios_per_sec": 0, 00:19:14.086 "rw_mbytes_per_sec": 0, 00:19:14.086 "r_mbytes_per_sec": 0, 00:19:14.086 "w_mbytes_per_sec": 0 00:19:14.086 }, 00:19:14.086 "claimed": true, 00:19:14.086 "claim_type": "exclusive_write", 00:19:14.086 "zoned": false, 00:19:14.086 "supported_io_types": { 00:19:14.086 "read": true, 00:19:14.086 "write": true, 
00:19:14.086 "unmap": true, 00:19:14.086 "flush": true, 00:19:14.086 "reset": true, 00:19:14.086 "nvme_admin": false, 00:19:14.086 "nvme_io": false, 00:19:14.086 "nvme_io_md": false, 00:19:14.086 "write_zeroes": true, 00:19:14.086 "zcopy": true, 00:19:14.086 "get_zone_info": false, 00:19:14.086 "zone_management": false, 00:19:14.086 "zone_append": false, 00:19:14.086 "compare": false, 00:19:14.086 "compare_and_write": false, 00:19:14.086 "abort": true, 00:19:14.086 "seek_hole": false, 00:19:14.086 "seek_data": false, 00:19:14.086 "copy": true, 00:19:14.086 "nvme_iov_md": false 00:19:14.086 }, 00:19:14.086 "memory_domains": [ 00:19:14.086 { 00:19:14.086 "dma_device_id": "system", 00:19:14.086 "dma_device_type": 1 00:19:14.086 }, 00:19:14.086 { 00:19:14.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.086 "dma_device_type": 2 00:19:14.086 } 00:19:14.086 ], 00:19:14.086 "driver_specific": {} 00:19:14.086 }' 00:19:14.086 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:14.086 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:14.086 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:14.086 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:14.086 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:14.344 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:14.344 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:14.344 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:14.344 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:14.344 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:14.344 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:14.344 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:14.344 18:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:14.603 [2024-07-25 18:46:15.120440] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:14.603 [2024-07-25 18:46:15.120484] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:14.603 [2024-07-25 18:46:15.120546] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:14.861 18:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:14.861 18:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:19:14.861 18:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:14.861 18:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:14.861 18:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:14.861 18:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:19:14.861 18:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:14.861 18:46:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:14.861 18:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:14.861 18:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:14.861 18:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:14.861 18:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:14.862 18:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:14.862 18:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:14.862 18:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:14.862 18:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.862 18:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.862 18:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:14.862 "name": "Existed_Raid", 00:19:14.862 "uuid": "c4509671-c584-4802-8d9c-bae568fe3bfa", 00:19:14.862 "strip_size_kb": 64, 00:19:14.862 "state": "offline", 00:19:14.862 "raid_level": "concat", 00:19:14.862 "superblock": false, 00:19:14.862 "num_base_bdevs": 3, 00:19:15.121 "num_base_bdevs_discovered": 2, 00:19:15.121 "num_base_bdevs_operational": 2, 00:19:15.121 "base_bdevs_list": [ 00:19:15.121 { 00:19:15.121 "name": null, 00:19:15.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.121 "is_configured": false, 00:19:15.121 "data_offset": 0, 00:19:15.121 "data_size": 65536 00:19:15.121 }, 00:19:15.121 { 00:19:15.121 "name": "BaseBdev2", 00:19:15.121 "uuid": "3259afd4-b29b-4a82-bdfa-cf9149f8bfff", 00:19:15.121 "is_configured": true, 00:19:15.121 "data_offset": 0, 00:19:15.121 "data_size": 65536 00:19:15.121 }, 00:19:15.121 { 00:19:15.121 "name": "BaseBdev3", 00:19:15.121 "uuid": "b603f1b4-d3ce-4ec5-8413-f4836515efc0", 00:19:15.121 "is_configured": true, 00:19:15.121 "data_offset": 0, 00:19:15.121 "data_size": 65536 00:19:15.121 } 00:19:15.121 ] 00:19:15.121 }' 00:19:15.121 18:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:15.121 18:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.689 18:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:15.689 18:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:15.689 18:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.689 18:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:15.689 18:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:15.689 18:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:15.689 18:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:15.948 [2024-07-25 18:46:16.389645] 
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:15.948 18:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:15.948 18:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:15.948 18:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.948 18:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:16.207 18:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:16.207 18:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:16.207 18:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:16.466 [2024-07-25 18:46:16.998893] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:16.466 [2024-07-25 18:46:16.998964] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:19:16.725 18:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:16.725 18:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:16.725 18:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:16.725 18:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.983 18:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:16.983 18:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:16.983 18:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:19:16.983 18:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:16.983 18:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:16.983 18:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:17.242 BaseBdev2 00:19:17.242 18:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:17.242 18:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:17.242 18:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:17.242 18:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:17.242 18:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:17.242 18:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:17.242 18:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:17.242 18:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 
00:19:17.501 [ 00:19:17.501 { 00:19:17.501 "name": "BaseBdev2", 00:19:17.501 "aliases": [ 00:19:17.501 "af13d134-113f-42d4-946a-fa08ae40f1cc" 00:19:17.501 ], 00:19:17.501 "product_name": "Malloc disk", 00:19:17.501 "block_size": 512, 00:19:17.501 "num_blocks": 65536, 00:19:17.501 "uuid": "af13d134-113f-42d4-946a-fa08ae40f1cc", 00:19:17.501 "assigned_rate_limits": { 00:19:17.501 "rw_ios_per_sec": 0, 00:19:17.501 "rw_mbytes_per_sec": 0, 00:19:17.501 "r_mbytes_per_sec": 0, 00:19:17.501 "w_mbytes_per_sec": 0 00:19:17.501 }, 00:19:17.501 "claimed": false, 00:19:17.501 "zoned": false, 00:19:17.501 "supported_io_types": { 00:19:17.501 "read": true, 00:19:17.501 "write": true, 00:19:17.501 "unmap": true, 00:19:17.501 "flush": true, 00:19:17.501 "reset": true, 00:19:17.501 "nvme_admin": false, 00:19:17.501 "nvme_io": false, 00:19:17.501 "nvme_io_md": false, 00:19:17.501 "write_zeroes": true, 00:19:17.501 "zcopy": true, 00:19:17.501 "get_zone_info": false, 00:19:17.501 "zone_management": false, 00:19:17.501 "zone_append": false, 00:19:17.501 "compare": false, 00:19:17.501 "compare_and_write": false, 00:19:17.501 "abort": true, 00:19:17.501 "seek_hole": false, 00:19:17.501 "seek_data": false, 00:19:17.501 "copy": true, 00:19:17.501 "nvme_iov_md": false 00:19:17.501 }, 00:19:17.501 "memory_domains": [ 00:19:17.501 { 00:19:17.501 "dma_device_id": "system", 00:19:17.501 "dma_device_type": 1 00:19:17.501 }, 00:19:17.501 { 00:19:17.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.501 "dma_device_type": 2 00:19:17.501 } 00:19:17.501 ], 00:19:17.501 "driver_specific": {} 00:19:17.501 } 00:19:17.501 ] 00:19:17.501 18:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:17.501 18:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:17.501 18:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:17.501 18:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:17.760 BaseBdev3 00:19:17.760 18:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:17.760 18:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:17.760 18:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:17.760 18:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:17.760 18:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:17.760 18:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:17.760 18:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:18.018 18:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:18.277 [ 00:19:18.277 { 00:19:18.277 "name": "BaseBdev3", 00:19:18.277 "aliases": [ 00:19:18.277 "b67c6d12-fa4e-4dd0-9322-9e521e5101a0" 00:19:18.277 ], 00:19:18.277 "product_name": "Malloc disk", 00:19:18.277 "block_size": 512, 00:19:18.277 "num_blocks": 65536, 00:19:18.277 "uuid": "b67c6d12-fa4e-4dd0-9322-9e521e5101a0", 00:19:18.277 
"assigned_rate_limits": { 00:19:18.277 "rw_ios_per_sec": 0, 00:19:18.277 "rw_mbytes_per_sec": 0, 00:19:18.277 "r_mbytes_per_sec": 0, 00:19:18.277 "w_mbytes_per_sec": 0 00:19:18.277 }, 00:19:18.277 "claimed": false, 00:19:18.277 "zoned": false, 00:19:18.277 "supported_io_types": { 00:19:18.277 "read": true, 00:19:18.277 "write": true, 00:19:18.277 "unmap": true, 00:19:18.277 "flush": true, 00:19:18.277 "reset": true, 00:19:18.277 "nvme_admin": false, 00:19:18.277 "nvme_io": false, 00:19:18.277 "nvme_io_md": false, 00:19:18.277 "write_zeroes": true, 00:19:18.277 "zcopy": true, 00:19:18.277 "get_zone_info": false, 00:19:18.277 "zone_management": false, 00:19:18.277 "zone_append": false, 00:19:18.277 "compare": false, 00:19:18.277 "compare_and_write": false, 00:19:18.277 "abort": true, 00:19:18.277 "seek_hole": false, 00:19:18.277 "seek_data": false, 00:19:18.277 "copy": true, 00:19:18.277 "nvme_iov_md": false 00:19:18.277 }, 00:19:18.277 "memory_domains": [ 00:19:18.277 { 00:19:18.277 "dma_device_id": "system", 00:19:18.277 "dma_device_type": 1 00:19:18.277 }, 00:19:18.277 { 00:19:18.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.277 "dma_device_type": 2 00:19:18.277 } 00:19:18.277 ], 00:19:18.277 "driver_specific": {} 00:19:18.277 } 00:19:18.277 ] 00:19:18.277 18:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:18.277 18:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:18.277 18:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:18.277 18:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:18.277 [2024-07-25 18:46:18.832226] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:18.277 [2024-07-25 18:46:18.832299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:18.277 [2024-07-25 18:46:18.832343] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:18.277 [2024-07-25 18:46:18.834575] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:18.277 18:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:18.277 18:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:18.277 18:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:18.277 18:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:18.277 18:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:18.277 18:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:18.277 18:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:18.277 18:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:18.277 18:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:18.277 18:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:18.536 18:46:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.536 18:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.536 18:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:18.536 "name": "Existed_Raid", 00:19:18.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.536 "strip_size_kb": 64, 00:19:18.536 "state": "configuring", 00:19:18.536 "raid_level": "concat", 00:19:18.536 "superblock": false, 00:19:18.536 "num_base_bdevs": 3, 00:19:18.536 "num_base_bdevs_discovered": 2, 00:19:18.536 "num_base_bdevs_operational": 3, 00:19:18.536 "base_bdevs_list": [ 00:19:18.536 { 00:19:18.536 "name": "BaseBdev1", 00:19:18.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.536 "is_configured": false, 00:19:18.536 "data_offset": 0, 00:19:18.536 "data_size": 0 00:19:18.536 }, 00:19:18.536 { 00:19:18.536 "name": "BaseBdev2", 00:19:18.536 "uuid": "af13d134-113f-42d4-946a-fa08ae40f1cc", 00:19:18.536 "is_configured": true, 00:19:18.536 "data_offset": 0, 00:19:18.536 "data_size": 65536 00:19:18.536 }, 00:19:18.536 { 00:19:18.536 "name": "BaseBdev3", 00:19:18.536 "uuid": "b67c6d12-fa4e-4dd0-9322-9e521e5101a0", 00:19:18.536 "is_configured": true, 00:19:18.536 "data_offset": 0, 00:19:18.536 "data_size": 65536 00:19:18.536 } 00:19:18.536 ] 00:19:18.536 }' 00:19:18.536 18:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:18.536 18:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.104 18:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:19.362 [2024-07-25 18:46:19.872404] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:19.362 18:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:19.362 18:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:19.362 18:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:19.362 18:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:19.362 18:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:19.362 18:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:19.362 18:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:19.362 18:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:19.362 18:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:19.362 18:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:19.362 18:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.362 18:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.657 18:46:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:19.657 "name": "Existed_Raid", 00:19:19.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.657 "strip_size_kb": 64, 00:19:19.657 "state": "configuring", 00:19:19.657 "raid_level": "concat", 00:19:19.657 "superblock": false, 00:19:19.657 "num_base_bdevs": 3, 00:19:19.657 "num_base_bdevs_discovered": 1, 00:19:19.657 "num_base_bdevs_operational": 3, 00:19:19.657 "base_bdevs_list": [ 00:19:19.657 { 00:19:19.657 "name": "BaseBdev1", 00:19:19.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.657 "is_configured": false, 00:19:19.657 "data_offset": 0, 00:19:19.657 "data_size": 0 00:19:19.657 }, 00:19:19.657 { 00:19:19.657 "name": null, 00:19:19.657 "uuid": "af13d134-113f-42d4-946a-fa08ae40f1cc", 00:19:19.657 "is_configured": false, 00:19:19.657 "data_offset": 0, 00:19:19.657 "data_size": 65536 00:19:19.657 }, 00:19:19.657 { 00:19:19.657 "name": "BaseBdev3", 00:19:19.657 "uuid": "b67c6d12-fa4e-4dd0-9322-9e521e5101a0", 00:19:19.657 "is_configured": true, 00:19:19.657 "data_offset": 0, 00:19:19.657 "data_size": 65536 00:19:19.657 } 00:19:19.657 ] 00:19:19.657 }' 00:19:19.657 18:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:19.657 18:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.246 18:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:20.246 18:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.504 18:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:20.504 18:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:20.763 [2024-07-25 18:46:21.157552] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:20.763 BaseBdev1 00:19:20.763 18:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:20.763 18:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:20.763 18:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:20.763 18:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:20.763 18:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:20.763 18:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:20.763 18:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:21.022 18:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:21.280 [ 00:19:21.281 { 00:19:21.281 "name": "BaseBdev1", 00:19:21.281 "aliases": [ 00:19:21.281 "a2cdafe0-22fe-43fc-8af8-5f4e07b0c429" 00:19:21.281 ], 00:19:21.281 "product_name": "Malloc disk", 00:19:21.281 "block_size": 512, 00:19:21.281 "num_blocks": 65536, 00:19:21.281 "uuid": "a2cdafe0-22fe-43fc-8af8-5f4e07b0c429", 00:19:21.281 "assigned_rate_limits": { 00:19:21.281 
"rw_ios_per_sec": 0, 00:19:21.281 "rw_mbytes_per_sec": 0, 00:19:21.281 "r_mbytes_per_sec": 0, 00:19:21.281 "w_mbytes_per_sec": 0 00:19:21.281 }, 00:19:21.281 "claimed": true, 00:19:21.281 "claim_type": "exclusive_write", 00:19:21.281 "zoned": false, 00:19:21.281 "supported_io_types": { 00:19:21.281 "read": true, 00:19:21.281 "write": true, 00:19:21.281 "unmap": true, 00:19:21.281 "flush": true, 00:19:21.281 "reset": true, 00:19:21.281 "nvme_admin": false, 00:19:21.281 "nvme_io": false, 00:19:21.281 "nvme_io_md": false, 00:19:21.281 "write_zeroes": true, 00:19:21.281 "zcopy": true, 00:19:21.281 "get_zone_info": false, 00:19:21.281 "zone_management": false, 00:19:21.281 "zone_append": false, 00:19:21.281 "compare": false, 00:19:21.281 "compare_and_write": false, 00:19:21.281 "abort": true, 00:19:21.281 "seek_hole": false, 00:19:21.281 "seek_data": false, 00:19:21.281 "copy": true, 00:19:21.281 "nvme_iov_md": false 00:19:21.281 }, 00:19:21.281 "memory_domains": [ 00:19:21.281 { 00:19:21.281 "dma_device_id": "system", 00:19:21.281 "dma_device_type": 1 00:19:21.281 }, 00:19:21.281 { 00:19:21.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.281 "dma_device_type": 2 00:19:21.281 } 00:19:21.281 ], 00:19:21.281 "driver_specific": {} 00:19:21.281 } 00:19:21.281 ] 00:19:21.281 18:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:21.281 18:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:21.281 18:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:21.281 18:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:21.281 18:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:21.281 18:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:21.281 18:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:21.281 18:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:21.281 18:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:21.281 18:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:21.281 18:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:21.281 18:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.281 18:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.539 18:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:21.539 "name": "Existed_Raid", 00:19:21.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.539 "strip_size_kb": 64, 00:19:21.539 "state": "configuring", 00:19:21.539 "raid_level": "concat", 00:19:21.539 "superblock": false, 00:19:21.539 "num_base_bdevs": 3, 00:19:21.539 "num_base_bdevs_discovered": 2, 00:19:21.539 "num_base_bdevs_operational": 3, 00:19:21.539 "base_bdevs_list": [ 00:19:21.539 { 00:19:21.539 "name": "BaseBdev1", 00:19:21.539 "uuid": "a2cdafe0-22fe-43fc-8af8-5f4e07b0c429", 00:19:21.539 "is_configured": true, 00:19:21.539 "data_offset": 0, 00:19:21.539 
"data_size": 65536 00:19:21.539 }, 00:19:21.539 { 00:19:21.539 "name": null, 00:19:21.539 "uuid": "af13d134-113f-42d4-946a-fa08ae40f1cc", 00:19:21.539 "is_configured": false, 00:19:21.539 "data_offset": 0, 00:19:21.539 "data_size": 65536 00:19:21.539 }, 00:19:21.539 { 00:19:21.539 "name": "BaseBdev3", 00:19:21.539 "uuid": "b67c6d12-fa4e-4dd0-9322-9e521e5101a0", 00:19:21.539 "is_configured": true, 00:19:21.539 "data_offset": 0, 00:19:21.539 "data_size": 65536 00:19:21.539 } 00:19:21.539 ] 00:19:21.539 }' 00:19:21.539 18:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:21.539 18:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.105 18:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.105 18:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:22.364 18:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:22.364 18:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:22.622 [2024-07-25 18:46:22.966520] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:22.622 18:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:22.622 18:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:22.622 18:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:22.622 18:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:22.622 18:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:22.622 18:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:22.622 18:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:22.622 18:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:22.622 18:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:22.622 18:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:22.622 18:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.622 18:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:22.881 18:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:22.881 "name": "Existed_Raid", 00:19:22.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.881 "strip_size_kb": 64, 00:19:22.881 "state": "configuring", 00:19:22.881 "raid_level": "concat", 00:19:22.881 "superblock": false, 00:19:22.881 "num_base_bdevs": 3, 00:19:22.881 "num_base_bdevs_discovered": 1, 00:19:22.881 "num_base_bdevs_operational": 3, 00:19:22.881 "base_bdevs_list": [ 00:19:22.881 { 00:19:22.881 "name": "BaseBdev1", 00:19:22.881 "uuid": "a2cdafe0-22fe-43fc-8af8-5f4e07b0c429", 00:19:22.881 "is_configured": 
true, 00:19:22.881 "data_offset": 0, 00:19:22.881 "data_size": 65536 00:19:22.881 }, 00:19:22.881 { 00:19:22.881 "name": null, 00:19:22.881 "uuid": "af13d134-113f-42d4-946a-fa08ae40f1cc", 00:19:22.881 "is_configured": false, 00:19:22.881 "data_offset": 0, 00:19:22.881 "data_size": 65536 00:19:22.881 }, 00:19:22.881 { 00:19:22.881 "name": null, 00:19:22.881 "uuid": "b67c6d12-fa4e-4dd0-9322-9e521e5101a0", 00:19:22.881 "is_configured": false, 00:19:22.881 "data_offset": 0, 00:19:22.881 "data_size": 65536 00:19:22.881 } 00:19:22.881 ] 00:19:22.881 }' 00:19:22.881 18:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:22.881 18:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.447 18:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:23.447 18:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.706 18:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:23.706 18:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:23.965 [2024-07-25 18:46:24.326756] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:23.965 18:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:23.965 18:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:23.965 18:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:23.965 18:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:23.965 18:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:23.965 18:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:23.965 18:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:23.965 18:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:23.965 18:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:23.965 18:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:23.965 18:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.965 18:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.965 18:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:23.965 "name": "Existed_Raid", 00:19:23.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.965 "strip_size_kb": 64, 00:19:23.965 "state": "configuring", 00:19:23.965 "raid_level": "concat", 00:19:23.965 "superblock": false, 00:19:23.965 "num_base_bdevs": 3, 00:19:23.965 "num_base_bdevs_discovered": 2, 00:19:23.965 "num_base_bdevs_operational": 3, 00:19:23.965 "base_bdevs_list": [ 00:19:23.965 { 00:19:23.965 "name": "BaseBdev1", 00:19:23.965 
"uuid": "a2cdafe0-22fe-43fc-8af8-5f4e07b0c429", 00:19:23.965 "is_configured": true, 00:19:23.965 "data_offset": 0, 00:19:23.965 "data_size": 65536 00:19:23.965 }, 00:19:23.965 { 00:19:23.965 "name": null, 00:19:23.965 "uuid": "af13d134-113f-42d4-946a-fa08ae40f1cc", 00:19:23.965 "is_configured": false, 00:19:23.965 "data_offset": 0, 00:19:23.965 "data_size": 65536 00:19:23.965 }, 00:19:23.965 { 00:19:23.965 "name": "BaseBdev3", 00:19:23.965 "uuid": "b67c6d12-fa4e-4dd0-9322-9e521e5101a0", 00:19:23.965 "is_configured": true, 00:19:23.965 "data_offset": 0, 00:19:23.965 "data_size": 65536 00:19:23.965 } 00:19:23.965 ] 00:19:23.965 }' 00:19:23.965 18:46:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:23.965 18:46:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.533 18:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.533 18:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:25.100 18:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:25.100 18:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:25.100 [2024-07-25 18:46:25.591044] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:25.359 18:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:25.359 18:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:25.359 18:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:25.359 18:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:25.359 18:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:25.359 18:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:25.359 18:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:25.359 18:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:25.359 18:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:25.359 18:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:25.359 18:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.359 18:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.359 18:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:25.359 "name": "Existed_Raid", 00:19:25.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.359 "strip_size_kb": 64, 00:19:25.359 "state": "configuring", 00:19:25.359 "raid_level": "concat", 00:19:25.359 "superblock": false, 00:19:25.359 "num_base_bdevs": 3, 00:19:25.359 "num_base_bdevs_discovered": 1, 00:19:25.359 "num_base_bdevs_operational": 3, 00:19:25.359 "base_bdevs_list": [ 00:19:25.359 { 
00:19:25.359 "name": null, 00:19:25.359 "uuid": "a2cdafe0-22fe-43fc-8af8-5f4e07b0c429", 00:19:25.359 "is_configured": false, 00:19:25.359 "data_offset": 0, 00:19:25.359 "data_size": 65536 00:19:25.359 }, 00:19:25.359 { 00:19:25.359 "name": null, 00:19:25.359 "uuid": "af13d134-113f-42d4-946a-fa08ae40f1cc", 00:19:25.359 "is_configured": false, 00:19:25.359 "data_offset": 0, 00:19:25.359 "data_size": 65536 00:19:25.359 }, 00:19:25.359 { 00:19:25.359 "name": "BaseBdev3", 00:19:25.359 "uuid": "b67c6d12-fa4e-4dd0-9322-9e521e5101a0", 00:19:25.359 "is_configured": true, 00:19:25.359 "data_offset": 0, 00:19:25.359 "data_size": 65536 00:19:25.359 } 00:19:25.359 ] 00:19:25.359 }' 00:19:25.359 18:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:25.359 18:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.925 18:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.926 18:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:26.184 18:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:26.184 18:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:26.442 [2024-07-25 18:46:26.835138] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:26.442 18:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:26.442 18:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:26.442 18:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:26.442 18:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:26.442 18:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:26.442 18:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:26.442 18:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:26.442 18:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:26.442 18:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:26.442 18:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:26.442 18:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.442 18:46:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:26.700 18:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:26.700 "name": "Existed_Raid", 00:19:26.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.700 "strip_size_kb": 64, 00:19:26.700 "state": "configuring", 00:19:26.700 "raid_level": "concat", 00:19:26.700 "superblock": false, 00:19:26.700 "num_base_bdevs": 3, 00:19:26.700 "num_base_bdevs_discovered": 2, 00:19:26.700 
"num_base_bdevs_operational": 3, 00:19:26.700 "base_bdevs_list": [ 00:19:26.700 { 00:19:26.700 "name": null, 00:19:26.700 "uuid": "a2cdafe0-22fe-43fc-8af8-5f4e07b0c429", 00:19:26.700 "is_configured": false, 00:19:26.700 "data_offset": 0, 00:19:26.700 "data_size": 65536 00:19:26.700 }, 00:19:26.700 { 00:19:26.700 "name": "BaseBdev2", 00:19:26.700 "uuid": "af13d134-113f-42d4-946a-fa08ae40f1cc", 00:19:26.700 "is_configured": true, 00:19:26.700 "data_offset": 0, 00:19:26.700 "data_size": 65536 00:19:26.700 }, 00:19:26.700 { 00:19:26.700 "name": "BaseBdev3", 00:19:26.700 "uuid": "b67c6d12-fa4e-4dd0-9322-9e521e5101a0", 00:19:26.700 "is_configured": true, 00:19:26.700 "data_offset": 0, 00:19:26.700 "data_size": 65536 00:19:26.700 } 00:19:26.700 ] 00:19:26.700 }' 00:19:26.701 18:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:26.701 18:46:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.267 18:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.267 18:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:27.525 18:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:27.525 18:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:27.525 18:46:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.783 18:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u a2cdafe0-22fe-43fc-8af8-5f4e07b0c429 00:19:28.041 [2024-07-25 18:46:28.472102] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:28.041 [2024-07-25 18:46:28.472152] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:19:28.041 [2024-07-25 18:46:28.472160] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:28.041 [2024-07-25 18:46:28.472265] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:28.041 [2024-07-25 18:46:28.472591] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:19:28.041 [2024-07-25 18:46:28.472609] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:19:28.041 [2024-07-25 18:46:28.472840] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:28.041 NewBaseBdev 00:19:28.041 18:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:28.041 18:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:19:28.041 18:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:28.041 18:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:28.041 18:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:28.041 18:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:28.041 
18:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:28.299 18:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:28.557 [ 00:19:28.557 { 00:19:28.557 "name": "NewBaseBdev", 00:19:28.557 "aliases": [ 00:19:28.557 "a2cdafe0-22fe-43fc-8af8-5f4e07b0c429" 00:19:28.557 ], 00:19:28.557 "product_name": "Malloc disk", 00:19:28.557 "block_size": 512, 00:19:28.557 "num_blocks": 65536, 00:19:28.557 "uuid": "a2cdafe0-22fe-43fc-8af8-5f4e07b0c429", 00:19:28.557 "assigned_rate_limits": { 00:19:28.557 "rw_ios_per_sec": 0, 00:19:28.557 "rw_mbytes_per_sec": 0, 00:19:28.557 "r_mbytes_per_sec": 0, 00:19:28.557 "w_mbytes_per_sec": 0 00:19:28.557 }, 00:19:28.557 "claimed": true, 00:19:28.557 "claim_type": "exclusive_write", 00:19:28.557 "zoned": false, 00:19:28.557 "supported_io_types": { 00:19:28.557 "read": true, 00:19:28.557 "write": true, 00:19:28.557 "unmap": true, 00:19:28.557 "flush": true, 00:19:28.557 "reset": true, 00:19:28.557 "nvme_admin": false, 00:19:28.557 "nvme_io": false, 00:19:28.557 "nvme_io_md": false, 00:19:28.557 "write_zeroes": true, 00:19:28.557 "zcopy": true, 00:19:28.557 "get_zone_info": false, 00:19:28.557 "zone_management": false, 00:19:28.557 "zone_append": false, 00:19:28.557 "compare": false, 00:19:28.557 "compare_and_write": false, 00:19:28.557 "abort": true, 00:19:28.557 "seek_hole": false, 00:19:28.557 "seek_data": false, 00:19:28.557 "copy": true, 00:19:28.557 "nvme_iov_md": false 00:19:28.557 }, 00:19:28.557 "memory_domains": [ 00:19:28.557 { 00:19:28.557 "dma_device_id": "system", 00:19:28.557 "dma_device_type": 1 00:19:28.557 }, 00:19:28.557 { 00:19:28.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.557 "dma_device_type": 2 00:19:28.557 } 00:19:28.557 ], 00:19:28.557 "driver_specific": {} 00:19:28.557 } 00:19:28.557 ] 00:19:28.557 18:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:28.557 18:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:28.557 18:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:28.557 18:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:28.557 18:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:28.557 18:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:28.557 18:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:28.557 18:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:28.557 18:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:28.557 18:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:28.557 18:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:28.557 18:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.557 18:46:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.814 18:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:28.814 "name": "Existed_Raid", 00:19:28.814 "uuid": "bc135091-308e-46bc-b574-ae0411b579e7", 00:19:28.814 "strip_size_kb": 64, 00:19:28.814 "state": "online", 00:19:28.814 "raid_level": "concat", 00:19:28.814 "superblock": false, 00:19:28.814 "num_base_bdevs": 3, 00:19:28.814 "num_base_bdevs_discovered": 3, 00:19:28.814 "num_base_bdevs_operational": 3, 00:19:28.814 "base_bdevs_list": [ 00:19:28.814 { 00:19:28.814 "name": "NewBaseBdev", 00:19:28.814 "uuid": "a2cdafe0-22fe-43fc-8af8-5f4e07b0c429", 00:19:28.814 "is_configured": true, 00:19:28.814 "data_offset": 0, 00:19:28.814 "data_size": 65536 00:19:28.814 }, 00:19:28.814 { 00:19:28.814 "name": "BaseBdev2", 00:19:28.814 "uuid": "af13d134-113f-42d4-946a-fa08ae40f1cc", 00:19:28.814 "is_configured": true, 00:19:28.814 "data_offset": 0, 00:19:28.814 "data_size": 65536 00:19:28.814 }, 00:19:28.814 { 00:19:28.814 "name": "BaseBdev3", 00:19:28.814 "uuid": "b67c6d12-fa4e-4dd0-9322-9e521e5101a0", 00:19:28.814 "is_configured": true, 00:19:28.814 "data_offset": 0, 00:19:28.814 "data_size": 65536 00:19:28.814 } 00:19:28.814 ] 00:19:28.814 }' 00:19:28.814 18:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:28.814 18:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.380 18:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:29.380 18:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:29.380 18:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:29.380 18:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:29.380 18:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:29.380 18:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:29.380 18:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:29.380 18:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:29.380 [2024-07-25 18:46:29.892630] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:29.380 18:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:29.380 "name": "Existed_Raid", 00:19:29.380 "aliases": [ 00:19:29.380 "bc135091-308e-46bc-b574-ae0411b579e7" 00:19:29.380 ], 00:19:29.380 "product_name": "Raid Volume", 00:19:29.380 "block_size": 512, 00:19:29.380 "num_blocks": 196608, 00:19:29.380 "uuid": "bc135091-308e-46bc-b574-ae0411b579e7", 00:19:29.380 "assigned_rate_limits": { 00:19:29.380 "rw_ios_per_sec": 0, 00:19:29.380 "rw_mbytes_per_sec": 0, 00:19:29.380 "r_mbytes_per_sec": 0, 00:19:29.380 "w_mbytes_per_sec": 0 00:19:29.380 }, 00:19:29.380 "claimed": false, 00:19:29.380 "zoned": false, 00:19:29.380 "supported_io_types": { 00:19:29.380 "read": true, 00:19:29.380 "write": true, 00:19:29.380 "unmap": true, 00:19:29.380 "flush": true, 00:19:29.380 "reset": true, 00:19:29.380 "nvme_admin": false, 00:19:29.380 "nvme_io": false, 00:19:29.380 "nvme_io_md": false, 00:19:29.380 "write_zeroes": true, 00:19:29.380 
"zcopy": false, 00:19:29.380 "get_zone_info": false, 00:19:29.380 "zone_management": false, 00:19:29.380 "zone_append": false, 00:19:29.380 "compare": false, 00:19:29.380 "compare_and_write": false, 00:19:29.380 "abort": false, 00:19:29.380 "seek_hole": false, 00:19:29.380 "seek_data": false, 00:19:29.380 "copy": false, 00:19:29.380 "nvme_iov_md": false 00:19:29.380 }, 00:19:29.380 "memory_domains": [ 00:19:29.380 { 00:19:29.380 "dma_device_id": "system", 00:19:29.380 "dma_device_type": 1 00:19:29.380 }, 00:19:29.380 { 00:19:29.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.380 "dma_device_type": 2 00:19:29.380 }, 00:19:29.380 { 00:19:29.380 "dma_device_id": "system", 00:19:29.380 "dma_device_type": 1 00:19:29.380 }, 00:19:29.380 { 00:19:29.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.380 "dma_device_type": 2 00:19:29.380 }, 00:19:29.380 { 00:19:29.380 "dma_device_id": "system", 00:19:29.380 "dma_device_type": 1 00:19:29.380 }, 00:19:29.380 { 00:19:29.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.380 "dma_device_type": 2 00:19:29.380 } 00:19:29.380 ], 00:19:29.380 "driver_specific": { 00:19:29.380 "raid": { 00:19:29.380 "uuid": "bc135091-308e-46bc-b574-ae0411b579e7", 00:19:29.380 "strip_size_kb": 64, 00:19:29.380 "state": "online", 00:19:29.380 "raid_level": "concat", 00:19:29.380 "superblock": false, 00:19:29.380 "num_base_bdevs": 3, 00:19:29.380 "num_base_bdevs_discovered": 3, 00:19:29.380 "num_base_bdevs_operational": 3, 00:19:29.380 "base_bdevs_list": [ 00:19:29.380 { 00:19:29.380 "name": "NewBaseBdev", 00:19:29.380 "uuid": "a2cdafe0-22fe-43fc-8af8-5f4e07b0c429", 00:19:29.380 "is_configured": true, 00:19:29.380 "data_offset": 0, 00:19:29.380 "data_size": 65536 00:19:29.380 }, 00:19:29.380 { 00:19:29.380 "name": "BaseBdev2", 00:19:29.380 "uuid": "af13d134-113f-42d4-946a-fa08ae40f1cc", 00:19:29.380 "is_configured": true, 00:19:29.380 "data_offset": 0, 00:19:29.380 "data_size": 65536 00:19:29.380 }, 00:19:29.380 { 00:19:29.380 "name": "BaseBdev3", 00:19:29.380 "uuid": "b67c6d12-fa4e-4dd0-9322-9e521e5101a0", 00:19:29.380 "is_configured": true, 00:19:29.380 "data_offset": 0, 00:19:29.380 "data_size": 65536 00:19:29.380 } 00:19:29.380 ] 00:19:29.380 } 00:19:29.380 } 00:19:29.380 }' 00:19:29.380 18:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:29.638 18:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:29.638 BaseBdev2 00:19:29.638 BaseBdev3' 00:19:29.638 18:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:29.638 18:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:29.638 18:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:29.638 18:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:29.638 "name": "NewBaseBdev", 00:19:29.638 "aliases": [ 00:19:29.638 "a2cdafe0-22fe-43fc-8af8-5f4e07b0c429" 00:19:29.638 ], 00:19:29.638 "product_name": "Malloc disk", 00:19:29.638 "block_size": 512, 00:19:29.638 "num_blocks": 65536, 00:19:29.638 "uuid": "a2cdafe0-22fe-43fc-8af8-5f4e07b0c429", 00:19:29.638 "assigned_rate_limits": { 00:19:29.638 "rw_ios_per_sec": 0, 00:19:29.638 "rw_mbytes_per_sec": 0, 00:19:29.638 "r_mbytes_per_sec": 0, 00:19:29.638 
"w_mbytes_per_sec": 0 00:19:29.638 }, 00:19:29.638 "claimed": true, 00:19:29.638 "claim_type": "exclusive_write", 00:19:29.638 "zoned": false, 00:19:29.638 "supported_io_types": { 00:19:29.638 "read": true, 00:19:29.638 "write": true, 00:19:29.638 "unmap": true, 00:19:29.638 "flush": true, 00:19:29.638 "reset": true, 00:19:29.638 "nvme_admin": false, 00:19:29.638 "nvme_io": false, 00:19:29.638 "nvme_io_md": false, 00:19:29.638 "write_zeroes": true, 00:19:29.638 "zcopy": true, 00:19:29.638 "get_zone_info": false, 00:19:29.638 "zone_management": false, 00:19:29.638 "zone_append": false, 00:19:29.638 "compare": false, 00:19:29.638 "compare_and_write": false, 00:19:29.638 "abort": true, 00:19:29.638 "seek_hole": false, 00:19:29.638 "seek_data": false, 00:19:29.638 "copy": true, 00:19:29.638 "nvme_iov_md": false 00:19:29.638 }, 00:19:29.638 "memory_domains": [ 00:19:29.638 { 00:19:29.638 "dma_device_id": "system", 00:19:29.638 "dma_device_type": 1 00:19:29.638 }, 00:19:29.638 { 00:19:29.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.638 "dma_device_type": 2 00:19:29.638 } 00:19:29.638 ], 00:19:29.638 "driver_specific": {} 00:19:29.638 }' 00:19:29.638 18:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:29.897 18:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:29.897 18:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:29.897 18:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:29.897 18:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:29.897 18:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:29.897 18:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:29.897 18:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:30.155 18:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:30.155 18:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:30.155 18:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:30.155 18:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:30.155 18:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:30.155 18:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:30.155 18:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:30.413 18:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:30.413 "name": "BaseBdev2", 00:19:30.413 "aliases": [ 00:19:30.413 "af13d134-113f-42d4-946a-fa08ae40f1cc" 00:19:30.413 ], 00:19:30.413 "product_name": "Malloc disk", 00:19:30.413 "block_size": 512, 00:19:30.413 "num_blocks": 65536, 00:19:30.413 "uuid": "af13d134-113f-42d4-946a-fa08ae40f1cc", 00:19:30.413 "assigned_rate_limits": { 00:19:30.413 "rw_ios_per_sec": 0, 00:19:30.413 "rw_mbytes_per_sec": 0, 00:19:30.413 "r_mbytes_per_sec": 0, 00:19:30.413 "w_mbytes_per_sec": 0 00:19:30.413 }, 00:19:30.413 "claimed": true, 00:19:30.413 "claim_type": "exclusive_write", 00:19:30.413 "zoned": false, 00:19:30.413 "supported_io_types": { 00:19:30.413 "read": 
true, 00:19:30.413 "write": true, 00:19:30.413 "unmap": true, 00:19:30.413 "flush": true, 00:19:30.413 "reset": true, 00:19:30.413 "nvme_admin": false, 00:19:30.413 "nvme_io": false, 00:19:30.413 "nvme_io_md": false, 00:19:30.413 "write_zeroes": true, 00:19:30.413 "zcopy": true, 00:19:30.413 "get_zone_info": false, 00:19:30.413 "zone_management": false, 00:19:30.413 "zone_append": false, 00:19:30.413 "compare": false, 00:19:30.413 "compare_and_write": false, 00:19:30.413 "abort": true, 00:19:30.413 "seek_hole": false, 00:19:30.413 "seek_data": false, 00:19:30.413 "copy": true, 00:19:30.413 "nvme_iov_md": false 00:19:30.413 }, 00:19:30.413 "memory_domains": [ 00:19:30.413 { 00:19:30.413 "dma_device_id": "system", 00:19:30.413 "dma_device_type": 1 00:19:30.413 }, 00:19:30.413 { 00:19:30.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.413 "dma_device_type": 2 00:19:30.413 } 00:19:30.413 ], 00:19:30.413 "driver_specific": {} 00:19:30.413 }' 00:19:30.413 18:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:30.413 18:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:30.413 18:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:30.413 18:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:30.413 18:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:30.671 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:30.671 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:30.671 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:30.671 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:30.671 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:30.671 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:30.671 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:30.671 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:30.671 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:30.671 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:30.929 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:30.929 "name": "BaseBdev3", 00:19:30.929 "aliases": [ 00:19:30.929 "b67c6d12-fa4e-4dd0-9322-9e521e5101a0" 00:19:30.929 ], 00:19:30.929 "product_name": "Malloc disk", 00:19:30.929 "block_size": 512, 00:19:30.929 "num_blocks": 65536, 00:19:30.929 "uuid": "b67c6d12-fa4e-4dd0-9322-9e521e5101a0", 00:19:30.929 "assigned_rate_limits": { 00:19:30.929 "rw_ios_per_sec": 0, 00:19:30.929 "rw_mbytes_per_sec": 0, 00:19:30.929 "r_mbytes_per_sec": 0, 00:19:30.929 "w_mbytes_per_sec": 0 00:19:30.929 }, 00:19:30.929 "claimed": true, 00:19:30.929 "claim_type": "exclusive_write", 00:19:30.929 "zoned": false, 00:19:30.929 "supported_io_types": { 00:19:30.930 "read": true, 00:19:30.930 "write": true, 00:19:30.930 "unmap": true, 00:19:30.930 "flush": true, 00:19:30.930 "reset": true, 00:19:30.930 "nvme_admin": false, 00:19:30.930 "nvme_io": false, 00:19:30.930 
"nvme_io_md": false, 00:19:30.930 "write_zeroes": true, 00:19:30.930 "zcopy": true, 00:19:30.930 "get_zone_info": false, 00:19:30.930 "zone_management": false, 00:19:30.930 "zone_append": false, 00:19:30.930 "compare": false, 00:19:30.930 "compare_and_write": false, 00:19:30.930 "abort": true, 00:19:30.930 "seek_hole": false, 00:19:30.930 "seek_data": false, 00:19:30.930 "copy": true, 00:19:30.930 "nvme_iov_md": false 00:19:30.930 }, 00:19:30.930 "memory_domains": [ 00:19:30.930 { 00:19:30.930 "dma_device_id": "system", 00:19:30.930 "dma_device_type": 1 00:19:30.930 }, 00:19:30.930 { 00:19:30.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.930 "dma_device_type": 2 00:19:30.930 } 00:19:30.930 ], 00:19:30.930 "driver_specific": {} 00:19:30.930 }' 00:19:30.930 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:30.930 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:31.188 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:31.188 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:31.188 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:31.188 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:31.188 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:31.188 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:31.188 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:31.188 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:31.188 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:31.446 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:31.446 18:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:31.705 [2024-07-25 18:46:32.050279] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:31.705 [2024-07-25 18:46:32.050314] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:31.705 [2024-07-25 18:46:32.050398] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:31.705 [2024-07-25 18:46:32.050459] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:31.705 [2024-07-25 18:46:32.050468] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:19:31.705 18:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 127560 00:19:31.705 18:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 127560 ']' 00:19:31.705 18:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 127560 00:19:31.705 18:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:19:31.705 18:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:31.705 18:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 127560 00:19:31.705 
killing process with pid 127560 00:19:31.705 18:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:31.705 18:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:31.705 18:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 127560' 00:19:31.705 18:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 127560 00:19:31.705 [2024-07-25 18:46:32.094462] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:31.705 18:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 127560 00:19:31.963 [2024-07-25 18:46:32.333523] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:32.899 ************************************ 00:19:32.899 END TEST raid_state_function_test 00:19:32.899 ************************************ 00:19:32.899 18:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:19:32.899 00:19:32.899 real 0m28.843s 00:19:32.899 user 0m51.777s 00:19:32.899 sys 0m4.788s 00:19:32.899 18:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:32.899 18:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.899 18:46:33 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:19:32.899 18:46:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:32.899 18:46:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:32.899 18:46:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:33.159 ************************************ 00:19:33.159 START TEST raid_state_function_test_sb 00:19:33.159 ************************************ 00:19:33.159 18:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:19:33.159 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:19:33.159 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:19:33.159 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # 
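Both the concat state-function test that ends above and the superblock variant that starts here drive the same basic RPC sequence against the bdev_svc app. A condensed sketch of that sequence, assuming the RPC socket, Malloc sizes and bdev names that appear in this trace (the superblock variant simply adds -s to bdev_raid_create); the rpc wrapper and loop are illustrative, not the harness's own code:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
# Base devices: 32 MB Malloc disks with 512-byte blocks (65536 blocks each, as dumped above)
for b in BaseBdev1 BaseBdev2 BaseBdev3; do
    rpc bdev_malloc_create 32 512 -b "$b"
done
# Assemble a 3-disk concat array with a 64 KiB strip (no superblock in this variant)
rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
# With all base bdevs present the array should report "online"
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
# Tear the array down again before the app is shut down
rpc bdev_raid_delete Existed_Raid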
(( i++ )) 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=128532 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 128532' 00:19:33.160 Process raid pid: 128532 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 128532 /var/tmp/spdk-raid.sock 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 128532 ']' 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:33.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:33.160 18:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.160 [2024-07-25 18:46:33.584255] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
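The superblock test has just launched its own SPDK app: bdev_svc is started on a dedicated RPC socket with bdev_raid debug logging enabled, and waitforlisten blocks until that socket answers before any RPCs are sent (the EAL and reactor start-up messages that follow come from that app). A rough standalone equivalent of this step, using the paths from this run; the polling loop and the rpc_get_methods probe are illustrative stand-ins for the harness's waitforlisten helper:

sock=/var/tmp/spdk-raid.sock
# Start the bdev service app on its own RPC socket with RAID debug logs
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
raid_pid=$!
echo "Process raid pid: $raid_pid"
# Poll until the UNIX-domain RPC socket accepts requests
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done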
00:19:33.160 [2024-07-25 18:46:33.584482] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.418 [2024-07-25 18:46:33.772738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.675 [2024-07-25 18:46:34.002702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.675 [2024-07-25 18:46:34.199017] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:34.241 18:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:34.241 18:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:19:34.241 18:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:34.241 [2024-07-25 18:46:34.677863] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:34.241 [2024-07-25 18:46:34.677961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:34.241 [2024-07-25 18:46:34.677971] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:34.241 [2024-07-25 18:46:34.678000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:34.241 [2024-07-25 18:46:34.678008] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:34.241 [2024-07-25 18:46:34.678025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:34.241 18:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:34.241 18:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:34.241 18:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:34.241 18:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:34.241 18:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:34.241 18:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:34.241 18:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:34.241 18:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:34.241 18:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:34.241 18:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:34.241 18:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.241 18:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.501 18:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:34.501 "name": "Existed_Raid", 00:19:34.501 "uuid": 
"1c98d38a-45a9-4d70-b81c-b36ac3cc3467", 00:19:34.501 "strip_size_kb": 64, 00:19:34.501 "state": "configuring", 00:19:34.501 "raid_level": "concat", 00:19:34.501 "superblock": true, 00:19:34.501 "num_base_bdevs": 3, 00:19:34.501 "num_base_bdevs_discovered": 0, 00:19:34.501 "num_base_bdevs_operational": 3, 00:19:34.501 "base_bdevs_list": [ 00:19:34.501 { 00:19:34.501 "name": "BaseBdev1", 00:19:34.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.501 "is_configured": false, 00:19:34.501 "data_offset": 0, 00:19:34.501 "data_size": 0 00:19:34.501 }, 00:19:34.501 { 00:19:34.501 "name": "BaseBdev2", 00:19:34.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.501 "is_configured": false, 00:19:34.501 "data_offset": 0, 00:19:34.501 "data_size": 0 00:19:34.501 }, 00:19:34.501 { 00:19:34.501 "name": "BaseBdev3", 00:19:34.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.501 "is_configured": false, 00:19:34.501 "data_offset": 0, 00:19:34.501 "data_size": 0 00:19:34.501 } 00:19:34.501 ] 00:19:34.501 }' 00:19:34.501 18:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:34.501 18:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.121 18:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:35.121 [2024-07-25 18:46:35.693936] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:35.121 [2024-07-25 18:46:35.693983] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:19:35.378 18:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:35.635 [2024-07-25 18:46:35.969981] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:35.635 [2024-07-25 18:46:35.970049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:35.635 [2024-07-25 18:46:35.970059] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:35.635 [2024-07-25 18:46:35.970077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:35.635 [2024-07-25 18:46:35.970083] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:35.635 [2024-07-25 18:46:35.970106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:35.635 18:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:35.635 [2024-07-25 18:46:36.181048] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:35.635 BaseBdev1 00:19:35.635 18:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:35.635 18:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:35.635 18:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:35.635 18:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 
00:19:35.635 18:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:35.636 18:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:35.636 18:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:35.892 18:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:36.150 [ 00:19:36.150 { 00:19:36.150 "name": "BaseBdev1", 00:19:36.150 "aliases": [ 00:19:36.150 "7a99eb01-6fef-4851-ba81-275878962db8" 00:19:36.150 ], 00:19:36.150 "product_name": "Malloc disk", 00:19:36.150 "block_size": 512, 00:19:36.150 "num_blocks": 65536, 00:19:36.150 "uuid": "7a99eb01-6fef-4851-ba81-275878962db8", 00:19:36.150 "assigned_rate_limits": { 00:19:36.150 "rw_ios_per_sec": 0, 00:19:36.150 "rw_mbytes_per_sec": 0, 00:19:36.150 "r_mbytes_per_sec": 0, 00:19:36.150 "w_mbytes_per_sec": 0 00:19:36.150 }, 00:19:36.150 "claimed": true, 00:19:36.150 "claim_type": "exclusive_write", 00:19:36.150 "zoned": false, 00:19:36.150 "supported_io_types": { 00:19:36.150 "read": true, 00:19:36.150 "write": true, 00:19:36.150 "unmap": true, 00:19:36.150 "flush": true, 00:19:36.150 "reset": true, 00:19:36.150 "nvme_admin": false, 00:19:36.150 "nvme_io": false, 00:19:36.150 "nvme_io_md": false, 00:19:36.150 "write_zeroes": true, 00:19:36.150 "zcopy": true, 00:19:36.150 "get_zone_info": false, 00:19:36.150 "zone_management": false, 00:19:36.150 "zone_append": false, 00:19:36.150 "compare": false, 00:19:36.150 "compare_and_write": false, 00:19:36.150 "abort": true, 00:19:36.150 "seek_hole": false, 00:19:36.150 "seek_data": false, 00:19:36.150 "copy": true, 00:19:36.150 "nvme_iov_md": false 00:19:36.150 }, 00:19:36.150 "memory_domains": [ 00:19:36.150 { 00:19:36.150 "dma_device_id": "system", 00:19:36.150 "dma_device_type": 1 00:19:36.150 }, 00:19:36.150 { 00:19:36.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.150 "dma_device_type": 2 00:19:36.150 } 00:19:36.150 ], 00:19:36.150 "driver_specific": {} 00:19:36.150 } 00:19:36.150 ] 00:19:36.150 18:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:36.150 18:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:36.150 18:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:36.150 18:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:36.150 18:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:36.150 18:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:36.150 18:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:36.150 18:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:36.150 18:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:36.150 18:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:36.150 18:46:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:19:36.150 18:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.150 18:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.150 18:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:36.150 "name": "Existed_Raid", 00:19:36.150 "uuid": "fecf7f8c-b17a-4195-a2bf-31044febcd3c", 00:19:36.150 "strip_size_kb": 64, 00:19:36.150 "state": "configuring", 00:19:36.150 "raid_level": "concat", 00:19:36.150 "superblock": true, 00:19:36.150 "num_base_bdevs": 3, 00:19:36.150 "num_base_bdevs_discovered": 1, 00:19:36.150 "num_base_bdevs_operational": 3, 00:19:36.150 "base_bdevs_list": [ 00:19:36.150 { 00:19:36.150 "name": "BaseBdev1", 00:19:36.150 "uuid": "7a99eb01-6fef-4851-ba81-275878962db8", 00:19:36.150 "is_configured": true, 00:19:36.150 "data_offset": 2048, 00:19:36.150 "data_size": 63488 00:19:36.150 }, 00:19:36.150 { 00:19:36.150 "name": "BaseBdev2", 00:19:36.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.150 "is_configured": false, 00:19:36.150 "data_offset": 0, 00:19:36.150 "data_size": 0 00:19:36.150 }, 00:19:36.150 { 00:19:36.150 "name": "BaseBdev3", 00:19:36.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.150 "is_configured": false, 00:19:36.150 "data_offset": 0, 00:19:36.150 "data_size": 0 00:19:36.150 } 00:19:36.150 ] 00:19:36.150 }' 00:19:36.150 18:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:36.150 18:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.715 18:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:36.972 [2024-07-25 18:46:37.433268] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:36.972 [2024-07-25 18:46:37.433328] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:19:36.972 18:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:37.230 [2024-07-25 18:46:37.601340] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:37.230 [2024-07-25 18:46:37.603520] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:37.230 [2024-07-25 18:46:37.603602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:37.230 [2024-07-25 18:46:37.603611] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:37.230 [2024-07-25 18:46:37.603652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:37.230 18:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:37.230 18:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:37.230 18:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:37.230 18:46:37 
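The verify_raid_bdev_state helper being traced here fetches the array's descriptor with bdev_raid_get_bdevs and checks its state and shape against the expected values; at this point only BaseBdev1 exists, so Existed_Raid should still report "configuring" with one of three base bdevs discovered. A rough equivalent of that check, reusing the jq filter and field names visible in the trace (the rpc wrapper is illustrative):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
# Pull the descriptor for Existed_Raid out of the full RAID bdev list
info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
[[ $(jq -r .state <<< "$info") == configuring ]]
[[ $(jq -r .raid_level <<< "$info") == concat ]]
[[ $(jq -r .strip_size_kb <<< "$info") == 64 ]]
[[ $(jq -r .num_base_bdevs_operational <<< "$info") == 3 ]]
[[ $(jq -r .num_base_bdevs_discovered <<< "$info") == 1 ]]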
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:37.230 18:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:37.230 18:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:37.230 18:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:37.230 18:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:37.230 18:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:37.230 18:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:37.230 18:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:37.230 18:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:37.230 18:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.230 18:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.230 18:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:37.230 "name": "Existed_Raid", 00:19:37.230 "uuid": "c2f826a3-2945-4502-98f7-1c201b4cd7a2", 00:19:37.230 "strip_size_kb": 64, 00:19:37.230 "state": "configuring", 00:19:37.230 "raid_level": "concat", 00:19:37.230 "superblock": true, 00:19:37.230 "num_base_bdevs": 3, 00:19:37.230 "num_base_bdevs_discovered": 1, 00:19:37.230 "num_base_bdevs_operational": 3, 00:19:37.230 "base_bdevs_list": [ 00:19:37.230 { 00:19:37.230 "name": "BaseBdev1", 00:19:37.230 "uuid": "7a99eb01-6fef-4851-ba81-275878962db8", 00:19:37.230 "is_configured": true, 00:19:37.230 "data_offset": 2048, 00:19:37.230 "data_size": 63488 00:19:37.230 }, 00:19:37.230 { 00:19:37.230 "name": "BaseBdev2", 00:19:37.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.230 "is_configured": false, 00:19:37.230 "data_offset": 0, 00:19:37.230 "data_size": 0 00:19:37.230 }, 00:19:37.230 { 00:19:37.230 "name": "BaseBdev3", 00:19:37.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.230 "is_configured": false, 00:19:37.230 "data_offset": 0, 00:19:37.230 "data_size": 0 00:19:37.230 } 00:19:37.230 ] 00:19:37.230 }' 00:19:37.230 18:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:37.230 18:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.795 18:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:38.052 [2024-07-25 18:46:38.501654] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:38.052 BaseBdev2 00:19:38.052 18:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:38.052 18:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:38.052 18:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:38.052 18:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- 
# local i 00:19:38.052 18:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:38.052 18:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:38.052 18:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:38.310 18:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:38.310 [ 00:19:38.310 { 00:19:38.310 "name": "BaseBdev2", 00:19:38.310 "aliases": [ 00:19:38.310 "f594395f-9e79-4a04-912e-d0dd8f97d503" 00:19:38.310 ], 00:19:38.310 "product_name": "Malloc disk", 00:19:38.310 "block_size": 512, 00:19:38.310 "num_blocks": 65536, 00:19:38.310 "uuid": "f594395f-9e79-4a04-912e-d0dd8f97d503", 00:19:38.310 "assigned_rate_limits": { 00:19:38.310 "rw_ios_per_sec": 0, 00:19:38.310 "rw_mbytes_per_sec": 0, 00:19:38.310 "r_mbytes_per_sec": 0, 00:19:38.310 "w_mbytes_per_sec": 0 00:19:38.310 }, 00:19:38.310 "claimed": true, 00:19:38.310 "claim_type": "exclusive_write", 00:19:38.310 "zoned": false, 00:19:38.310 "supported_io_types": { 00:19:38.310 "read": true, 00:19:38.310 "write": true, 00:19:38.310 "unmap": true, 00:19:38.310 "flush": true, 00:19:38.310 "reset": true, 00:19:38.310 "nvme_admin": false, 00:19:38.310 "nvme_io": false, 00:19:38.310 "nvme_io_md": false, 00:19:38.310 "write_zeroes": true, 00:19:38.310 "zcopy": true, 00:19:38.310 "get_zone_info": false, 00:19:38.310 "zone_management": false, 00:19:38.310 "zone_append": false, 00:19:38.310 "compare": false, 00:19:38.310 "compare_and_write": false, 00:19:38.310 "abort": true, 00:19:38.310 "seek_hole": false, 00:19:38.310 "seek_data": false, 00:19:38.310 "copy": true, 00:19:38.310 "nvme_iov_md": false 00:19:38.310 }, 00:19:38.310 "memory_domains": [ 00:19:38.310 { 00:19:38.310 "dma_device_id": "system", 00:19:38.310 "dma_device_type": 1 00:19:38.310 }, 00:19:38.310 { 00:19:38.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.310 "dma_device_type": 2 00:19:38.310 } 00:19:38.310 ], 00:19:38.310 "driver_specific": {} 00:19:38.310 } 00:19:38.310 ] 00:19:38.310 18:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:38.310 18:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:38.310 18:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:38.310 18:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:38.310 18:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:38.310 18:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:38.310 18:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:38.310 18:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:38.310 18:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:38.310 18:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:38.310 18:46:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:38.310 18:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:38.310 18:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:38.310 18:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.310 18:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:38.567 18:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:38.567 "name": "Existed_Raid", 00:19:38.567 "uuid": "c2f826a3-2945-4502-98f7-1c201b4cd7a2", 00:19:38.567 "strip_size_kb": 64, 00:19:38.567 "state": "configuring", 00:19:38.567 "raid_level": "concat", 00:19:38.567 "superblock": true, 00:19:38.567 "num_base_bdevs": 3, 00:19:38.567 "num_base_bdevs_discovered": 2, 00:19:38.567 "num_base_bdevs_operational": 3, 00:19:38.567 "base_bdevs_list": [ 00:19:38.567 { 00:19:38.567 "name": "BaseBdev1", 00:19:38.567 "uuid": "7a99eb01-6fef-4851-ba81-275878962db8", 00:19:38.567 "is_configured": true, 00:19:38.567 "data_offset": 2048, 00:19:38.567 "data_size": 63488 00:19:38.567 }, 00:19:38.567 { 00:19:38.568 "name": "BaseBdev2", 00:19:38.568 "uuid": "f594395f-9e79-4a04-912e-d0dd8f97d503", 00:19:38.568 "is_configured": true, 00:19:38.568 "data_offset": 2048, 00:19:38.568 "data_size": 63488 00:19:38.568 }, 00:19:38.568 { 00:19:38.568 "name": "BaseBdev3", 00:19:38.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.568 "is_configured": false, 00:19:38.568 "data_offset": 0, 00:19:38.568 "data_size": 0 00:19:38.568 } 00:19:38.568 ] 00:19:38.568 }' 00:19:38.568 18:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:38.568 18:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.133 18:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:39.699 [2024-07-25 18:46:39.985888] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:39.699 [2024-07-25 18:46:39.986117] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:19:39.699 [2024-07-25 18:46:39.986129] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:39.699 [2024-07-25 18:46:39.986241] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:19:39.699 [2024-07-25 18:46:39.986564] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:19:39.699 [2024-07-25 18:46:39.986575] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:19:39.699 [2024-07-25 18:46:39.986730] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:39.699 BaseBdev3 00:19:39.699 18:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:39.699 18:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:39.699 18:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:39.699 18:46:40 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@901 -- # local i 00:19:39.699 18:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:39.699 18:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:39.699 18:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:39.699 18:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:39.957 [ 00:19:39.957 { 00:19:39.957 "name": "BaseBdev3", 00:19:39.957 "aliases": [ 00:19:39.957 "fb6fc49e-4496-4f8d-9416-f4ec8d6cb998" 00:19:39.957 ], 00:19:39.957 "product_name": "Malloc disk", 00:19:39.957 "block_size": 512, 00:19:39.957 "num_blocks": 65536, 00:19:39.957 "uuid": "fb6fc49e-4496-4f8d-9416-f4ec8d6cb998", 00:19:39.957 "assigned_rate_limits": { 00:19:39.957 "rw_ios_per_sec": 0, 00:19:39.957 "rw_mbytes_per_sec": 0, 00:19:39.957 "r_mbytes_per_sec": 0, 00:19:39.957 "w_mbytes_per_sec": 0 00:19:39.957 }, 00:19:39.957 "claimed": true, 00:19:39.957 "claim_type": "exclusive_write", 00:19:39.957 "zoned": false, 00:19:39.957 "supported_io_types": { 00:19:39.957 "read": true, 00:19:39.957 "write": true, 00:19:39.957 "unmap": true, 00:19:39.957 "flush": true, 00:19:39.957 "reset": true, 00:19:39.957 "nvme_admin": false, 00:19:39.957 "nvme_io": false, 00:19:39.958 "nvme_io_md": false, 00:19:39.958 "write_zeroes": true, 00:19:39.958 "zcopy": true, 00:19:39.958 "get_zone_info": false, 00:19:39.958 "zone_management": false, 00:19:39.958 "zone_append": false, 00:19:39.958 "compare": false, 00:19:39.958 "compare_and_write": false, 00:19:39.958 "abort": true, 00:19:39.958 "seek_hole": false, 00:19:39.958 "seek_data": false, 00:19:39.958 "copy": true, 00:19:39.958 "nvme_iov_md": false 00:19:39.958 }, 00:19:39.958 "memory_domains": [ 00:19:39.958 { 00:19:39.958 "dma_device_id": "system", 00:19:39.958 "dma_device_type": 1 00:19:39.958 }, 00:19:39.958 { 00:19:39.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.958 "dma_device_type": 2 00:19:39.958 } 00:19:39.958 ], 00:19:39.958 "driver_specific": {} 00:19:39.958 } 00:19:39.958 ] 00:19:39.958 18:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:39.958 18:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:39.958 18:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:39.958 18:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:39.958 18:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:39.958 18:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:39.958 18:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:39.958 18:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:39.958 18:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:39.958 18:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:39.958 18:46:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:39.958 18:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:39.958 18:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:39.958 18:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.958 18:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.216 18:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:40.216 "name": "Existed_Raid", 00:19:40.216 "uuid": "c2f826a3-2945-4502-98f7-1c201b4cd7a2", 00:19:40.216 "strip_size_kb": 64, 00:19:40.216 "state": "online", 00:19:40.216 "raid_level": "concat", 00:19:40.216 "superblock": true, 00:19:40.216 "num_base_bdevs": 3, 00:19:40.216 "num_base_bdevs_discovered": 3, 00:19:40.216 "num_base_bdevs_operational": 3, 00:19:40.216 "base_bdevs_list": [ 00:19:40.216 { 00:19:40.216 "name": "BaseBdev1", 00:19:40.216 "uuid": "7a99eb01-6fef-4851-ba81-275878962db8", 00:19:40.216 "is_configured": true, 00:19:40.216 "data_offset": 2048, 00:19:40.216 "data_size": 63488 00:19:40.216 }, 00:19:40.216 { 00:19:40.216 "name": "BaseBdev2", 00:19:40.216 "uuid": "f594395f-9e79-4a04-912e-d0dd8f97d503", 00:19:40.216 "is_configured": true, 00:19:40.216 "data_offset": 2048, 00:19:40.216 "data_size": 63488 00:19:40.216 }, 00:19:40.216 { 00:19:40.216 "name": "BaseBdev3", 00:19:40.216 "uuid": "fb6fc49e-4496-4f8d-9416-f4ec8d6cb998", 00:19:40.216 "is_configured": true, 00:19:40.216 "data_offset": 2048, 00:19:40.216 "data_size": 63488 00:19:40.216 } 00:19:40.216 ] 00:19:40.216 }' 00:19:40.216 18:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:40.216 18:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.783 18:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:40.783 18:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:40.783 18:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:40.783 18:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:40.783 18:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:40.783 18:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:40.783 18:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:40.783 18:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:41.041 [2024-07-25 18:46:41.554829] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:41.041 18:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:41.041 "name": "Existed_Raid", 00:19:41.041 "aliases": [ 00:19:41.041 "c2f826a3-2945-4502-98f7-1c201b4cd7a2" 00:19:41.041 ], 00:19:41.041 "product_name": "Raid Volume", 00:19:41.041 "block_size": 512, 00:19:41.041 "num_blocks": 190464, 00:19:41.041 "uuid": 
"c2f826a3-2945-4502-98f7-1c201b4cd7a2", 00:19:41.041 "assigned_rate_limits": { 00:19:41.041 "rw_ios_per_sec": 0, 00:19:41.041 "rw_mbytes_per_sec": 0, 00:19:41.041 "r_mbytes_per_sec": 0, 00:19:41.041 "w_mbytes_per_sec": 0 00:19:41.041 }, 00:19:41.041 "claimed": false, 00:19:41.041 "zoned": false, 00:19:41.041 "supported_io_types": { 00:19:41.041 "read": true, 00:19:41.041 "write": true, 00:19:41.041 "unmap": true, 00:19:41.041 "flush": true, 00:19:41.041 "reset": true, 00:19:41.041 "nvme_admin": false, 00:19:41.041 "nvme_io": false, 00:19:41.041 "nvme_io_md": false, 00:19:41.041 "write_zeroes": true, 00:19:41.041 "zcopy": false, 00:19:41.041 "get_zone_info": false, 00:19:41.041 "zone_management": false, 00:19:41.041 "zone_append": false, 00:19:41.041 "compare": false, 00:19:41.041 "compare_and_write": false, 00:19:41.041 "abort": false, 00:19:41.041 "seek_hole": false, 00:19:41.041 "seek_data": false, 00:19:41.041 "copy": false, 00:19:41.041 "nvme_iov_md": false 00:19:41.041 }, 00:19:41.041 "memory_domains": [ 00:19:41.041 { 00:19:41.041 "dma_device_id": "system", 00:19:41.041 "dma_device_type": 1 00:19:41.041 }, 00:19:41.041 { 00:19:41.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.041 "dma_device_type": 2 00:19:41.041 }, 00:19:41.041 { 00:19:41.041 "dma_device_id": "system", 00:19:41.041 "dma_device_type": 1 00:19:41.041 }, 00:19:41.041 { 00:19:41.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.041 "dma_device_type": 2 00:19:41.041 }, 00:19:41.041 { 00:19:41.041 "dma_device_id": "system", 00:19:41.041 "dma_device_type": 1 00:19:41.041 }, 00:19:41.041 { 00:19:41.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.041 "dma_device_type": 2 00:19:41.041 } 00:19:41.041 ], 00:19:41.041 "driver_specific": { 00:19:41.041 "raid": { 00:19:41.041 "uuid": "c2f826a3-2945-4502-98f7-1c201b4cd7a2", 00:19:41.041 "strip_size_kb": 64, 00:19:41.041 "state": "online", 00:19:41.041 "raid_level": "concat", 00:19:41.041 "superblock": true, 00:19:41.041 "num_base_bdevs": 3, 00:19:41.041 "num_base_bdevs_discovered": 3, 00:19:41.041 "num_base_bdevs_operational": 3, 00:19:41.041 "base_bdevs_list": [ 00:19:41.041 { 00:19:41.041 "name": "BaseBdev1", 00:19:41.041 "uuid": "7a99eb01-6fef-4851-ba81-275878962db8", 00:19:41.041 "is_configured": true, 00:19:41.041 "data_offset": 2048, 00:19:41.041 "data_size": 63488 00:19:41.041 }, 00:19:41.041 { 00:19:41.041 "name": "BaseBdev2", 00:19:41.041 "uuid": "f594395f-9e79-4a04-912e-d0dd8f97d503", 00:19:41.041 "is_configured": true, 00:19:41.041 "data_offset": 2048, 00:19:41.041 "data_size": 63488 00:19:41.041 }, 00:19:41.041 { 00:19:41.041 "name": "BaseBdev3", 00:19:41.041 "uuid": "fb6fc49e-4496-4f8d-9416-f4ec8d6cb998", 00:19:41.041 "is_configured": true, 00:19:41.041 "data_offset": 2048, 00:19:41.041 "data_size": 63488 00:19:41.041 } 00:19:41.041 ] 00:19:41.041 } 00:19:41.041 } 00:19:41.041 }' 00:19:41.041 18:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:41.299 18:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:41.299 BaseBdev2 00:19:41.299 BaseBdev3' 00:19:41.299 18:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:41.299 18:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:41.299 18:46:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:41.558 18:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:41.558 "name": "BaseBdev1", 00:19:41.558 "aliases": [ 00:19:41.558 "7a99eb01-6fef-4851-ba81-275878962db8" 00:19:41.558 ], 00:19:41.558 "product_name": "Malloc disk", 00:19:41.558 "block_size": 512, 00:19:41.558 "num_blocks": 65536, 00:19:41.558 "uuid": "7a99eb01-6fef-4851-ba81-275878962db8", 00:19:41.558 "assigned_rate_limits": { 00:19:41.558 "rw_ios_per_sec": 0, 00:19:41.558 "rw_mbytes_per_sec": 0, 00:19:41.558 "r_mbytes_per_sec": 0, 00:19:41.558 "w_mbytes_per_sec": 0 00:19:41.558 }, 00:19:41.558 "claimed": true, 00:19:41.558 "claim_type": "exclusive_write", 00:19:41.558 "zoned": false, 00:19:41.558 "supported_io_types": { 00:19:41.558 "read": true, 00:19:41.558 "write": true, 00:19:41.558 "unmap": true, 00:19:41.558 "flush": true, 00:19:41.558 "reset": true, 00:19:41.558 "nvme_admin": false, 00:19:41.558 "nvme_io": false, 00:19:41.558 "nvme_io_md": false, 00:19:41.558 "write_zeroes": true, 00:19:41.558 "zcopy": true, 00:19:41.558 "get_zone_info": false, 00:19:41.558 "zone_management": false, 00:19:41.558 "zone_append": false, 00:19:41.558 "compare": false, 00:19:41.558 "compare_and_write": false, 00:19:41.558 "abort": true, 00:19:41.558 "seek_hole": false, 00:19:41.558 "seek_data": false, 00:19:41.558 "copy": true, 00:19:41.558 "nvme_iov_md": false 00:19:41.558 }, 00:19:41.558 "memory_domains": [ 00:19:41.558 { 00:19:41.558 "dma_device_id": "system", 00:19:41.558 "dma_device_type": 1 00:19:41.558 }, 00:19:41.558 { 00:19:41.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.558 "dma_device_type": 2 00:19:41.558 } 00:19:41.558 ], 00:19:41.558 "driver_specific": {} 00:19:41.558 }' 00:19:41.558 18:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:41.558 18:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:41.558 18:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:41.558 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:41.558 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:41.558 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:41.558 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:41.816 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:41.816 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:41.816 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:41.816 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:41.816 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:41.816 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:41.816 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:41.816 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:42.074 18:46:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:42.074 "name": "BaseBdev2", 00:19:42.074 "aliases": [ 00:19:42.075 "f594395f-9e79-4a04-912e-d0dd8f97d503" 00:19:42.075 ], 00:19:42.075 "product_name": "Malloc disk", 00:19:42.075 "block_size": 512, 00:19:42.075 "num_blocks": 65536, 00:19:42.075 "uuid": "f594395f-9e79-4a04-912e-d0dd8f97d503", 00:19:42.075 "assigned_rate_limits": { 00:19:42.075 "rw_ios_per_sec": 0, 00:19:42.075 "rw_mbytes_per_sec": 0, 00:19:42.075 "r_mbytes_per_sec": 0, 00:19:42.075 "w_mbytes_per_sec": 0 00:19:42.075 }, 00:19:42.075 "claimed": true, 00:19:42.075 "claim_type": "exclusive_write", 00:19:42.075 "zoned": false, 00:19:42.075 "supported_io_types": { 00:19:42.075 "read": true, 00:19:42.075 "write": true, 00:19:42.075 "unmap": true, 00:19:42.075 "flush": true, 00:19:42.075 "reset": true, 00:19:42.075 "nvme_admin": false, 00:19:42.075 "nvme_io": false, 00:19:42.075 "nvme_io_md": false, 00:19:42.075 "write_zeroes": true, 00:19:42.075 "zcopy": true, 00:19:42.075 "get_zone_info": false, 00:19:42.075 "zone_management": false, 00:19:42.075 "zone_append": false, 00:19:42.075 "compare": false, 00:19:42.075 "compare_and_write": false, 00:19:42.075 "abort": true, 00:19:42.075 "seek_hole": false, 00:19:42.075 "seek_data": false, 00:19:42.075 "copy": true, 00:19:42.075 "nvme_iov_md": false 00:19:42.075 }, 00:19:42.075 "memory_domains": [ 00:19:42.075 { 00:19:42.075 "dma_device_id": "system", 00:19:42.075 "dma_device_type": 1 00:19:42.075 }, 00:19:42.075 { 00:19:42.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.075 "dma_device_type": 2 00:19:42.075 } 00:19:42.075 ], 00:19:42.075 "driver_specific": {} 00:19:42.075 }' 00:19:42.075 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:42.075 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:42.075 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:42.075 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:42.333 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:42.333 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:42.333 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:42.333 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:42.333 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:42.333 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:42.333 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:42.333 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:42.333 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:42.591 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:42.591 18:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:42.850 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:42.850 "name": "BaseBdev3", 00:19:42.850 "aliases": [ 00:19:42.850 
"fb6fc49e-4496-4f8d-9416-f4ec8d6cb998" 00:19:42.850 ], 00:19:42.850 "product_name": "Malloc disk", 00:19:42.850 "block_size": 512, 00:19:42.850 "num_blocks": 65536, 00:19:42.850 "uuid": "fb6fc49e-4496-4f8d-9416-f4ec8d6cb998", 00:19:42.850 "assigned_rate_limits": { 00:19:42.850 "rw_ios_per_sec": 0, 00:19:42.850 "rw_mbytes_per_sec": 0, 00:19:42.850 "r_mbytes_per_sec": 0, 00:19:42.850 "w_mbytes_per_sec": 0 00:19:42.850 }, 00:19:42.850 "claimed": true, 00:19:42.850 "claim_type": "exclusive_write", 00:19:42.850 "zoned": false, 00:19:42.850 "supported_io_types": { 00:19:42.850 "read": true, 00:19:42.850 "write": true, 00:19:42.850 "unmap": true, 00:19:42.850 "flush": true, 00:19:42.850 "reset": true, 00:19:42.850 "nvme_admin": false, 00:19:42.850 "nvme_io": false, 00:19:42.850 "nvme_io_md": false, 00:19:42.850 "write_zeroes": true, 00:19:42.850 "zcopy": true, 00:19:42.850 "get_zone_info": false, 00:19:42.850 "zone_management": false, 00:19:42.850 "zone_append": false, 00:19:42.850 "compare": false, 00:19:42.850 "compare_and_write": false, 00:19:42.850 "abort": true, 00:19:42.850 "seek_hole": false, 00:19:42.850 "seek_data": false, 00:19:42.850 "copy": true, 00:19:42.850 "nvme_iov_md": false 00:19:42.850 }, 00:19:42.850 "memory_domains": [ 00:19:42.850 { 00:19:42.850 "dma_device_id": "system", 00:19:42.850 "dma_device_type": 1 00:19:42.850 }, 00:19:42.850 { 00:19:42.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.850 "dma_device_type": 2 00:19:42.850 } 00:19:42.850 ], 00:19:42.850 "driver_specific": {} 00:19:42.850 }' 00:19:42.850 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:42.850 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:42.850 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:42.850 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:42.850 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:42.850 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:42.850 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:42.850 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:43.106 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:43.106 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:43.106 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:43.106 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:43.106 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:43.363 [2024-07-25 18:46:43.842992] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:43.363 [2024-07-25 18:46:43.843032] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:43.363 [2024-07-25 18:46:43.843116] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:43.621 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:43.621 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 
-- # has_redundancy concat 00:19:43.621 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:43.621 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:19:43.621 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:43.621 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:19:43.621 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:43.621 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:43.621 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:43.621 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:43.621 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:43.621 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:43.621 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:43.621 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:43.621 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:43.621 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.621 18:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.878 18:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:43.878 "name": "Existed_Raid", 00:19:43.878 "uuid": "c2f826a3-2945-4502-98f7-1c201b4cd7a2", 00:19:43.878 "strip_size_kb": 64, 00:19:43.878 "state": "offline", 00:19:43.878 "raid_level": "concat", 00:19:43.878 "superblock": true, 00:19:43.878 "num_base_bdevs": 3, 00:19:43.878 "num_base_bdevs_discovered": 2, 00:19:43.878 "num_base_bdevs_operational": 2, 00:19:43.878 "base_bdevs_list": [ 00:19:43.878 { 00:19:43.878 "name": null, 00:19:43.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.878 "is_configured": false, 00:19:43.878 "data_offset": 2048, 00:19:43.878 "data_size": 63488 00:19:43.878 }, 00:19:43.878 { 00:19:43.878 "name": "BaseBdev2", 00:19:43.878 "uuid": "f594395f-9e79-4a04-912e-d0dd8f97d503", 00:19:43.878 "is_configured": true, 00:19:43.878 "data_offset": 2048, 00:19:43.878 "data_size": 63488 00:19:43.878 }, 00:19:43.878 { 00:19:43.878 "name": "BaseBdev3", 00:19:43.878 "uuid": "fb6fc49e-4496-4f8d-9416-f4ec8d6cb998", 00:19:43.878 "is_configured": true, 00:19:43.878 "data_offset": 2048, 00:19:43.878 "data_size": 63488 00:19:43.878 } 00:19:43.878 ] 00:19:43.878 }' 00:19:43.878 18:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:43.878 18:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.444 18:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:44.444 18:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:44.444 18:46:44 bdev_raid.raid_state_function_test_sb -- 
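Here the test deletes BaseBdev1 out from under the array; because "concat" offers no redundancy, has_redundancy returns 1 and the expected state is "offline" rather than a degraded online state. A minimal sketch of that expectation check, under the same RPC-socket assumption as the previous snippet:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_malloc_delete BaseBdev1
# concat has no redundancy, so losing any base bdev must take the whole array offline.
tmp=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
[[ $(jq -r '.state' <<< "$tmp") == offline ]]
[[ $(jq -r '.num_base_bdevs_discovered' <<< "$tmp") == 2 ]]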
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.444 18:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:44.702 18:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:44.702 18:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:44.702 18:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:44.702 [2024-07-25 18:46:45.236273] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:44.961 18:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:44.961 18:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:44.961 18:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:44.961 18:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.221 18:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:45.221 18:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:45.221 18:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:45.480 [2024-07-25 18:46:45.833814] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:45.480 [2024-07-25 18:46:45.833879] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:19:45.480 18:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:45.480 18:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:45.480 18:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.480 18:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:45.739 18:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:45.739 18:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:45.739 18:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:19:45.739 18:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:45.739 18:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:45.739 18:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:45.999 BaseBdev2 00:19:45.999 18:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:45.999 18:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:45.999 18:46:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:45.999 18:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:45.999 18:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:45.999 18:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:45.999 18:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:46.259 18:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:46.259 [ 00:19:46.259 { 00:19:46.259 "name": "BaseBdev2", 00:19:46.259 "aliases": [ 00:19:46.259 "c17b3654-2e3f-412d-bf17-ee2b1a9a8554" 00:19:46.259 ], 00:19:46.259 "product_name": "Malloc disk", 00:19:46.259 "block_size": 512, 00:19:46.259 "num_blocks": 65536, 00:19:46.259 "uuid": "c17b3654-2e3f-412d-bf17-ee2b1a9a8554", 00:19:46.259 "assigned_rate_limits": { 00:19:46.259 "rw_ios_per_sec": 0, 00:19:46.259 "rw_mbytes_per_sec": 0, 00:19:46.259 "r_mbytes_per_sec": 0, 00:19:46.259 "w_mbytes_per_sec": 0 00:19:46.259 }, 00:19:46.259 "claimed": false, 00:19:46.259 "zoned": false, 00:19:46.259 "supported_io_types": { 00:19:46.259 "read": true, 00:19:46.259 "write": true, 00:19:46.259 "unmap": true, 00:19:46.259 "flush": true, 00:19:46.259 "reset": true, 00:19:46.259 "nvme_admin": false, 00:19:46.259 "nvme_io": false, 00:19:46.259 "nvme_io_md": false, 00:19:46.259 "write_zeroes": true, 00:19:46.259 "zcopy": true, 00:19:46.259 "get_zone_info": false, 00:19:46.259 "zone_management": false, 00:19:46.259 "zone_append": false, 00:19:46.259 "compare": false, 00:19:46.259 "compare_and_write": false, 00:19:46.259 "abort": true, 00:19:46.259 "seek_hole": false, 00:19:46.259 "seek_data": false, 00:19:46.259 "copy": true, 00:19:46.259 "nvme_iov_md": false 00:19:46.259 }, 00:19:46.259 "memory_domains": [ 00:19:46.259 { 00:19:46.259 "dma_device_id": "system", 00:19:46.259 "dma_device_type": 1 00:19:46.259 }, 00:19:46.259 { 00:19:46.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.259 "dma_device_type": 2 00:19:46.259 } 00:19:46.259 ], 00:19:46.259 "driver_specific": {} 00:19:46.259 } 00:19:46.259 ] 00:19:46.259 18:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:46.259 18:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:46.259 18:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:46.259 18:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:46.518 BaseBdev3 00:19:46.518 18:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:46.518 18:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:46.518 18:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:46.518 18:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:46.518 18:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:46.518 18:46:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:46.518 18:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:46.777 18:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:47.037 [ 00:19:47.037 { 00:19:47.037 "name": "BaseBdev3", 00:19:47.037 "aliases": [ 00:19:47.037 "74ccb8d8-0817-486b-8193-23e65a715793" 00:19:47.037 ], 00:19:47.037 "product_name": "Malloc disk", 00:19:47.037 "block_size": 512, 00:19:47.037 "num_blocks": 65536, 00:19:47.037 "uuid": "74ccb8d8-0817-486b-8193-23e65a715793", 00:19:47.037 "assigned_rate_limits": { 00:19:47.037 "rw_ios_per_sec": 0, 00:19:47.037 "rw_mbytes_per_sec": 0, 00:19:47.037 "r_mbytes_per_sec": 0, 00:19:47.037 "w_mbytes_per_sec": 0 00:19:47.037 }, 00:19:47.037 "claimed": false, 00:19:47.037 "zoned": false, 00:19:47.037 "supported_io_types": { 00:19:47.037 "read": true, 00:19:47.037 "write": true, 00:19:47.037 "unmap": true, 00:19:47.037 "flush": true, 00:19:47.037 "reset": true, 00:19:47.037 "nvme_admin": false, 00:19:47.037 "nvme_io": false, 00:19:47.037 "nvme_io_md": false, 00:19:47.037 "write_zeroes": true, 00:19:47.037 "zcopy": true, 00:19:47.037 "get_zone_info": false, 00:19:47.037 "zone_management": false, 00:19:47.037 "zone_append": false, 00:19:47.037 "compare": false, 00:19:47.037 "compare_and_write": false, 00:19:47.037 "abort": true, 00:19:47.037 "seek_hole": false, 00:19:47.037 "seek_data": false, 00:19:47.037 "copy": true, 00:19:47.037 "nvme_iov_md": false 00:19:47.037 }, 00:19:47.037 "memory_domains": [ 00:19:47.037 { 00:19:47.037 "dma_device_id": "system", 00:19:47.037 "dma_device_type": 1 00:19:47.037 }, 00:19:47.037 { 00:19:47.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.037 "dma_device_type": 2 00:19:47.037 } 00:19:47.037 ], 00:19:47.037 "driver_specific": {} 00:19:47.037 } 00:19:47.037 ] 00:19:47.037 18:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:47.037 18:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:47.037 18:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:47.037 18:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:47.037 [2024-07-25 18:46:47.601473] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:47.037 [2024-07-25 18:46:47.602211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:47.037 [2024-07-25 18:46:47.602294] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:47.037 [2024-07-25 18:46:47.604620] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:47.296 18:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:47.296 18:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:47.296 18:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # 
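At this point the test rebuilds from scratch: it creates two fresh 32 MiB malloc bdevs (65536 blocks of 512 bytes, matching the JSON above), then requests a concat array over three names while BaseBdev1 does not exist yet, so the raid bdev is expected to stay in the "configuring" state. A sketch of that sequence, again assuming the rpc.py path and socket from the trace:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for name in BaseBdev2 BaseBdev3; do
    $rpc bdev_malloc_create 32 512 -b "$name"        # 32 MiB, 512-byte blocks
    $rpc bdev_wait_for_examine
    $rpc bdev_get_bdevs -b "$name" -t 2000 > /dev/null   # waitforbdev-style readiness check
done
# -z 64: 64 KiB strip size, -s: write a superblock, -r concat: raid level.
# BaseBdev1 is still missing, so the array can only assemble partially.
$rpc bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
[[ $($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state') == configuring ]]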
local expected_state=configuring 00:19:47.296 18:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:47.296 18:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:47.296 18:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:47.296 18:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:47.296 18:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:47.296 18:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:47.296 18:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:47.296 18:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.296 18:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.296 18:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:47.296 "name": "Existed_Raid", 00:19:47.296 "uuid": "1297d57e-70df-4b78-a0d6-9b3bfe3ac254", 00:19:47.296 "strip_size_kb": 64, 00:19:47.296 "state": "configuring", 00:19:47.296 "raid_level": "concat", 00:19:47.296 "superblock": true, 00:19:47.296 "num_base_bdevs": 3, 00:19:47.296 "num_base_bdevs_discovered": 2, 00:19:47.296 "num_base_bdevs_operational": 3, 00:19:47.296 "base_bdevs_list": [ 00:19:47.296 { 00:19:47.296 "name": "BaseBdev1", 00:19:47.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.296 "is_configured": false, 00:19:47.296 "data_offset": 0, 00:19:47.296 "data_size": 0 00:19:47.296 }, 00:19:47.296 { 00:19:47.296 "name": "BaseBdev2", 00:19:47.296 "uuid": "c17b3654-2e3f-412d-bf17-ee2b1a9a8554", 00:19:47.296 "is_configured": true, 00:19:47.296 "data_offset": 2048, 00:19:47.296 "data_size": 63488 00:19:47.296 }, 00:19:47.296 { 00:19:47.296 "name": "BaseBdev3", 00:19:47.296 "uuid": "74ccb8d8-0817-486b-8193-23e65a715793", 00:19:47.296 "is_configured": true, 00:19:47.296 "data_offset": 2048, 00:19:47.296 "data_size": 63488 00:19:47.296 } 00:19:47.296 ] 00:19:47.296 }' 00:19:47.296 18:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:47.296 18:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.863 18:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:48.121 [2024-07-25 18:46:48.653621] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:48.121 18:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:48.121 18:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:48.121 18:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:48.121 18:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:48.121 18:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:48.121 18:46:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:48.121 18:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:48.121 18:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:48.121 18:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:48.121 18:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:48.121 18:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.121 18:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:48.687 18:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:48.687 "name": "Existed_Raid", 00:19:48.687 "uuid": "1297d57e-70df-4b78-a0d6-9b3bfe3ac254", 00:19:48.687 "strip_size_kb": 64, 00:19:48.688 "state": "configuring", 00:19:48.688 "raid_level": "concat", 00:19:48.688 "superblock": true, 00:19:48.688 "num_base_bdevs": 3, 00:19:48.688 "num_base_bdevs_discovered": 1, 00:19:48.688 "num_base_bdevs_operational": 3, 00:19:48.688 "base_bdevs_list": [ 00:19:48.688 { 00:19:48.688 "name": "BaseBdev1", 00:19:48.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.688 "is_configured": false, 00:19:48.688 "data_offset": 0, 00:19:48.688 "data_size": 0 00:19:48.688 }, 00:19:48.688 { 00:19:48.688 "name": null, 00:19:48.688 "uuid": "c17b3654-2e3f-412d-bf17-ee2b1a9a8554", 00:19:48.688 "is_configured": false, 00:19:48.688 "data_offset": 2048, 00:19:48.688 "data_size": 63488 00:19:48.688 }, 00:19:48.688 { 00:19:48.688 "name": "BaseBdev3", 00:19:48.688 "uuid": "74ccb8d8-0817-486b-8193-23e65a715793", 00:19:48.688 "is_configured": true, 00:19:48.688 "data_offset": 2048, 00:19:48.688 "data_size": 63488 00:19:48.688 } 00:19:48.688 ] 00:19:48.688 }' 00:19:48.688 18:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:48.688 18:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.253 18:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:49.253 18:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.512 18:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:49.512 18:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:49.770 [2024-07-25 18:46:50.119003] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:49.770 BaseBdev1 00:19:49.770 18:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:49.770 18:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:49.770 18:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:49.770 18:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:49.770 18:46:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:49.770 18:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:49.770 18:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:50.054 18:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:50.054 [ 00:19:50.054 { 00:19:50.054 "name": "BaseBdev1", 00:19:50.054 "aliases": [ 00:19:50.054 "bca95fd0-31ff-4d70-a3d6-be71a1dd429c" 00:19:50.054 ], 00:19:50.054 "product_name": "Malloc disk", 00:19:50.054 "block_size": 512, 00:19:50.054 "num_blocks": 65536, 00:19:50.054 "uuid": "bca95fd0-31ff-4d70-a3d6-be71a1dd429c", 00:19:50.054 "assigned_rate_limits": { 00:19:50.054 "rw_ios_per_sec": 0, 00:19:50.054 "rw_mbytes_per_sec": 0, 00:19:50.054 "r_mbytes_per_sec": 0, 00:19:50.054 "w_mbytes_per_sec": 0 00:19:50.054 }, 00:19:50.054 "claimed": true, 00:19:50.054 "claim_type": "exclusive_write", 00:19:50.054 "zoned": false, 00:19:50.054 "supported_io_types": { 00:19:50.054 "read": true, 00:19:50.054 "write": true, 00:19:50.054 "unmap": true, 00:19:50.054 "flush": true, 00:19:50.054 "reset": true, 00:19:50.054 "nvme_admin": false, 00:19:50.054 "nvme_io": false, 00:19:50.054 "nvme_io_md": false, 00:19:50.054 "write_zeroes": true, 00:19:50.054 "zcopy": true, 00:19:50.054 "get_zone_info": false, 00:19:50.054 "zone_management": false, 00:19:50.054 "zone_append": false, 00:19:50.054 "compare": false, 00:19:50.054 "compare_and_write": false, 00:19:50.054 "abort": true, 00:19:50.054 "seek_hole": false, 00:19:50.054 "seek_data": false, 00:19:50.054 "copy": true, 00:19:50.054 "nvme_iov_md": false 00:19:50.054 }, 00:19:50.054 "memory_domains": [ 00:19:50.054 { 00:19:50.054 "dma_device_id": "system", 00:19:50.054 "dma_device_type": 1 00:19:50.054 }, 00:19:50.054 { 00:19:50.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.054 "dma_device_type": 2 00:19:50.054 } 00:19:50.054 ], 00:19:50.054 "driver_specific": {} 00:19:50.054 } 00:19:50.054 ] 00:19:50.054 18:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:50.054 18:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:50.054 18:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:50.054 18:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:50.054 18:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:50.054 18:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:50.055 18:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:50.055 18:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:50.055 18:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:50.055 18:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:50.055 18:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local 
tmp 00:19:50.055 18:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.055 18:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:50.341 18:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:50.341 "name": "Existed_Raid", 00:19:50.341 "uuid": "1297d57e-70df-4b78-a0d6-9b3bfe3ac254", 00:19:50.341 "strip_size_kb": 64, 00:19:50.341 "state": "configuring", 00:19:50.341 "raid_level": "concat", 00:19:50.341 "superblock": true, 00:19:50.341 "num_base_bdevs": 3, 00:19:50.341 "num_base_bdevs_discovered": 2, 00:19:50.341 "num_base_bdevs_operational": 3, 00:19:50.341 "base_bdevs_list": [ 00:19:50.341 { 00:19:50.341 "name": "BaseBdev1", 00:19:50.341 "uuid": "bca95fd0-31ff-4d70-a3d6-be71a1dd429c", 00:19:50.341 "is_configured": true, 00:19:50.341 "data_offset": 2048, 00:19:50.341 "data_size": 63488 00:19:50.341 }, 00:19:50.341 { 00:19:50.341 "name": null, 00:19:50.341 "uuid": "c17b3654-2e3f-412d-bf17-ee2b1a9a8554", 00:19:50.341 "is_configured": false, 00:19:50.341 "data_offset": 2048, 00:19:50.341 "data_size": 63488 00:19:50.341 }, 00:19:50.341 { 00:19:50.341 "name": "BaseBdev3", 00:19:50.341 "uuid": "74ccb8d8-0817-486b-8193-23e65a715793", 00:19:50.341 "is_configured": true, 00:19:50.341 "data_offset": 2048, 00:19:50.341 "data_size": 63488 00:19:50.341 } 00:19:50.341 ] 00:19:50.341 }' 00:19:50.341 18:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:50.341 18:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.908 18:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.908 18:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:51.166 18:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:51.166 18:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:51.424 [2024-07-25 18:46:51.827547] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:51.424 18:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:51.424 18:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:51.424 18:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:51.424 18:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:51.424 18:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:51.424 18:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:51.424 18:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:51.424 18:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:51.424 18:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:19:51.424 18:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:51.424 18:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.424 18:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.683 18:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:51.683 "name": "Existed_Raid", 00:19:51.683 "uuid": "1297d57e-70df-4b78-a0d6-9b3bfe3ac254", 00:19:51.683 "strip_size_kb": 64, 00:19:51.683 "state": "configuring", 00:19:51.683 "raid_level": "concat", 00:19:51.683 "superblock": true, 00:19:51.683 "num_base_bdevs": 3, 00:19:51.683 "num_base_bdevs_discovered": 1, 00:19:51.683 "num_base_bdevs_operational": 3, 00:19:51.683 "base_bdevs_list": [ 00:19:51.683 { 00:19:51.683 "name": "BaseBdev1", 00:19:51.683 "uuid": "bca95fd0-31ff-4d70-a3d6-be71a1dd429c", 00:19:51.683 "is_configured": true, 00:19:51.683 "data_offset": 2048, 00:19:51.683 "data_size": 63488 00:19:51.683 }, 00:19:51.683 { 00:19:51.683 "name": null, 00:19:51.683 "uuid": "c17b3654-2e3f-412d-bf17-ee2b1a9a8554", 00:19:51.683 "is_configured": false, 00:19:51.683 "data_offset": 2048, 00:19:51.683 "data_size": 63488 00:19:51.683 }, 00:19:51.683 { 00:19:51.683 "name": null, 00:19:51.683 "uuid": "74ccb8d8-0817-486b-8193-23e65a715793", 00:19:51.683 "is_configured": false, 00:19:51.683 "data_offset": 2048, 00:19:51.683 "data_size": 63488 00:19:51.683 } 00:19:51.683 ] 00:19:51.683 }' 00:19:51.683 18:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:51.683 18:46:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.250 18:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:52.250 18:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.250 18:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:52.250 18:46:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:52.509 [2024-07-25 18:46:53.047801] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:52.509 18:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:52.509 18:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:52.509 18:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:52.509 18:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:52.509 18:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:52.509 18:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:52.509 18:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:52.509 18:46:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:52.509 18:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:52.509 18:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:52.509 18:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.509 18:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.767 18:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:52.767 "name": "Existed_Raid", 00:19:52.767 "uuid": "1297d57e-70df-4b78-a0d6-9b3bfe3ac254", 00:19:52.767 "strip_size_kb": 64, 00:19:52.767 "state": "configuring", 00:19:52.767 "raid_level": "concat", 00:19:52.767 "superblock": true, 00:19:52.767 "num_base_bdevs": 3, 00:19:52.767 "num_base_bdevs_discovered": 2, 00:19:52.767 "num_base_bdevs_operational": 3, 00:19:52.767 "base_bdevs_list": [ 00:19:52.767 { 00:19:52.767 "name": "BaseBdev1", 00:19:52.767 "uuid": "bca95fd0-31ff-4d70-a3d6-be71a1dd429c", 00:19:52.767 "is_configured": true, 00:19:52.767 "data_offset": 2048, 00:19:52.767 "data_size": 63488 00:19:52.767 }, 00:19:52.767 { 00:19:52.767 "name": null, 00:19:52.768 "uuid": "c17b3654-2e3f-412d-bf17-ee2b1a9a8554", 00:19:52.768 "is_configured": false, 00:19:52.768 "data_offset": 2048, 00:19:52.768 "data_size": 63488 00:19:52.768 }, 00:19:52.768 { 00:19:52.768 "name": "BaseBdev3", 00:19:52.768 "uuid": "74ccb8d8-0817-486b-8193-23e65a715793", 00:19:52.768 "is_configured": true, 00:19:52.768 "data_offset": 2048, 00:19:52.768 "data_size": 63488 00:19:52.768 } 00:19:52.768 ] 00:19:52.768 }' 00:19:52.768 18:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:52.768 18:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.331 18:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.332 18:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:53.590 18:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:53.590 18:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:53.847 [2024-07-25 18:46:54.184059] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:53.847 18:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:53.847 18:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:53.847 18:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:53.847 18:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:53.847 18:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:53.847 18:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:53.847 18:46:54 bdev_raid.raid_state_function_test_sb -- 
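bdev_raid_remove_base_bdev and bdev_raid_add_base_bdev are exercised here against an array that is still configuring: removing BaseBdev3 drops num_base_bdevs_discovered to 1 (the slot remains but its name becomes null), and re-adding it claims the bdev into the free slot and brings the count back to 2, with the state staying "configuring" because BaseBdev1 is still absent. Roughly, under the same assumptions as above:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_raid_remove_base_bdev BaseBdev3             # slot stays, name becomes null
$rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3   # bdev is claimed into the free slot again
tmp=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
[[ $(jq -r '.state' <<< "$tmp") == configuring ]]
[[ $(jq -r '.num_base_bdevs_discovered' <<< "$tmp") == 2 ]]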
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:53.847 18:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:53.847 18:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:53.847 18:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:53.847 18:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.848 18:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:54.105 18:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:54.105 "name": "Existed_Raid", 00:19:54.105 "uuid": "1297d57e-70df-4b78-a0d6-9b3bfe3ac254", 00:19:54.105 "strip_size_kb": 64, 00:19:54.105 "state": "configuring", 00:19:54.105 "raid_level": "concat", 00:19:54.105 "superblock": true, 00:19:54.105 "num_base_bdevs": 3, 00:19:54.105 "num_base_bdevs_discovered": 1, 00:19:54.105 "num_base_bdevs_operational": 3, 00:19:54.105 "base_bdevs_list": [ 00:19:54.105 { 00:19:54.105 "name": null, 00:19:54.105 "uuid": "bca95fd0-31ff-4d70-a3d6-be71a1dd429c", 00:19:54.105 "is_configured": false, 00:19:54.105 "data_offset": 2048, 00:19:54.105 "data_size": 63488 00:19:54.105 }, 00:19:54.105 { 00:19:54.105 "name": null, 00:19:54.105 "uuid": "c17b3654-2e3f-412d-bf17-ee2b1a9a8554", 00:19:54.105 "is_configured": false, 00:19:54.105 "data_offset": 2048, 00:19:54.105 "data_size": 63488 00:19:54.105 }, 00:19:54.105 { 00:19:54.105 "name": "BaseBdev3", 00:19:54.105 "uuid": "74ccb8d8-0817-486b-8193-23e65a715793", 00:19:54.105 "is_configured": true, 00:19:54.105 "data_offset": 2048, 00:19:54.105 "data_size": 63488 00:19:54.105 } 00:19:54.105 ] 00:19:54.105 }' 00:19:54.105 18:46:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:54.105 18:46:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.669 18:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.669 18:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:54.926 18:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:54.926 18:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:55.184 [2024-07-25 18:46:55.548422] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:55.184 18:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:55.184 18:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:55.184 18:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:55.184 18:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:55.184 18:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:55.184 18:46:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:55.184 18:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:55.184 18:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:55.184 18:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:55.184 18:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:55.184 18:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.184 18:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:55.442 18:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:55.442 "name": "Existed_Raid", 00:19:55.442 "uuid": "1297d57e-70df-4b78-a0d6-9b3bfe3ac254", 00:19:55.442 "strip_size_kb": 64, 00:19:55.442 "state": "configuring", 00:19:55.442 "raid_level": "concat", 00:19:55.442 "superblock": true, 00:19:55.442 "num_base_bdevs": 3, 00:19:55.442 "num_base_bdevs_discovered": 2, 00:19:55.442 "num_base_bdevs_operational": 3, 00:19:55.442 "base_bdevs_list": [ 00:19:55.442 { 00:19:55.442 "name": null, 00:19:55.442 "uuid": "bca95fd0-31ff-4d70-a3d6-be71a1dd429c", 00:19:55.442 "is_configured": false, 00:19:55.442 "data_offset": 2048, 00:19:55.442 "data_size": 63488 00:19:55.442 }, 00:19:55.442 { 00:19:55.442 "name": "BaseBdev2", 00:19:55.442 "uuid": "c17b3654-2e3f-412d-bf17-ee2b1a9a8554", 00:19:55.442 "is_configured": true, 00:19:55.442 "data_offset": 2048, 00:19:55.443 "data_size": 63488 00:19:55.443 }, 00:19:55.443 { 00:19:55.443 "name": "BaseBdev3", 00:19:55.443 "uuid": "74ccb8d8-0817-486b-8193-23e65a715793", 00:19:55.443 "is_configured": true, 00:19:55.443 "data_offset": 2048, 00:19:55.443 "data_size": 63488 00:19:55.443 } 00:19:55.443 ] 00:19:55.443 }' 00:19:55.443 18:46:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:55.443 18:46:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.008 18:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.008 18:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:56.266 18:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:56.266 18:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.266 18:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:56.525 18:46:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u bca95fd0-31ff-4d70-a3d6-be71a1dd429c 00:19:56.525 [2024-07-25 18:46:57.093767] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:56.525 [2024-07-25 18:46:57.094071] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:19:56.525 
[2024-07-25 18:46:57.094084] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:56.525 [2024-07-25 18:46:57.094197] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:56.525 [2024-07-25 18:46:57.094536] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:19:56.525 [2024-07-25 18:46:57.094547] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:19:56.525 [2024-07-25 18:46:57.094683] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:56.525 NewBaseBdev 00:19:56.784 18:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:56.784 18:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:19:56.784 18:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:56.784 18:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:56.784 18:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:56.784 18:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:56.784 18:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:56.784 18:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:57.043 [ 00:19:57.043 { 00:19:57.043 "name": "NewBaseBdev", 00:19:57.043 "aliases": [ 00:19:57.043 "bca95fd0-31ff-4d70-a3d6-be71a1dd429c" 00:19:57.043 ], 00:19:57.043 "product_name": "Malloc disk", 00:19:57.043 "block_size": 512, 00:19:57.043 "num_blocks": 65536, 00:19:57.043 "uuid": "bca95fd0-31ff-4d70-a3d6-be71a1dd429c", 00:19:57.043 "assigned_rate_limits": { 00:19:57.043 "rw_ios_per_sec": 0, 00:19:57.043 "rw_mbytes_per_sec": 0, 00:19:57.043 "r_mbytes_per_sec": 0, 00:19:57.043 "w_mbytes_per_sec": 0 00:19:57.043 }, 00:19:57.043 "claimed": true, 00:19:57.043 "claim_type": "exclusive_write", 00:19:57.043 "zoned": false, 00:19:57.043 "supported_io_types": { 00:19:57.043 "read": true, 00:19:57.043 "write": true, 00:19:57.043 "unmap": true, 00:19:57.043 "flush": true, 00:19:57.043 "reset": true, 00:19:57.043 "nvme_admin": false, 00:19:57.043 "nvme_io": false, 00:19:57.043 "nvme_io_md": false, 00:19:57.043 "write_zeroes": true, 00:19:57.043 "zcopy": true, 00:19:57.043 "get_zone_info": false, 00:19:57.043 "zone_management": false, 00:19:57.043 "zone_append": false, 00:19:57.043 "compare": false, 00:19:57.043 "compare_and_write": false, 00:19:57.043 "abort": true, 00:19:57.043 "seek_hole": false, 00:19:57.043 "seek_data": false, 00:19:57.043 "copy": true, 00:19:57.043 "nvme_iov_md": false 00:19:57.043 }, 00:19:57.043 "memory_domains": [ 00:19:57.043 { 00:19:57.043 "dma_device_id": "system", 00:19:57.043 "dma_device_type": 1 00:19:57.043 }, 00:19:57.043 { 00:19:57.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.043 "dma_device_type": 2 00:19:57.043 } 00:19:57.043 ], 00:19:57.043 "driver_specific": {} 00:19:57.043 } 00:19:57.043 ] 00:19:57.043 18:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:57.043 18:46:57 
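The empty first slot is filled by creating a malloc bdev whose UUID is forced (-u) to the UUID the superblock recorded for that slot, so the array recognises it, finishes assembling, and comes online with 190464 blocks (3 x 63488 data blocks). A sketch of that final step, under the same assumptions as the earlier snippets:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Recover the UUID expected in the vacant slot, then create a bdev that carries it.
uuid=$($rpc bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')
$rpc bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"
$rpc bdev_wait_for_examine
[[ $($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state') == online ]]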
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:57.043 18:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:57.043 18:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:57.043 18:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:57.043 18:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:57.043 18:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:57.043 18:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:57.043 18:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:57.043 18:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:57.043 18:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:57.043 18:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.043 18:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:57.302 18:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:57.302 "name": "Existed_Raid", 00:19:57.302 "uuid": "1297d57e-70df-4b78-a0d6-9b3bfe3ac254", 00:19:57.302 "strip_size_kb": 64, 00:19:57.302 "state": "online", 00:19:57.302 "raid_level": "concat", 00:19:57.302 "superblock": true, 00:19:57.302 "num_base_bdevs": 3, 00:19:57.302 "num_base_bdevs_discovered": 3, 00:19:57.302 "num_base_bdevs_operational": 3, 00:19:57.302 "base_bdevs_list": [ 00:19:57.302 { 00:19:57.302 "name": "NewBaseBdev", 00:19:57.302 "uuid": "bca95fd0-31ff-4d70-a3d6-be71a1dd429c", 00:19:57.302 "is_configured": true, 00:19:57.302 "data_offset": 2048, 00:19:57.302 "data_size": 63488 00:19:57.302 }, 00:19:57.302 { 00:19:57.302 "name": "BaseBdev2", 00:19:57.302 "uuid": "c17b3654-2e3f-412d-bf17-ee2b1a9a8554", 00:19:57.302 "is_configured": true, 00:19:57.302 "data_offset": 2048, 00:19:57.302 "data_size": 63488 00:19:57.302 }, 00:19:57.302 { 00:19:57.302 "name": "BaseBdev3", 00:19:57.302 "uuid": "74ccb8d8-0817-486b-8193-23e65a715793", 00:19:57.302 "is_configured": true, 00:19:57.302 "data_offset": 2048, 00:19:57.302 "data_size": 63488 00:19:57.302 } 00:19:57.302 ] 00:19:57.302 }' 00:19:57.302 18:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:57.302 18:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.869 18:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:57.869 18:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:57.869 18:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:57.870 18:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:57.870 18:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:57.870 18:46:58 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@198 -- # local name 00:19:57.870 18:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:57.870 18:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:58.129 [2024-07-25 18:46:58.582424] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:58.129 18:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:58.129 "name": "Existed_Raid", 00:19:58.129 "aliases": [ 00:19:58.129 "1297d57e-70df-4b78-a0d6-9b3bfe3ac254" 00:19:58.129 ], 00:19:58.129 "product_name": "Raid Volume", 00:19:58.129 "block_size": 512, 00:19:58.129 "num_blocks": 190464, 00:19:58.129 "uuid": "1297d57e-70df-4b78-a0d6-9b3bfe3ac254", 00:19:58.129 "assigned_rate_limits": { 00:19:58.129 "rw_ios_per_sec": 0, 00:19:58.129 "rw_mbytes_per_sec": 0, 00:19:58.129 "r_mbytes_per_sec": 0, 00:19:58.129 "w_mbytes_per_sec": 0 00:19:58.129 }, 00:19:58.129 "claimed": false, 00:19:58.129 "zoned": false, 00:19:58.129 "supported_io_types": { 00:19:58.129 "read": true, 00:19:58.129 "write": true, 00:19:58.129 "unmap": true, 00:19:58.129 "flush": true, 00:19:58.129 "reset": true, 00:19:58.129 "nvme_admin": false, 00:19:58.129 "nvme_io": false, 00:19:58.129 "nvme_io_md": false, 00:19:58.129 "write_zeroes": true, 00:19:58.129 "zcopy": false, 00:19:58.129 "get_zone_info": false, 00:19:58.129 "zone_management": false, 00:19:58.129 "zone_append": false, 00:19:58.129 "compare": false, 00:19:58.129 "compare_and_write": false, 00:19:58.129 "abort": false, 00:19:58.129 "seek_hole": false, 00:19:58.129 "seek_data": false, 00:19:58.129 "copy": false, 00:19:58.129 "nvme_iov_md": false 00:19:58.129 }, 00:19:58.129 "memory_domains": [ 00:19:58.129 { 00:19:58.129 "dma_device_id": "system", 00:19:58.129 "dma_device_type": 1 00:19:58.129 }, 00:19:58.129 { 00:19:58.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.129 "dma_device_type": 2 00:19:58.129 }, 00:19:58.129 { 00:19:58.129 "dma_device_id": "system", 00:19:58.129 "dma_device_type": 1 00:19:58.129 }, 00:19:58.129 { 00:19:58.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.129 "dma_device_type": 2 00:19:58.129 }, 00:19:58.129 { 00:19:58.129 "dma_device_id": "system", 00:19:58.129 "dma_device_type": 1 00:19:58.129 }, 00:19:58.129 { 00:19:58.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.129 "dma_device_type": 2 00:19:58.129 } 00:19:58.129 ], 00:19:58.129 "driver_specific": { 00:19:58.129 "raid": { 00:19:58.129 "uuid": "1297d57e-70df-4b78-a0d6-9b3bfe3ac254", 00:19:58.129 "strip_size_kb": 64, 00:19:58.129 "state": "online", 00:19:58.129 "raid_level": "concat", 00:19:58.129 "superblock": true, 00:19:58.129 "num_base_bdevs": 3, 00:19:58.129 "num_base_bdevs_discovered": 3, 00:19:58.129 "num_base_bdevs_operational": 3, 00:19:58.129 "base_bdevs_list": [ 00:19:58.129 { 00:19:58.129 "name": "NewBaseBdev", 00:19:58.129 "uuid": "bca95fd0-31ff-4d70-a3d6-be71a1dd429c", 00:19:58.129 "is_configured": true, 00:19:58.129 "data_offset": 2048, 00:19:58.129 "data_size": 63488 00:19:58.129 }, 00:19:58.129 { 00:19:58.129 "name": "BaseBdev2", 00:19:58.129 "uuid": "c17b3654-2e3f-412d-bf17-ee2b1a9a8554", 00:19:58.129 "is_configured": true, 00:19:58.129 "data_offset": 2048, 00:19:58.129 "data_size": 63488 00:19:58.129 }, 00:19:58.129 { 00:19:58.129 "name": "BaseBdev3", 00:19:58.129 "uuid": "74ccb8d8-0817-486b-8193-23e65a715793", 00:19:58.129 "is_configured": 
true, 00:19:58.129 "data_offset": 2048, 00:19:58.129 "data_size": 63488 00:19:58.129 } 00:19:58.129 ] 00:19:58.129 } 00:19:58.129 } 00:19:58.129 }' 00:19:58.129 18:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:58.129 18:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:58.129 BaseBdev2 00:19:58.129 BaseBdev3' 00:19:58.129 18:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:58.129 18:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:58.129 18:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:58.388 18:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:58.388 "name": "NewBaseBdev", 00:19:58.388 "aliases": [ 00:19:58.388 "bca95fd0-31ff-4d70-a3d6-be71a1dd429c" 00:19:58.388 ], 00:19:58.388 "product_name": "Malloc disk", 00:19:58.388 "block_size": 512, 00:19:58.388 "num_blocks": 65536, 00:19:58.388 "uuid": "bca95fd0-31ff-4d70-a3d6-be71a1dd429c", 00:19:58.388 "assigned_rate_limits": { 00:19:58.388 "rw_ios_per_sec": 0, 00:19:58.388 "rw_mbytes_per_sec": 0, 00:19:58.388 "r_mbytes_per_sec": 0, 00:19:58.388 "w_mbytes_per_sec": 0 00:19:58.388 }, 00:19:58.388 "claimed": true, 00:19:58.388 "claim_type": "exclusive_write", 00:19:58.388 "zoned": false, 00:19:58.388 "supported_io_types": { 00:19:58.388 "read": true, 00:19:58.388 "write": true, 00:19:58.388 "unmap": true, 00:19:58.388 "flush": true, 00:19:58.388 "reset": true, 00:19:58.388 "nvme_admin": false, 00:19:58.388 "nvme_io": false, 00:19:58.388 "nvme_io_md": false, 00:19:58.388 "write_zeroes": true, 00:19:58.388 "zcopy": true, 00:19:58.388 "get_zone_info": false, 00:19:58.388 "zone_management": false, 00:19:58.388 "zone_append": false, 00:19:58.388 "compare": false, 00:19:58.388 "compare_and_write": false, 00:19:58.388 "abort": true, 00:19:58.388 "seek_hole": false, 00:19:58.388 "seek_data": false, 00:19:58.388 "copy": true, 00:19:58.388 "nvme_iov_md": false 00:19:58.388 }, 00:19:58.388 "memory_domains": [ 00:19:58.388 { 00:19:58.388 "dma_device_id": "system", 00:19:58.388 "dma_device_type": 1 00:19:58.388 }, 00:19:58.388 { 00:19:58.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.388 "dma_device_type": 2 00:19:58.388 } 00:19:58.388 ], 00:19:58.388 "driver_specific": {} 00:19:58.388 }' 00:19:58.388 18:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:58.388 18:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:58.388 18:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:58.388 18:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:58.388 18:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:58.646 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:58.646 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:58.646 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:58.646 18:46:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:58.646 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:58.646 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:58.646 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:58.646 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:58.646 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:58.646 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:58.904 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:58.904 "name": "BaseBdev2", 00:19:58.904 "aliases": [ 00:19:58.904 "c17b3654-2e3f-412d-bf17-ee2b1a9a8554" 00:19:58.904 ], 00:19:58.904 "product_name": "Malloc disk", 00:19:58.904 "block_size": 512, 00:19:58.904 "num_blocks": 65536, 00:19:58.904 "uuid": "c17b3654-2e3f-412d-bf17-ee2b1a9a8554", 00:19:58.904 "assigned_rate_limits": { 00:19:58.904 "rw_ios_per_sec": 0, 00:19:58.904 "rw_mbytes_per_sec": 0, 00:19:58.904 "r_mbytes_per_sec": 0, 00:19:58.904 "w_mbytes_per_sec": 0 00:19:58.904 }, 00:19:58.904 "claimed": true, 00:19:58.904 "claim_type": "exclusive_write", 00:19:58.904 "zoned": false, 00:19:58.904 "supported_io_types": { 00:19:58.904 "read": true, 00:19:58.904 "write": true, 00:19:58.904 "unmap": true, 00:19:58.904 "flush": true, 00:19:58.904 "reset": true, 00:19:58.904 "nvme_admin": false, 00:19:58.904 "nvme_io": false, 00:19:58.904 "nvme_io_md": false, 00:19:58.904 "write_zeroes": true, 00:19:58.904 "zcopy": true, 00:19:58.904 "get_zone_info": false, 00:19:58.904 "zone_management": false, 00:19:58.904 "zone_append": false, 00:19:58.904 "compare": false, 00:19:58.904 "compare_and_write": false, 00:19:58.904 "abort": true, 00:19:58.904 "seek_hole": false, 00:19:58.904 "seek_data": false, 00:19:58.904 "copy": true, 00:19:58.904 "nvme_iov_md": false 00:19:58.904 }, 00:19:58.904 "memory_domains": [ 00:19:58.904 { 00:19:58.904 "dma_device_id": "system", 00:19:58.904 "dma_device_type": 1 00:19:58.904 }, 00:19:58.904 { 00:19:58.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.904 "dma_device_type": 2 00:19:58.904 } 00:19:58.904 ], 00:19:58.904 "driver_specific": {} 00:19:58.904 }' 00:19:58.904 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:58.904 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:58.904 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:58.904 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:59.163 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:59.163 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:59.163 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:59.163 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:59.163 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:59.163 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
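The checks traced above and below implement verify_raid_bdev_properties: the raid bdev is dumped once, the configured base bdev names are pulled out with jq, and each base bdev is then re-queried over the same RPC socket so its block_size, md_size, md_interleave and dif_type can be compared against the expected plain 512-byte layout. A minimal sketch of that loop, reconstructed from the xtrace (the rpc shorthand variable and the bare [[ ]] comparisons are illustrative shorthand, not the verbatim bdev_raid.sh source):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  raid_bdev_info=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
  base_bdev_names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
                           | select(.is_configured == true).name' <<<"$raid_bdev_info")
  for name in $base_bdev_names; do
      info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
      [[ $(jq .block_size    <<<"$info") == 512  ]]   # every member exposes 512-byte blocks
      [[ $(jq .md_size       <<<"$info") == null ]]   # no separate metadata region
      [[ $(jq .md_interleave <<<"$info") == null ]]   # hence nothing interleaved
      [[ $(jq .dif_type      <<<"$info") == null ]]   # and no DIF protection type
  done
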
00:19:59.163 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:59.421 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:59.421 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:59.421 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:59.421 18:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:59.680 18:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:59.680 "name": "BaseBdev3", 00:19:59.680 "aliases": [ 00:19:59.680 "74ccb8d8-0817-486b-8193-23e65a715793" 00:19:59.680 ], 00:19:59.680 "product_name": "Malloc disk", 00:19:59.680 "block_size": 512, 00:19:59.680 "num_blocks": 65536, 00:19:59.680 "uuid": "74ccb8d8-0817-486b-8193-23e65a715793", 00:19:59.680 "assigned_rate_limits": { 00:19:59.680 "rw_ios_per_sec": 0, 00:19:59.680 "rw_mbytes_per_sec": 0, 00:19:59.680 "r_mbytes_per_sec": 0, 00:19:59.680 "w_mbytes_per_sec": 0 00:19:59.680 }, 00:19:59.680 "claimed": true, 00:19:59.680 "claim_type": "exclusive_write", 00:19:59.680 "zoned": false, 00:19:59.680 "supported_io_types": { 00:19:59.680 "read": true, 00:19:59.680 "write": true, 00:19:59.680 "unmap": true, 00:19:59.680 "flush": true, 00:19:59.680 "reset": true, 00:19:59.680 "nvme_admin": false, 00:19:59.680 "nvme_io": false, 00:19:59.680 "nvme_io_md": false, 00:19:59.680 "write_zeroes": true, 00:19:59.680 "zcopy": true, 00:19:59.680 "get_zone_info": false, 00:19:59.680 "zone_management": false, 00:19:59.680 "zone_append": false, 00:19:59.680 "compare": false, 00:19:59.680 "compare_and_write": false, 00:19:59.680 "abort": true, 00:19:59.680 "seek_hole": false, 00:19:59.680 "seek_data": false, 00:19:59.680 "copy": true, 00:19:59.680 "nvme_iov_md": false 00:19:59.680 }, 00:19:59.680 "memory_domains": [ 00:19:59.680 { 00:19:59.680 "dma_device_id": "system", 00:19:59.680 "dma_device_type": 1 00:19:59.680 }, 00:19:59.680 { 00:19:59.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:59.680 "dma_device_type": 2 00:19:59.680 } 00:19:59.680 ], 00:19:59.680 "driver_specific": {} 00:19:59.680 }' 00:19:59.680 18:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:59.680 18:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:59.680 18:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:59.680 18:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:59.680 18:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:59.680 18:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:59.680 18:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:59.680 18:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:59.967 18:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:59.967 18:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:59.968 18:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:59.968 18:47:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:59.968 18:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:59.968 [2024-07-25 18:47:00.534335] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:59.968 [2024-07-25 18:47:00.534376] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:59.968 [2024-07-25 18:47:00.534474] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:59.968 [2024-07-25 18:47:00.534544] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:59.968 [2024-07-25 18:47:00.534554] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:20:00.226 18:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 128532 00:20:00.226 18:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 128532 ']' 00:20:00.226 18:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 128532 00:20:00.226 18:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:20:00.226 18:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:00.226 18:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 128532 00:20:00.226 18:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:00.226 18:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:00.226 18:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 128532' 00:20:00.226 killing process with pid 128532 00:20:00.226 18:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 128532 00:20:00.226 [2024-07-25 18:47:00.580057] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:00.226 18:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 128532 00:20:00.484 [2024-07-25 18:47:00.837694] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:01.859 ************************************ 00:20:01.859 END TEST raid_state_function_test_sb 00:20:01.859 ************************************ 00:20:01.859 18:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:20:01.859 00:20:01.859 real 0m28.539s 00:20:01.859 user 0m50.746s 00:20:01.859 sys 0m5.185s 00:20:01.859 18:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:01.859 18:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:01.859 18:47:02 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:20:01.859 18:47:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:01.859 18:47:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:01.859 18:47:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:01.859 ************************************ 00:20:01.859 START TEST raid_superblock_test 00:20:01.859 
************************************ 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=129500 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 129500 /var/tmp/spdk-raid.sock 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 129500 ']' 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:01.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:01.859 18:47:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.859 [2024-07-25 18:47:02.199217] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
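raid_superblock_test, which begins here, drives a fresh bdev_svc target over the /var/tmp/spdk-raid.sock RPC channel: three 32 MiB malloc bdevs are created, each is wrapped in a passthru bdev with a fixed UUID, and the three passthru bdevs are assembled into a concat raid with an on-disk superblock. The commands below are a condensed sketch pulled from the xtrace that follows; the loop shape, the rpc shorthand variable and the backgrounding of bdev_svc are illustrative shorthand rather than the literal bdev_raid.sh code:

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  # waitforlisten <pid> /var/tmp/spdk-raid.sock   (autotest_common.sh helper, as seen in the trace)
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3; do
      $rpc bdev_malloc_create 32 512 -b malloc$i                      # 32 MiB backing bdev, 512-byte blocks
      $rpc bdev_passthru_create -b malloc$i -p pt$i \
          -u 00000000-0000-0000-0000-00000000000$i                    # deterministic UUID per member
  done
  $rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s   # 64 KiB strip, -s writes the superblock
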
00:20:01.859 [2024-07-25 18:47:02.199456] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129500 ] 00:20:01.859 [2024-07-25 18:47:02.387726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.117 [2024-07-25 18:47:02.659098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.376 [2024-07-25 18:47:02.849883] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:02.635 18:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:02.635 18:47:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:20:02.635 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:20:02.635 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:02.635 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:20:02.635 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:20:02.635 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:02.635 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:02.635 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:20:02.635 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:02.635 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:02.894 malloc1 00:20:02.894 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:02.894 [2024-07-25 18:47:03.422048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:02.894 [2024-07-25 18:47:03.422255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.894 [2024-07-25 18:47:03.422322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:20:02.894 [2024-07-25 18:47:03.422420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.894 [2024-07-25 18:47:03.424809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.894 [2024-07-25 18:47:03.424983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:02.894 pt1 00:20:02.894 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:20:02.894 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:02.894 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:20:02.894 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:20:02.894 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:02.894 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:20:02.894 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:20:02.894 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:02.894 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:03.152 malloc2 00:20:03.152 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:03.410 [2024-07-25 18:47:03.913734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:03.410 [2024-07-25 18:47:03.914042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.410 [2024-07-25 18:47:03.914200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:20:03.410 [2024-07-25 18:47:03.914310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.410 [2024-07-25 18:47:03.917097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.410 [2024-07-25 18:47:03.917270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:03.410 pt2 00:20:03.410 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:20:03.410 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:03.410 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:20:03.410 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:20:03.410 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:03.410 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:03.410 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:20:03.410 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:03.410 18:47:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:03.668 malloc3 00:20:03.668 18:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:03.927 [2024-07-25 18:47:04.388489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:03.927 [2024-07-25 18:47:04.388751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.927 [2024-07-25 18:47:04.388849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:03.927 [2024-07-25 18:47:04.389152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.927 [2024-07-25 18:47:04.391874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.927 [2024-07-25 18:47:04.392068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:03.927 pt3 00:20:03.927 
18:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:20:03.927 18:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:03.927 18:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:20:04.187 [2024-07-25 18:47:04.564612] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:04.187 [2024-07-25 18:47:04.567028] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:04.187 [2024-07-25 18:47:04.567252] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:04.187 [2024-07-25 18:47:04.567499] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:20:04.187 [2024-07-25 18:47:04.567604] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:04.187 [2024-07-25 18:47:04.567808] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:20:04.187 [2024-07-25 18:47:04.568382] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:20:04.187 [2024-07-25 18:47:04.568494] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:20:04.187 [2024-07-25 18:47:04.568853] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.187 18:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:04.187 18:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:04.187 18:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:04.187 18:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:04.187 18:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:04.187 18:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:04.187 18:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:04.187 18:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:04.187 18:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:04.187 18:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:04.187 18:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.187 18:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.187 18:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:04.187 "name": "raid_bdev1", 00:20:04.187 "uuid": "0bbd291f-cd43-4ca8-8071-4940120e1dfa", 00:20:04.187 "strip_size_kb": 64, 00:20:04.187 "state": "online", 00:20:04.187 "raid_level": "concat", 00:20:04.187 "superblock": true, 00:20:04.187 "num_base_bdevs": 3, 00:20:04.187 "num_base_bdevs_discovered": 3, 00:20:04.187 "num_base_bdevs_operational": 3, 00:20:04.187 "base_bdevs_list": [ 00:20:04.187 { 00:20:04.187 "name": "pt1", 00:20:04.187 "uuid": "00000000-0000-0000-0000-000000000001", 
00:20:04.187 "is_configured": true, 00:20:04.187 "data_offset": 2048, 00:20:04.187 "data_size": 63488 00:20:04.187 }, 00:20:04.187 { 00:20:04.187 "name": "pt2", 00:20:04.187 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:04.187 "is_configured": true, 00:20:04.187 "data_offset": 2048, 00:20:04.187 "data_size": 63488 00:20:04.187 }, 00:20:04.187 { 00:20:04.187 "name": "pt3", 00:20:04.187 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:04.187 "is_configured": true, 00:20:04.187 "data_offset": 2048, 00:20:04.187 "data_size": 63488 00:20:04.187 } 00:20:04.187 ] 00:20:04.187 }' 00:20:04.187 18:47:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:04.187 18:47:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.755 18:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:20:04.755 18:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:04.755 18:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:04.755 18:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:04.755 18:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:04.755 18:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:04.755 18:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:04.755 18:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:05.014 [2024-07-25 18:47:05.525216] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:05.014 18:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:05.014 "name": "raid_bdev1", 00:20:05.014 "aliases": [ 00:20:05.014 "0bbd291f-cd43-4ca8-8071-4940120e1dfa" 00:20:05.014 ], 00:20:05.014 "product_name": "Raid Volume", 00:20:05.014 "block_size": 512, 00:20:05.014 "num_blocks": 190464, 00:20:05.014 "uuid": "0bbd291f-cd43-4ca8-8071-4940120e1dfa", 00:20:05.014 "assigned_rate_limits": { 00:20:05.014 "rw_ios_per_sec": 0, 00:20:05.014 "rw_mbytes_per_sec": 0, 00:20:05.014 "r_mbytes_per_sec": 0, 00:20:05.014 "w_mbytes_per_sec": 0 00:20:05.014 }, 00:20:05.014 "claimed": false, 00:20:05.014 "zoned": false, 00:20:05.014 "supported_io_types": { 00:20:05.014 "read": true, 00:20:05.014 "write": true, 00:20:05.014 "unmap": true, 00:20:05.014 "flush": true, 00:20:05.014 "reset": true, 00:20:05.014 "nvme_admin": false, 00:20:05.014 "nvme_io": false, 00:20:05.014 "nvme_io_md": false, 00:20:05.014 "write_zeroes": true, 00:20:05.014 "zcopy": false, 00:20:05.014 "get_zone_info": false, 00:20:05.014 "zone_management": false, 00:20:05.014 "zone_append": false, 00:20:05.014 "compare": false, 00:20:05.014 "compare_and_write": false, 00:20:05.014 "abort": false, 00:20:05.014 "seek_hole": false, 00:20:05.014 "seek_data": false, 00:20:05.014 "copy": false, 00:20:05.014 "nvme_iov_md": false 00:20:05.014 }, 00:20:05.014 "memory_domains": [ 00:20:05.014 { 00:20:05.014 "dma_device_id": "system", 00:20:05.014 "dma_device_type": 1 00:20:05.014 }, 00:20:05.014 { 00:20:05.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.014 "dma_device_type": 2 00:20:05.014 }, 00:20:05.014 { 00:20:05.014 "dma_device_id": "system", 00:20:05.014 "dma_device_type": 1 00:20:05.014 }, 
00:20:05.014 { 00:20:05.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.014 "dma_device_type": 2 00:20:05.014 }, 00:20:05.014 { 00:20:05.014 "dma_device_id": "system", 00:20:05.014 "dma_device_type": 1 00:20:05.014 }, 00:20:05.014 { 00:20:05.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.014 "dma_device_type": 2 00:20:05.014 } 00:20:05.014 ], 00:20:05.014 "driver_specific": { 00:20:05.014 "raid": { 00:20:05.014 "uuid": "0bbd291f-cd43-4ca8-8071-4940120e1dfa", 00:20:05.014 "strip_size_kb": 64, 00:20:05.014 "state": "online", 00:20:05.014 "raid_level": "concat", 00:20:05.014 "superblock": true, 00:20:05.014 "num_base_bdevs": 3, 00:20:05.014 "num_base_bdevs_discovered": 3, 00:20:05.015 "num_base_bdevs_operational": 3, 00:20:05.015 "base_bdevs_list": [ 00:20:05.015 { 00:20:05.015 "name": "pt1", 00:20:05.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:05.015 "is_configured": true, 00:20:05.015 "data_offset": 2048, 00:20:05.015 "data_size": 63488 00:20:05.015 }, 00:20:05.015 { 00:20:05.015 "name": "pt2", 00:20:05.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:05.015 "is_configured": true, 00:20:05.015 "data_offset": 2048, 00:20:05.015 "data_size": 63488 00:20:05.015 }, 00:20:05.015 { 00:20:05.015 "name": "pt3", 00:20:05.015 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:05.015 "is_configured": true, 00:20:05.015 "data_offset": 2048, 00:20:05.015 "data_size": 63488 00:20:05.015 } 00:20:05.015 ] 00:20:05.015 } 00:20:05.015 } 00:20:05.015 }' 00:20:05.015 18:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:05.015 18:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:05.015 pt2 00:20:05.015 pt3' 00:20:05.015 18:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:05.015 18:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:05.015 18:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:05.274 18:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:05.274 "name": "pt1", 00:20:05.274 "aliases": [ 00:20:05.274 "00000000-0000-0000-0000-000000000001" 00:20:05.274 ], 00:20:05.274 "product_name": "passthru", 00:20:05.274 "block_size": 512, 00:20:05.274 "num_blocks": 65536, 00:20:05.274 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:05.274 "assigned_rate_limits": { 00:20:05.274 "rw_ios_per_sec": 0, 00:20:05.274 "rw_mbytes_per_sec": 0, 00:20:05.274 "r_mbytes_per_sec": 0, 00:20:05.274 "w_mbytes_per_sec": 0 00:20:05.274 }, 00:20:05.274 "claimed": true, 00:20:05.274 "claim_type": "exclusive_write", 00:20:05.274 "zoned": false, 00:20:05.274 "supported_io_types": { 00:20:05.274 "read": true, 00:20:05.274 "write": true, 00:20:05.274 "unmap": true, 00:20:05.274 "flush": true, 00:20:05.274 "reset": true, 00:20:05.274 "nvme_admin": false, 00:20:05.274 "nvme_io": false, 00:20:05.274 "nvme_io_md": false, 00:20:05.274 "write_zeroes": true, 00:20:05.274 "zcopy": true, 00:20:05.274 "get_zone_info": false, 00:20:05.274 "zone_management": false, 00:20:05.274 "zone_append": false, 00:20:05.274 "compare": false, 00:20:05.274 "compare_and_write": false, 00:20:05.274 "abort": true, 00:20:05.274 "seek_hole": false, 00:20:05.274 "seek_data": false, 00:20:05.274 "copy": true, 00:20:05.274 "nvme_iov_md": 
false 00:20:05.274 }, 00:20:05.274 "memory_domains": [ 00:20:05.274 { 00:20:05.274 "dma_device_id": "system", 00:20:05.274 "dma_device_type": 1 00:20:05.274 }, 00:20:05.274 { 00:20:05.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.274 "dma_device_type": 2 00:20:05.274 } 00:20:05.274 ], 00:20:05.274 "driver_specific": { 00:20:05.274 "passthru": { 00:20:05.274 "name": "pt1", 00:20:05.274 "base_bdev_name": "malloc1" 00:20:05.274 } 00:20:05.274 } 00:20:05.274 }' 00:20:05.274 18:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:05.533 18:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:05.533 18:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:05.533 18:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:05.533 18:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:05.533 18:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:05.533 18:47:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:05.533 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:05.533 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:05.533 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:05.794 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:05.794 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:05.794 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:05.794 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:05.794 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:05.794 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:05.794 "name": "pt2", 00:20:05.794 "aliases": [ 00:20:05.794 "00000000-0000-0000-0000-000000000002" 00:20:05.794 ], 00:20:05.794 "product_name": "passthru", 00:20:05.794 "block_size": 512, 00:20:05.794 "num_blocks": 65536, 00:20:05.794 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:05.794 "assigned_rate_limits": { 00:20:05.794 "rw_ios_per_sec": 0, 00:20:05.794 "rw_mbytes_per_sec": 0, 00:20:05.794 "r_mbytes_per_sec": 0, 00:20:05.794 "w_mbytes_per_sec": 0 00:20:05.794 }, 00:20:05.794 "claimed": true, 00:20:05.794 "claim_type": "exclusive_write", 00:20:05.794 "zoned": false, 00:20:05.794 "supported_io_types": { 00:20:05.794 "read": true, 00:20:05.794 "write": true, 00:20:05.794 "unmap": true, 00:20:05.794 "flush": true, 00:20:05.794 "reset": true, 00:20:05.794 "nvme_admin": false, 00:20:05.794 "nvme_io": false, 00:20:05.794 "nvme_io_md": false, 00:20:05.794 "write_zeroes": true, 00:20:05.794 "zcopy": true, 00:20:05.794 "get_zone_info": false, 00:20:05.794 "zone_management": false, 00:20:05.794 "zone_append": false, 00:20:05.794 "compare": false, 00:20:05.794 "compare_and_write": false, 00:20:05.794 "abort": true, 00:20:05.794 "seek_hole": false, 00:20:05.794 "seek_data": false, 00:20:05.794 "copy": true, 00:20:05.794 "nvme_iov_md": false 00:20:05.794 }, 00:20:05.794 "memory_domains": [ 00:20:05.794 { 00:20:05.794 "dma_device_id": "system", 00:20:05.794 "dma_device_type": 1 
00:20:05.794 }, 00:20:05.794 { 00:20:05.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.794 "dma_device_type": 2 00:20:05.794 } 00:20:05.794 ], 00:20:05.794 "driver_specific": { 00:20:05.794 "passthru": { 00:20:05.794 "name": "pt2", 00:20:05.794 "base_bdev_name": "malloc2" 00:20:05.794 } 00:20:05.794 } 00:20:05.794 }' 00:20:05.794 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:06.053 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:06.053 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:06.053 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:06.053 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:06.053 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:06.053 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:06.053 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:06.053 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:06.053 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:06.312 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:06.312 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:06.312 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:06.312 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:06.312 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:06.570 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:06.570 "name": "pt3", 00:20:06.570 "aliases": [ 00:20:06.570 "00000000-0000-0000-0000-000000000003" 00:20:06.570 ], 00:20:06.571 "product_name": "passthru", 00:20:06.571 "block_size": 512, 00:20:06.571 "num_blocks": 65536, 00:20:06.571 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:06.571 "assigned_rate_limits": { 00:20:06.571 "rw_ios_per_sec": 0, 00:20:06.571 "rw_mbytes_per_sec": 0, 00:20:06.571 "r_mbytes_per_sec": 0, 00:20:06.571 "w_mbytes_per_sec": 0 00:20:06.571 }, 00:20:06.571 "claimed": true, 00:20:06.571 "claim_type": "exclusive_write", 00:20:06.571 "zoned": false, 00:20:06.571 "supported_io_types": { 00:20:06.571 "read": true, 00:20:06.571 "write": true, 00:20:06.571 "unmap": true, 00:20:06.571 "flush": true, 00:20:06.571 "reset": true, 00:20:06.571 "nvme_admin": false, 00:20:06.571 "nvme_io": false, 00:20:06.571 "nvme_io_md": false, 00:20:06.571 "write_zeroes": true, 00:20:06.571 "zcopy": true, 00:20:06.571 "get_zone_info": false, 00:20:06.571 "zone_management": false, 00:20:06.571 "zone_append": false, 00:20:06.571 "compare": false, 00:20:06.571 "compare_and_write": false, 00:20:06.571 "abort": true, 00:20:06.571 "seek_hole": false, 00:20:06.571 "seek_data": false, 00:20:06.571 "copy": true, 00:20:06.571 "nvme_iov_md": false 00:20:06.571 }, 00:20:06.571 "memory_domains": [ 00:20:06.571 { 00:20:06.571 "dma_device_id": "system", 00:20:06.571 "dma_device_type": 1 00:20:06.571 }, 00:20:06.571 { 00:20:06.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.571 "dma_device_type": 2 00:20:06.571 } 00:20:06.571 ], 
00:20:06.571 "driver_specific": { 00:20:06.571 "passthru": { 00:20:06.571 "name": "pt3", 00:20:06.571 "base_bdev_name": "malloc3" 00:20:06.571 } 00:20:06.571 } 00:20:06.571 }' 00:20:06.571 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:06.571 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:06.571 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:06.571 18:47:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:06.571 18:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:06.571 18:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:06.571 18:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:06.571 18:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:06.829 18:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:06.829 18:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:06.829 18:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:06.829 18:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:06.829 18:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:06.829 18:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:20:07.087 [2024-07-25 18:47:07.405508] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:07.087 18:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=0bbd291f-cd43-4ca8-8071-4940120e1dfa 00:20:07.087 18:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 0bbd291f-cd43-4ca8-8071-4940120e1dfa ']' 00:20:07.087 18:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:07.087 [2024-07-25 18:47:07.577270] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:07.087 [2024-07-25 18:47:07.577310] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:07.087 [2024-07-25 18:47:07.577432] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:07.087 [2024-07-25 18:47:07.577523] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:07.087 [2024-07-25 18:47:07.577535] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:20:07.087 18:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:20:07.087 18:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.346 18:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:20:07.346 18:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:20:07.346 18:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:20:07.346 18:47:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:07.604 18:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:20:07.604 18:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:07.862 18:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:20:07.862 18:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:07.862 18:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:07.862 18:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:08.121 18:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:20:08.121 18:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:08.121 18:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:20:08.121 18:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:08.121 18:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:08.121 18:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.121 18:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:08.121 18:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.121 18:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:08.121 18:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.121 18:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:08.121 18:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:08.121 18:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:08.380 [2024-07-25 18:47:08.846364] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:08.380 [2024-07-25 18:47:08.848778] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:08.380 [2024-07-25 18:47:08.848859] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:08.380 [2024-07-25 18:47:08.848919] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:08.380 
[2024-07-25 18:47:08.849018] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:08.380 [2024-07-25 18:47:08.849053] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:08.380 [2024-07-25 18:47:08.849084] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:08.380 [2024-07-25 18:47:08.849094] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state configuring 00:20:08.380 request: 00:20:08.380 { 00:20:08.380 "name": "raid_bdev1", 00:20:08.380 "raid_level": "concat", 00:20:08.380 "base_bdevs": [ 00:20:08.380 "malloc1", 00:20:08.380 "malloc2", 00:20:08.380 "malloc3" 00:20:08.380 ], 00:20:08.380 "strip_size_kb": 64, 00:20:08.380 "superblock": false, 00:20:08.380 "method": "bdev_raid_create", 00:20:08.380 "req_id": 1 00:20:08.380 } 00:20:08.380 Got JSON-RPC error response 00:20:08.380 response: 00:20:08.380 { 00:20:08.380 "code": -17, 00:20:08.380 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:08.380 } 00:20:08.380 18:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:20:08.380 18:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:08.380 18:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:08.380 18:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:08.380 18:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.380 18:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:20:08.639 18:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:20:08.639 18:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:20:08.639 18:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:08.639 [2024-07-25 18:47:09.206288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:08.639 [2024-07-25 18:47:09.206399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.639 [2024-07-25 18:47:09.206440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:08.639 [2024-07-25 18:47:09.206464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.639 [2024-07-25 18:47:09.209186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.639 [2024-07-25 18:47:09.209238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:08.639 [2024-07-25 18:47:09.209376] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:08.639 [2024-07-25 18:47:09.209420] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:08.639 pt1 00:20:08.898 18:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:20:08.898 18:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:08.898 18:47:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:08.898 18:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:08.898 18:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:08.898 18:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:08.898 18:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:08.898 18:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:08.898 18:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:08.898 18:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:08.898 18:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.898 18:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.898 18:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:08.898 "name": "raid_bdev1", 00:20:08.898 "uuid": "0bbd291f-cd43-4ca8-8071-4940120e1dfa", 00:20:08.898 "strip_size_kb": 64, 00:20:08.898 "state": "configuring", 00:20:08.898 "raid_level": "concat", 00:20:08.898 "superblock": true, 00:20:08.898 "num_base_bdevs": 3, 00:20:08.898 "num_base_bdevs_discovered": 1, 00:20:08.898 "num_base_bdevs_operational": 3, 00:20:08.898 "base_bdevs_list": [ 00:20:08.898 { 00:20:08.898 "name": "pt1", 00:20:08.898 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:08.898 "is_configured": true, 00:20:08.898 "data_offset": 2048, 00:20:08.898 "data_size": 63488 00:20:08.898 }, 00:20:08.898 { 00:20:08.898 "name": null, 00:20:08.898 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:08.898 "is_configured": false, 00:20:08.898 "data_offset": 2048, 00:20:08.898 "data_size": 63488 00:20:08.898 }, 00:20:08.898 { 00:20:08.898 "name": null, 00:20:08.898 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:08.898 "is_configured": false, 00:20:08.898 "data_offset": 2048, 00:20:08.898 "data_size": 63488 00:20:08.898 } 00:20:08.898 ] 00:20:08.898 }' 00:20:08.898 18:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:08.898 18:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.465 18:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:20:09.465 18:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:09.724 [2024-07-25 18:47:10.262495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:09.724 [2024-07-25 18:47:10.262605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.724 [2024-07-25 18:47:10.262649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:09.724 [2024-07-25 18:47:10.262672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.724 [2024-07-25 18:47:10.263231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.724 [2024-07-25 18:47:10.263276] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:20:09.724 [2024-07-25 18:47:10.263401] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:09.724 [2024-07-25 18:47:10.263431] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:09.724 pt2 00:20:09.724 18:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:09.982 [2024-07-25 18:47:10.446536] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:09.982 18:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:20:09.982 18:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:09.982 18:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:09.982 18:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:09.982 18:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:09.982 18:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:09.982 18:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:09.982 18:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:09.982 18:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:09.983 18:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:09.983 18:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.983 18:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.241 18:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:10.241 "name": "raid_bdev1", 00:20:10.241 "uuid": "0bbd291f-cd43-4ca8-8071-4940120e1dfa", 00:20:10.241 "strip_size_kb": 64, 00:20:10.241 "state": "configuring", 00:20:10.241 "raid_level": "concat", 00:20:10.241 "superblock": true, 00:20:10.241 "num_base_bdevs": 3, 00:20:10.241 "num_base_bdevs_discovered": 1, 00:20:10.241 "num_base_bdevs_operational": 3, 00:20:10.241 "base_bdevs_list": [ 00:20:10.241 { 00:20:10.241 "name": "pt1", 00:20:10.241 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:10.241 "is_configured": true, 00:20:10.241 "data_offset": 2048, 00:20:10.241 "data_size": 63488 00:20:10.241 }, 00:20:10.241 { 00:20:10.241 "name": null, 00:20:10.241 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:10.241 "is_configured": false, 00:20:10.241 "data_offset": 2048, 00:20:10.241 "data_size": 63488 00:20:10.241 }, 00:20:10.241 { 00:20:10.241 "name": null, 00:20:10.241 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:10.241 "is_configured": false, 00:20:10.241 "data_offset": 2048, 00:20:10.241 "data_size": 63488 00:20:10.241 } 00:20:10.241 ] 00:20:10.241 }' 00:20:10.241 18:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:10.241 18:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.812 18:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:20:10.812 18:47:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:20:10.812 18:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:11.069 [2024-07-25 18:47:11.450678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:11.069 [2024-07-25 18:47:11.450784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:11.069 [2024-07-25 18:47:11.450825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:11.069 [2024-07-25 18:47:11.450855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:11.069 [2024-07-25 18:47:11.451397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:11.069 [2024-07-25 18:47:11.451440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:11.069 [2024-07-25 18:47:11.451552] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:11.069 [2024-07-25 18:47:11.451581] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:11.069 pt2 00:20:11.069 18:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:20:11.069 18:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:20:11.069 18:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:11.326 [2024-07-25 18:47:11.722738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:11.326 [2024-07-25 18:47:11.722819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:11.326 [2024-07-25 18:47:11.722869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:11.326 [2024-07-25 18:47:11.722896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:11.326 [2024-07-25 18:47:11.723444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:11.326 [2024-07-25 18:47:11.723480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:11.326 [2024-07-25 18:47:11.723584] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:11.326 [2024-07-25 18:47:11.723605] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:11.326 [2024-07-25 18:47:11.723734] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:20:11.326 [2024-07-25 18:47:11.723750] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:11.326 [2024-07-25 18:47:11.723837] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:11.326 [2024-07-25 18:47:11.724165] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:20:11.326 [2024-07-25 18:47:11.724182] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:20:11.326 [2024-07-25 18:47:11.724324] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.326 pt3 00:20:11.326 18:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( 
i++ )) 00:20:11.326 18:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:20:11.326 18:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:11.326 18:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:11.326 18:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:11.326 18:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:11.326 18:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:11.326 18:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:11.326 18:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:11.326 18:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:11.326 18:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:11.326 18:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:11.326 18:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.326 18:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.584 18:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:11.584 "name": "raid_bdev1", 00:20:11.584 "uuid": "0bbd291f-cd43-4ca8-8071-4940120e1dfa", 00:20:11.585 "strip_size_kb": 64, 00:20:11.585 "state": "online", 00:20:11.585 "raid_level": "concat", 00:20:11.585 "superblock": true, 00:20:11.585 "num_base_bdevs": 3, 00:20:11.585 "num_base_bdevs_discovered": 3, 00:20:11.585 "num_base_bdevs_operational": 3, 00:20:11.585 "base_bdevs_list": [ 00:20:11.585 { 00:20:11.585 "name": "pt1", 00:20:11.585 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:11.585 "is_configured": true, 00:20:11.585 "data_offset": 2048, 00:20:11.585 "data_size": 63488 00:20:11.585 }, 00:20:11.585 { 00:20:11.585 "name": "pt2", 00:20:11.585 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:11.585 "is_configured": true, 00:20:11.585 "data_offset": 2048, 00:20:11.585 "data_size": 63488 00:20:11.585 }, 00:20:11.585 { 00:20:11.585 "name": "pt3", 00:20:11.585 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:11.585 "is_configured": true, 00:20:11.585 "data_offset": 2048, 00:20:11.585 "data_size": 63488 00:20:11.585 } 00:20:11.585 ] 00:20:11.585 }' 00:20:11.585 18:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:11.585 18:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.151 18:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:20:12.151 18:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:12.151 18:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:12.151 18:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:12.151 18:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:12.151 18:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 
00:20:12.151 18:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:12.151 18:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:12.409 [2024-07-25 18:47:12.823156] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:12.409 18:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:12.409 "name": "raid_bdev1", 00:20:12.409 "aliases": [ 00:20:12.409 "0bbd291f-cd43-4ca8-8071-4940120e1dfa" 00:20:12.409 ], 00:20:12.409 "product_name": "Raid Volume", 00:20:12.409 "block_size": 512, 00:20:12.409 "num_blocks": 190464, 00:20:12.409 "uuid": "0bbd291f-cd43-4ca8-8071-4940120e1dfa", 00:20:12.409 "assigned_rate_limits": { 00:20:12.409 "rw_ios_per_sec": 0, 00:20:12.409 "rw_mbytes_per_sec": 0, 00:20:12.409 "r_mbytes_per_sec": 0, 00:20:12.409 "w_mbytes_per_sec": 0 00:20:12.409 }, 00:20:12.409 "claimed": false, 00:20:12.409 "zoned": false, 00:20:12.409 "supported_io_types": { 00:20:12.409 "read": true, 00:20:12.409 "write": true, 00:20:12.409 "unmap": true, 00:20:12.409 "flush": true, 00:20:12.409 "reset": true, 00:20:12.409 "nvme_admin": false, 00:20:12.409 "nvme_io": false, 00:20:12.409 "nvme_io_md": false, 00:20:12.409 "write_zeroes": true, 00:20:12.409 "zcopy": false, 00:20:12.409 "get_zone_info": false, 00:20:12.409 "zone_management": false, 00:20:12.409 "zone_append": false, 00:20:12.409 "compare": false, 00:20:12.409 "compare_and_write": false, 00:20:12.409 "abort": false, 00:20:12.409 "seek_hole": false, 00:20:12.409 "seek_data": false, 00:20:12.409 "copy": false, 00:20:12.409 "nvme_iov_md": false 00:20:12.409 }, 00:20:12.409 "memory_domains": [ 00:20:12.409 { 00:20:12.409 "dma_device_id": "system", 00:20:12.409 "dma_device_type": 1 00:20:12.409 }, 00:20:12.409 { 00:20:12.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.409 "dma_device_type": 2 00:20:12.409 }, 00:20:12.409 { 00:20:12.409 "dma_device_id": "system", 00:20:12.409 "dma_device_type": 1 00:20:12.409 }, 00:20:12.409 { 00:20:12.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.409 "dma_device_type": 2 00:20:12.409 }, 00:20:12.409 { 00:20:12.409 "dma_device_id": "system", 00:20:12.409 "dma_device_type": 1 00:20:12.409 }, 00:20:12.409 { 00:20:12.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.409 "dma_device_type": 2 00:20:12.409 } 00:20:12.409 ], 00:20:12.409 "driver_specific": { 00:20:12.409 "raid": { 00:20:12.409 "uuid": "0bbd291f-cd43-4ca8-8071-4940120e1dfa", 00:20:12.409 "strip_size_kb": 64, 00:20:12.409 "state": "online", 00:20:12.409 "raid_level": "concat", 00:20:12.409 "superblock": true, 00:20:12.409 "num_base_bdevs": 3, 00:20:12.409 "num_base_bdevs_discovered": 3, 00:20:12.409 "num_base_bdevs_operational": 3, 00:20:12.409 "base_bdevs_list": [ 00:20:12.409 { 00:20:12.409 "name": "pt1", 00:20:12.409 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:12.409 "is_configured": true, 00:20:12.409 "data_offset": 2048, 00:20:12.409 "data_size": 63488 00:20:12.409 }, 00:20:12.409 { 00:20:12.409 "name": "pt2", 00:20:12.409 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:12.409 "is_configured": true, 00:20:12.409 "data_offset": 2048, 00:20:12.409 "data_size": 63488 00:20:12.409 }, 00:20:12.409 { 00:20:12.409 "name": "pt3", 00:20:12.409 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:12.409 "is_configured": true, 00:20:12.409 "data_offset": 2048, 00:20:12.409 "data_size": 63488 00:20:12.409 } 
00:20:12.409 ] 00:20:12.409 } 00:20:12.409 } 00:20:12.409 }' 00:20:12.409 18:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:12.409 18:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:12.409 pt2 00:20:12.410 pt3' 00:20:12.410 18:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:12.410 18:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:12.410 18:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:12.667 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:12.667 "name": "pt1", 00:20:12.667 "aliases": [ 00:20:12.667 "00000000-0000-0000-0000-000000000001" 00:20:12.667 ], 00:20:12.667 "product_name": "passthru", 00:20:12.667 "block_size": 512, 00:20:12.667 "num_blocks": 65536, 00:20:12.667 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:12.667 "assigned_rate_limits": { 00:20:12.667 "rw_ios_per_sec": 0, 00:20:12.667 "rw_mbytes_per_sec": 0, 00:20:12.667 "r_mbytes_per_sec": 0, 00:20:12.667 "w_mbytes_per_sec": 0 00:20:12.667 }, 00:20:12.667 "claimed": true, 00:20:12.667 "claim_type": "exclusive_write", 00:20:12.667 "zoned": false, 00:20:12.667 "supported_io_types": { 00:20:12.667 "read": true, 00:20:12.667 "write": true, 00:20:12.667 "unmap": true, 00:20:12.667 "flush": true, 00:20:12.667 "reset": true, 00:20:12.667 "nvme_admin": false, 00:20:12.667 "nvme_io": false, 00:20:12.667 "nvme_io_md": false, 00:20:12.667 "write_zeroes": true, 00:20:12.667 "zcopy": true, 00:20:12.667 "get_zone_info": false, 00:20:12.667 "zone_management": false, 00:20:12.667 "zone_append": false, 00:20:12.667 "compare": false, 00:20:12.667 "compare_and_write": false, 00:20:12.667 "abort": true, 00:20:12.667 "seek_hole": false, 00:20:12.667 "seek_data": false, 00:20:12.667 "copy": true, 00:20:12.667 "nvme_iov_md": false 00:20:12.667 }, 00:20:12.667 "memory_domains": [ 00:20:12.667 { 00:20:12.667 "dma_device_id": "system", 00:20:12.667 "dma_device_type": 1 00:20:12.667 }, 00:20:12.667 { 00:20:12.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.668 "dma_device_type": 2 00:20:12.668 } 00:20:12.668 ], 00:20:12.668 "driver_specific": { 00:20:12.668 "passthru": { 00:20:12.668 "name": "pt1", 00:20:12.668 "base_bdev_name": "malloc1" 00:20:12.668 } 00:20:12.668 } 00:20:12.668 }' 00:20:12.668 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:12.668 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:12.668 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:12.668 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:12.925 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:12.925 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:12.925 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:12.925 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:12.925 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:12.925 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:20:12.925 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:12.925 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:12.925 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:12.925 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:12.925 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:13.183 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:13.183 "name": "pt2", 00:20:13.183 "aliases": [ 00:20:13.183 "00000000-0000-0000-0000-000000000002" 00:20:13.183 ], 00:20:13.183 "product_name": "passthru", 00:20:13.183 "block_size": 512, 00:20:13.183 "num_blocks": 65536, 00:20:13.183 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:13.183 "assigned_rate_limits": { 00:20:13.183 "rw_ios_per_sec": 0, 00:20:13.183 "rw_mbytes_per_sec": 0, 00:20:13.183 "r_mbytes_per_sec": 0, 00:20:13.183 "w_mbytes_per_sec": 0 00:20:13.183 }, 00:20:13.183 "claimed": true, 00:20:13.183 "claim_type": "exclusive_write", 00:20:13.183 "zoned": false, 00:20:13.183 "supported_io_types": { 00:20:13.183 "read": true, 00:20:13.183 "write": true, 00:20:13.183 "unmap": true, 00:20:13.183 "flush": true, 00:20:13.183 "reset": true, 00:20:13.183 "nvme_admin": false, 00:20:13.183 "nvme_io": false, 00:20:13.183 "nvme_io_md": false, 00:20:13.183 "write_zeroes": true, 00:20:13.183 "zcopy": true, 00:20:13.183 "get_zone_info": false, 00:20:13.183 "zone_management": false, 00:20:13.183 "zone_append": false, 00:20:13.183 "compare": false, 00:20:13.183 "compare_and_write": false, 00:20:13.183 "abort": true, 00:20:13.183 "seek_hole": false, 00:20:13.183 "seek_data": false, 00:20:13.183 "copy": true, 00:20:13.183 "nvme_iov_md": false 00:20:13.183 }, 00:20:13.183 "memory_domains": [ 00:20:13.183 { 00:20:13.183 "dma_device_id": "system", 00:20:13.183 "dma_device_type": 1 00:20:13.183 }, 00:20:13.183 { 00:20:13.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.183 "dma_device_type": 2 00:20:13.183 } 00:20:13.183 ], 00:20:13.183 "driver_specific": { 00:20:13.183 "passthru": { 00:20:13.183 "name": "pt2", 00:20:13.183 "base_bdev_name": "malloc2" 00:20:13.183 } 00:20:13.183 } 00:20:13.183 }' 00:20:13.183 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:13.441 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:13.441 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:13.441 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:13.441 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:13.441 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:13.441 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:13.441 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:13.441 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:13.441 18:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:13.700 18:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:13.700 18:47:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:13.700 18:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:13.700 18:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:13.700 18:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:13.956 18:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:13.956 "name": "pt3", 00:20:13.956 "aliases": [ 00:20:13.956 "00000000-0000-0000-0000-000000000003" 00:20:13.956 ], 00:20:13.956 "product_name": "passthru", 00:20:13.956 "block_size": 512, 00:20:13.956 "num_blocks": 65536, 00:20:13.956 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:13.956 "assigned_rate_limits": { 00:20:13.956 "rw_ios_per_sec": 0, 00:20:13.956 "rw_mbytes_per_sec": 0, 00:20:13.956 "r_mbytes_per_sec": 0, 00:20:13.956 "w_mbytes_per_sec": 0 00:20:13.956 }, 00:20:13.956 "claimed": true, 00:20:13.956 "claim_type": "exclusive_write", 00:20:13.956 "zoned": false, 00:20:13.956 "supported_io_types": { 00:20:13.956 "read": true, 00:20:13.956 "write": true, 00:20:13.956 "unmap": true, 00:20:13.956 "flush": true, 00:20:13.956 "reset": true, 00:20:13.956 "nvme_admin": false, 00:20:13.956 "nvme_io": false, 00:20:13.956 "nvme_io_md": false, 00:20:13.956 "write_zeroes": true, 00:20:13.956 "zcopy": true, 00:20:13.956 "get_zone_info": false, 00:20:13.956 "zone_management": false, 00:20:13.956 "zone_append": false, 00:20:13.956 "compare": false, 00:20:13.956 "compare_and_write": false, 00:20:13.956 "abort": true, 00:20:13.956 "seek_hole": false, 00:20:13.956 "seek_data": false, 00:20:13.956 "copy": true, 00:20:13.956 "nvme_iov_md": false 00:20:13.956 }, 00:20:13.956 "memory_domains": [ 00:20:13.956 { 00:20:13.956 "dma_device_id": "system", 00:20:13.956 "dma_device_type": 1 00:20:13.956 }, 00:20:13.956 { 00:20:13.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.956 "dma_device_type": 2 00:20:13.956 } 00:20:13.956 ], 00:20:13.956 "driver_specific": { 00:20:13.956 "passthru": { 00:20:13.956 "name": "pt3", 00:20:13.956 "base_bdev_name": "malloc3" 00:20:13.956 } 00:20:13.956 } 00:20:13.956 }' 00:20:13.956 18:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:13.956 18:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:13.956 18:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:13.956 18:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:13.956 18:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:13.956 18:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:13.956 18:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:14.214 18:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:14.214 18:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:14.214 18:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:14.214 18:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:14.214 18:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:14.214 18:47:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:14.214 18:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:20:14.472 [2024-07-25 18:47:14.943478] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:14.472 18:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 0bbd291f-cd43-4ca8-8071-4940120e1dfa '!=' 0bbd291f-cd43-4ca8-8071-4940120e1dfa ']' 00:20:14.472 18:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:20:14.472 18:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:14.472 18:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:14.472 18:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 129500 00:20:14.472 18:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 129500 ']' 00:20:14.472 18:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 129500 00:20:14.472 18:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:20:14.472 18:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:14.472 18:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 129500 00:20:14.472 18:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:14.472 18:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:14.472 18:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 129500' 00:20:14.472 killing process with pid 129500 00:20:14.472 18:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 129500 00:20:14.472 [2024-07-25 18:47:15.007422] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:14.472 18:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 129500 00:20:14.472 [2024-07-25 18:47:15.007686] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:14.472 [2024-07-25 18:47:15.007910] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:14.472 [2024-07-25 18:47:15.008001] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:20:14.730 [2024-07-25 18:47:15.262886] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:16.102 ************************************ 00:20:16.102 END TEST raid_superblock_test 00:20:16.102 ************************************ 00:20:16.102 18:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:20:16.102 00:20:16.102 real 0m14.336s 00:20:16.102 user 0m24.562s 00:20:16.102 sys 0m2.506s 00:20:16.102 18:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:16.102 18:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.102 18:47:16 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:20:16.102 18:47:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:16.102 18:47:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:16.102 
18:47:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:16.102 ************************************ 00:20:16.102 START TEST raid_read_error_test 00:20:16.102 ************************************ 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev1 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev2 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev3 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.zFrLE3dsGx 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=130213 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 130213 /var/tmp/spdk-raid.sock 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@831 -- # '[' -z 130213 ']' 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:16.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:16.102 18:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.102 [2024-07-25 18:47:16.635662] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:16.102 [2024-07-25 18:47:16.636124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130213 ] 00:20:16.359 [2024-07-25 18:47:16.826290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.617 [2024-07-25 18:47:17.135160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.875 [2024-07-25 18:47:17.412105] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:17.133 18:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:17.133 18:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:20:17.133 18:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:17.133 18:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:17.391 BaseBdev1_malloc 00:20:17.391 18:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:17.649 true 00:20:17.649 18:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:17.649 [2024-07-25 18:47:18.213698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:17.649 [2024-07-25 18:47:18.213974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.649 [2024-07-25 18:47:18.214054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:20:17.649 [2024-07-25 18:47:18.214157] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.649 [2024-07-25 18:47:18.216870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.649 [2024-07-25 18:47:18.217033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:17.649 BaseBdev1 00:20:17.907 18:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:17.907 18:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:18.164 BaseBdev2_malloc 00:20:18.164 18:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:18.164 true 00:20:18.422 18:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:18.422 [2024-07-25 18:47:18.905184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:18.422 [2024-07-25 18:47:18.905504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.422 [2024-07-25 18:47:18.905584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:18.422 [2024-07-25 18:47:18.905865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.422 [2024-07-25 18:47:18.908533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.422 [2024-07-25 18:47:18.908700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:18.422 BaseBdev2 00:20:18.422 18:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:18.422 18:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:18.680 BaseBdev3_malloc 00:20:18.680 18:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:18.938 true 00:20:18.938 18:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:18.938 [2024-07-25 18:47:19.500853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:18.938 [2024-07-25 18:47:19.501073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.938 [2024-07-25 18:47:19.501146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:20:18.938 [2024-07-25 18:47:19.501236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.938 [2024-07-25 18:47:19.503899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.938 [2024-07-25 18:47:19.504069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:18.938 BaseBdev3 00:20:19.196 18:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:20:19.196 [2024-07-25 18:47:19.753029] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:19.196 [2024-07-25 18:47:19.755486] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:19.196 [2024-07-25 18:47:19.755707] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:19.196 [2024-07-25 18:47:19.755944] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:20:19.196 [2024-07-25 
18:47:19.756050] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:19.196 [2024-07-25 18:47:19.756240] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:20:19.196 [2024-07-25 18:47:19.756694] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:20:19.196 [2024-07-25 18:47:19.756798] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013480 00:20:19.196 [2024-07-25 18:47:19.757077] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.454 18:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:19.454 18:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:19.454 18:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:19.454 18:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:19.454 18:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:19.454 18:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:19.454 18:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:19.454 18:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:19.454 18:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:19.454 18:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:19.454 18:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.454 18:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.454 18:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:19.454 "name": "raid_bdev1", 00:20:19.454 "uuid": "a77b4d8e-2cd1-4602-96a5-77ead3fe550d", 00:20:19.454 "strip_size_kb": 64, 00:20:19.454 "state": "online", 00:20:19.454 "raid_level": "concat", 00:20:19.454 "superblock": true, 00:20:19.454 "num_base_bdevs": 3, 00:20:19.454 "num_base_bdevs_discovered": 3, 00:20:19.454 "num_base_bdevs_operational": 3, 00:20:19.454 "base_bdevs_list": [ 00:20:19.454 { 00:20:19.454 "name": "BaseBdev1", 00:20:19.454 "uuid": "a4bb9d05-8073-57ac-bded-2208f051a1b8", 00:20:19.454 "is_configured": true, 00:20:19.454 "data_offset": 2048, 00:20:19.454 "data_size": 63488 00:20:19.454 }, 00:20:19.454 { 00:20:19.454 "name": "BaseBdev2", 00:20:19.454 "uuid": "44a0055d-3d45-5f13-bb70-720dad2c745b", 00:20:19.454 "is_configured": true, 00:20:19.454 "data_offset": 2048, 00:20:19.454 "data_size": 63488 00:20:19.454 }, 00:20:19.454 { 00:20:19.454 "name": "BaseBdev3", 00:20:19.454 "uuid": "7298fd34-2479-52e9-93b2-3767cb02364d", 00:20:19.454 "is_configured": true, 00:20:19.454 "data_offset": 2048, 00:20:19.454 "data_size": 63488 00:20:19.454 } 00:20:19.454 ] 00:20:19.454 }' 00:20:19.454 18:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:19.454 18:47:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.042 18:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:20:20.042 18:47:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:20.042 [2024-07-25 18:47:20.558785] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:21.044 18:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:21.302 18:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:20:21.302 18:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:20:21.302 18:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:20:21.302 18:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:21.302 18:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:21.302 18:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:21.302 18:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:21.302 18:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:21.302 18:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:21.302 18:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:21.302 18:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:21.302 18:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:21.302 18:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:21.302 18:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.302 18:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.560 18:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:21.560 "name": "raid_bdev1", 00:20:21.560 "uuid": "a77b4d8e-2cd1-4602-96a5-77ead3fe550d", 00:20:21.560 "strip_size_kb": 64, 00:20:21.560 "state": "online", 00:20:21.560 "raid_level": "concat", 00:20:21.560 "superblock": true, 00:20:21.560 "num_base_bdevs": 3, 00:20:21.560 "num_base_bdevs_discovered": 3, 00:20:21.560 "num_base_bdevs_operational": 3, 00:20:21.560 "base_bdevs_list": [ 00:20:21.560 { 00:20:21.560 "name": "BaseBdev1", 00:20:21.560 "uuid": "a4bb9d05-8073-57ac-bded-2208f051a1b8", 00:20:21.560 "is_configured": true, 00:20:21.560 "data_offset": 2048, 00:20:21.560 "data_size": 63488 00:20:21.560 }, 00:20:21.560 { 00:20:21.560 "name": "BaseBdev2", 00:20:21.560 "uuid": "44a0055d-3d45-5f13-bb70-720dad2c745b", 00:20:21.560 "is_configured": true, 00:20:21.560 "data_offset": 2048, 00:20:21.560 "data_size": 63488 00:20:21.560 }, 00:20:21.560 { 00:20:21.560 "name": "BaseBdev3", 00:20:21.560 "uuid": "7298fd34-2479-52e9-93b2-3767cb02364d", 00:20:21.560 "is_configured": true, 00:20:21.560 "data_offset": 2048, 00:20:21.560 "data_size": 63488 00:20:21.560 } 00:20:21.560 ] 00:20:21.560 }' 00:20:21.560 18:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:20:21.560 18:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.126 18:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:22.384 [2024-07-25 18:47:22.763240] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:22.384 [2024-07-25 18:47:22.763539] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:22.384 [2024-07-25 18:47:22.766238] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:22.384 [2024-07-25 18:47:22.766406] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:22.384 [2024-07-25 18:47:22.766517] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:22.384 [2024-07-25 18:47:22.766602] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name raid_bdev1, state offline 00:20:22.384 0 00:20:22.384 18:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 130213 00:20:22.384 18:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 130213 ']' 00:20:22.384 18:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 130213 00:20:22.384 18:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:20:22.384 18:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:22.384 18:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 130213 00:20:22.384 18:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:22.384 18:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:22.384 18:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 130213' 00:20:22.384 killing process with pid 130213 00:20:22.384 18:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 130213 00:20:22.384 [2024-07-25 18:47:22.813116] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:22.384 18:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 130213 00:20:22.642 [2024-07-25 18:47:23.063542] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:24.014 18:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.zFrLE3dsGx 00:20:24.014 18:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:20:24.014 18:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:20:24.272 ************************************ 00:20:24.272 END TEST raid_read_error_test 00:20:24.272 ************************************ 00:20:24.272 18:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.45 00:20:24.272 18:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:20:24.272 18:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:24.272 18:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:24.272 18:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.45 != \0\.\0\0 ]] 00:20:24.272 00:20:24.272 real 0m8.067s 
00:20:24.272 user 0m11.346s 00:20:24.272 sys 0m1.340s 00:20:24.272 18:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:24.272 18:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.272 18:47:24 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:20:24.272 18:47:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:24.272 18:47:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:24.272 18:47:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:24.272 ************************************ 00:20:24.272 START TEST raid_write_error_test 00:20:24.272 ************************************ 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev1 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev2 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev3 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:20:24.272 18:47:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.HzcqQkEx1q 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=130420 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 130420 /var/tmp/spdk-raid.sock 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 130420 ']' 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:24.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:24.272 18:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.272 [2024-07-25 18:47:24.775155] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:24.272 [2024-07-25 18:47:24.775646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130420 ] 00:20:24.530 [2024-07-25 18:47:24.962450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.788 [2024-07-25 18:47:25.211731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.045 [2024-07-25 18:47:25.479869] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:25.303 18:47:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:25.303 18:47:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:20:25.303 18:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:25.303 18:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:25.561 BaseBdev1_malloc 00:20:25.561 18:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:25.819 true 00:20:25.819 18:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:25.819 [2024-07-25 18:47:26.391284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:25.819 [2024-07-25 18:47:26.391574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.819 [2024-07-25 18:47:26.391649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:20:25.819 [2024-07-25 
18:47:26.391754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.819 [2024-07-25 18:47:26.394484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.819 [2024-07-25 18:47:26.394656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:26.077 BaseBdev1 00:20:26.077 18:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:26.077 18:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:26.335 BaseBdev2_malloc 00:20:26.335 18:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:26.335 true 00:20:26.335 18:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:26.592 [2024-07-25 18:47:27.064590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:26.592 [2024-07-25 18:47:27.064910] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.592 [2024-07-25 18:47:27.065050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:26.592 [2024-07-25 18:47:27.065159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.592 [2024-07-25 18:47:27.067818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.592 [2024-07-25 18:47:27.067976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:26.592 BaseBdev2 00:20:26.592 18:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:26.592 18:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:26.850 BaseBdev3_malloc 00:20:26.850 18:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:27.108 true 00:20:27.108 18:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:27.366 [2024-07-25 18:47:27.796378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:27.366 [2024-07-25 18:47:27.796642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.366 [2024-07-25 18:47:27.796723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:20:27.366 [2024-07-25 18:47:27.796820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.366 [2024-07-25 18:47:27.799584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.366 [2024-07-25 18:47:27.799754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:27.366 BaseBdev3 00:20:27.366 18:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:20:27.624 [2024-07-25 18:47:27.984540] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:27.624 [2024-07-25 18:47:27.986907] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:27.624 [2024-07-25 18:47:27.987116] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:27.624 [2024-07-25 18:47:27.987345] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:20:27.624 [2024-07-25 18:47:27.987396] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:27.624 [2024-07-25 18:47:27.987607] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:20:27.624 [2024-07-25 18:47:27.988053] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:20:27.624 [2024-07-25 18:47:27.988158] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013480 00:20:27.624 [2024-07-25 18:47:27.988414] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:27.625 18:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:27.625 18:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:27.625 18:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:27.625 18:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:27.625 18:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:27.625 18:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:27.625 18:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:27.625 18:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:27.625 18:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:27.625 18:47:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:27.625 18:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.625 18:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.883 18:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:27.883 "name": "raid_bdev1", 00:20:27.883 "uuid": "c1943d71-bdd9-412e-ae29-d9277a16b9e3", 00:20:27.883 "strip_size_kb": 64, 00:20:27.883 "state": "online", 00:20:27.883 "raid_level": "concat", 00:20:27.883 "superblock": true, 00:20:27.883 "num_base_bdevs": 3, 00:20:27.883 "num_base_bdevs_discovered": 3, 00:20:27.883 "num_base_bdevs_operational": 3, 00:20:27.883 "base_bdevs_list": [ 00:20:27.883 { 00:20:27.883 "name": "BaseBdev1", 00:20:27.883 "uuid": "8547bfb9-3f77-56f0-b57f-edc5eef09ec2", 00:20:27.883 "is_configured": true, 00:20:27.883 "data_offset": 2048, 00:20:27.883 "data_size": 63488 00:20:27.883 }, 00:20:27.883 { 00:20:27.883 "name": "BaseBdev2", 00:20:27.883 "uuid": "d98b8ce0-0279-5dcd-bef8-50819577450c", 00:20:27.883 "is_configured": true, 
00:20:27.883 "data_offset": 2048, 00:20:27.883 "data_size": 63488 00:20:27.883 }, 00:20:27.883 { 00:20:27.883 "name": "BaseBdev3", 00:20:27.883 "uuid": "ffe80ef8-686e-51f2-8901-683db3a1e077", 00:20:27.883 "is_configured": true, 00:20:27.883 "data_offset": 2048, 00:20:27.883 "data_size": 63488 00:20:27.883 } 00:20:27.883 ] 00:20:27.883 }' 00:20:27.883 18:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:27.883 18:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.449 18:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:20:28.449 18:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:28.449 [2024-07-25 18:47:28.858297] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:29.383 18:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:29.641 18:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:20:29.641 18:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:20:29.641 18:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:20:29.641 18:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:29.641 18:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:29.641 18:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:29.641 18:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:29.641 18:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:29.641 18:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:29.641 18:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:29.641 18:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:29.641 18:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:29.641 18:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:29.641 18:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.641 18:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.899 18:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:29.899 "name": "raid_bdev1", 00:20:29.899 "uuid": "c1943d71-bdd9-412e-ae29-d9277a16b9e3", 00:20:29.899 "strip_size_kb": 64, 00:20:29.899 "state": "online", 00:20:29.899 "raid_level": "concat", 00:20:29.899 "superblock": true, 00:20:29.899 "num_base_bdevs": 3, 00:20:29.899 "num_base_bdevs_discovered": 3, 00:20:29.899 "num_base_bdevs_operational": 3, 00:20:29.900 "base_bdevs_list": [ 00:20:29.900 { 00:20:29.900 "name": "BaseBdev1", 00:20:29.900 "uuid": "8547bfb9-3f77-56f0-b57f-edc5eef09ec2", 00:20:29.900 "is_configured": true, 
00:20:29.900 "data_offset": 2048, 00:20:29.900 "data_size": 63488 00:20:29.900 }, 00:20:29.900 { 00:20:29.900 "name": "BaseBdev2", 00:20:29.900 "uuid": "d98b8ce0-0279-5dcd-bef8-50819577450c", 00:20:29.900 "is_configured": true, 00:20:29.900 "data_offset": 2048, 00:20:29.900 "data_size": 63488 00:20:29.900 }, 00:20:29.900 { 00:20:29.900 "name": "BaseBdev3", 00:20:29.900 "uuid": "ffe80ef8-686e-51f2-8901-683db3a1e077", 00:20:29.900 "is_configured": true, 00:20:29.900 "data_offset": 2048, 00:20:29.900 "data_size": 63488 00:20:29.900 } 00:20:29.900 ] 00:20:29.900 }' 00:20:29.900 18:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:29.900 18:47:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.465 18:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:30.723 [2024-07-25 18:47:31.043479] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:30.723 [2024-07-25 18:47:31.043738] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:30.723 [2024-07-25 18:47:31.046433] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:30.723 [2024-07-25 18:47:31.046598] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.723 [2024-07-25 18:47:31.046673] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:30.723 [2024-07-25 18:47:31.046750] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name raid_bdev1, state offline 00:20:30.723 0 00:20:30.723 18:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 130420 00:20:30.723 18:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 130420 ']' 00:20:30.723 18:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 130420 00:20:30.723 18:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:20:30.723 18:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:30.723 18:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 130420 00:20:30.723 18:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:30.723 18:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:30.723 18:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 130420' 00:20:30.723 killing process with pid 130420 00:20:30.723 18:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 130420 00:20:30.723 [2024-07-25 18:47:31.096601] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:30.723 18:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 130420 00:20:30.981 [2024-07-25 18:47:31.348395] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:32.356 18:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.HzcqQkEx1q 00:20:32.356 18:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:20:32.356 18:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 
00:20:32.356 ************************************ 00:20:32.356 END TEST raid_write_error_test 00:20:32.356 ************************************ 00:20:32.356 18:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.46 00:20:32.356 18:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:20:32.356 18:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:32.356 18:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:32.356 18:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.46 != \0\.\0\0 ]] 00:20:32.356 00:20:32.356 real 0m8.227s 00:20:32.356 user 0m11.900s 00:20:32.356 sys 0m1.173s 00:20:32.356 18:47:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:32.356 18:47:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.614 18:47:32 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:20:32.614 18:47:32 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:20:32.614 18:47:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:32.614 18:47:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:32.614 18:47:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:32.614 ************************************ 00:20:32.614 START TEST raid_state_function_test 00:20:32.614 ************************************ 00:20:32.614 18:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:20:32.614 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:20:32.614 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:32.615 18:47:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=130625 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 130625' 00:20:32.615 Process raid pid: 130625 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 130625 /var/tmp/spdk-raid.sock 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 130625 ']' 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:32.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:32.615 18:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.615 [2024-07-25 18:47:33.066442] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
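Here the state-function test starts its own bdev_svc application on the raid RPC socket with bdev_raid debug logging enabled, waits for the socket to answer, and only then issues bdev_raid_create against base bdevs that do not exist yet, which is what leaves Existed_Raid in the "configuring" state shown below. A condensed sketch of that prologue, assuming this run's repo layout and socket path (the polling loop and the rpc_get_methods probe are stand-ins for the harness's waitforlisten helper), could be:

# Hedged sketch of the test prologue; the real harness uses waitforlisten and killprocess helpers.
sock=/var/tmp/spdk-raid.sock
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
raid_pid=$!

# Poll until the RPC socket responds before driving the app.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done

# Creating the raid before its base bdevs exist leaves it in the "configuring" state,
# which the test verifies through bdev_raid_get_bdevs piped into jq.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'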
00:20:32.615 [2024-07-25 18:47:33.066916] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.873 [2024-07-25 18:47:33.254047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.131 [2024-07-25 18:47:33.469955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.131 [2024-07-25 18:47:33.661498] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:33.389 18:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:33.389 18:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:20:33.389 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:33.647 [2024-07-25 18:47:34.079947] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:33.647 [2024-07-25 18:47:34.080251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:33.647 [2024-07-25 18:47:34.080353] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:33.647 [2024-07-25 18:47:34.080465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:33.647 [2024-07-25 18:47:34.080529] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:33.647 [2024-07-25 18:47:34.080575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:33.647 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:33.647 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:33.647 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:33.647 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:33.647 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:33.647 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:33.647 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:33.647 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:33.647 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:33.647 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:33.647 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.647 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.905 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:33.905 "name": "Existed_Raid", 00:20:33.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.905 
"strip_size_kb": 0, 00:20:33.905 "state": "configuring", 00:20:33.905 "raid_level": "raid1", 00:20:33.905 "superblock": false, 00:20:33.905 "num_base_bdevs": 3, 00:20:33.905 "num_base_bdevs_discovered": 0, 00:20:33.905 "num_base_bdevs_operational": 3, 00:20:33.905 "base_bdevs_list": [ 00:20:33.906 { 00:20:33.906 "name": "BaseBdev1", 00:20:33.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.906 "is_configured": false, 00:20:33.906 "data_offset": 0, 00:20:33.906 "data_size": 0 00:20:33.906 }, 00:20:33.906 { 00:20:33.906 "name": "BaseBdev2", 00:20:33.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.906 "is_configured": false, 00:20:33.906 "data_offset": 0, 00:20:33.906 "data_size": 0 00:20:33.906 }, 00:20:33.906 { 00:20:33.906 "name": "BaseBdev3", 00:20:33.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.906 "is_configured": false, 00:20:33.906 "data_offset": 0, 00:20:33.906 "data_size": 0 00:20:33.906 } 00:20:33.906 ] 00:20:33.906 }' 00:20:33.906 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:33.906 18:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.472 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:34.731 [2024-07-25 18:47:35.160036] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:34.731 [2024-07-25 18:47:35.160083] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:20:34.731 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:34.989 [2024-07-25 18:47:35.336058] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:34.989 [2024-07-25 18:47:35.336125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:34.989 [2024-07-25 18:47:35.336135] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:34.989 [2024-07-25 18:47:35.336168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:34.989 [2024-07-25 18:47:35.336175] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:34.989 [2024-07-25 18:47:35.336199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:34.989 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:35.247 [2024-07-25 18:47:35.610766] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:35.247 BaseBdev1 00:20:35.247 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:35.247 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:35.247 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:35.247 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:35.247 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- 
# [[ -z '' ]] 00:20:35.247 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:35.247 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:35.505 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:35.505 [ 00:20:35.505 { 00:20:35.505 "name": "BaseBdev1", 00:20:35.505 "aliases": [ 00:20:35.505 "28dbd92c-abac-4bbc-a7ea-95e0c55265a8" 00:20:35.505 ], 00:20:35.505 "product_name": "Malloc disk", 00:20:35.505 "block_size": 512, 00:20:35.505 "num_blocks": 65536, 00:20:35.505 "uuid": "28dbd92c-abac-4bbc-a7ea-95e0c55265a8", 00:20:35.505 "assigned_rate_limits": { 00:20:35.505 "rw_ios_per_sec": 0, 00:20:35.505 "rw_mbytes_per_sec": 0, 00:20:35.505 "r_mbytes_per_sec": 0, 00:20:35.505 "w_mbytes_per_sec": 0 00:20:35.505 }, 00:20:35.505 "claimed": true, 00:20:35.505 "claim_type": "exclusive_write", 00:20:35.505 "zoned": false, 00:20:35.505 "supported_io_types": { 00:20:35.505 "read": true, 00:20:35.505 "write": true, 00:20:35.505 "unmap": true, 00:20:35.505 "flush": true, 00:20:35.505 "reset": true, 00:20:35.505 "nvme_admin": false, 00:20:35.505 "nvme_io": false, 00:20:35.505 "nvme_io_md": false, 00:20:35.505 "write_zeroes": true, 00:20:35.505 "zcopy": true, 00:20:35.505 "get_zone_info": false, 00:20:35.505 "zone_management": false, 00:20:35.505 "zone_append": false, 00:20:35.505 "compare": false, 00:20:35.505 "compare_and_write": false, 00:20:35.505 "abort": true, 00:20:35.505 "seek_hole": false, 00:20:35.505 "seek_data": false, 00:20:35.505 "copy": true, 00:20:35.505 "nvme_iov_md": false 00:20:35.505 }, 00:20:35.505 "memory_domains": [ 00:20:35.505 { 00:20:35.505 "dma_device_id": "system", 00:20:35.505 "dma_device_type": 1 00:20:35.505 }, 00:20:35.505 { 00:20:35.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.505 "dma_device_type": 2 00:20:35.505 } 00:20:35.505 ], 00:20:35.505 "driver_specific": {} 00:20:35.505 } 00:20:35.505 ] 00:20:35.505 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:35.505 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:35.505 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:35.505 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:35.505 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:35.505 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:35.505 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:35.505 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:35.505 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:35.505 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:35.505 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:35.505 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.505 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:35.765 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:35.765 "name": "Existed_Raid", 00:20:35.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.765 "strip_size_kb": 0, 00:20:35.765 "state": "configuring", 00:20:35.765 "raid_level": "raid1", 00:20:35.765 "superblock": false, 00:20:35.765 "num_base_bdevs": 3, 00:20:35.765 "num_base_bdevs_discovered": 1, 00:20:35.765 "num_base_bdevs_operational": 3, 00:20:35.765 "base_bdevs_list": [ 00:20:35.765 { 00:20:35.765 "name": "BaseBdev1", 00:20:35.765 "uuid": "28dbd92c-abac-4bbc-a7ea-95e0c55265a8", 00:20:35.765 "is_configured": true, 00:20:35.765 "data_offset": 0, 00:20:35.765 "data_size": 65536 00:20:35.765 }, 00:20:35.765 { 00:20:35.765 "name": "BaseBdev2", 00:20:35.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.765 "is_configured": false, 00:20:35.765 "data_offset": 0, 00:20:35.765 "data_size": 0 00:20:35.765 }, 00:20:35.765 { 00:20:35.765 "name": "BaseBdev3", 00:20:35.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.765 "is_configured": false, 00:20:35.765 "data_offset": 0, 00:20:35.765 "data_size": 0 00:20:35.765 } 00:20:35.765 ] 00:20:35.765 }' 00:20:35.765 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:35.765 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.332 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:36.590 [2024-07-25 18:47:37.007122] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:36.590 [2024-07-25 18:47:37.007199] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:20:36.590 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:36.848 [2024-07-25 18:47:37.191176] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:36.848 [2024-07-25 18:47:37.193478] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:36.848 [2024-07-25 18:47:37.193558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:36.848 [2024-07-25 18:47:37.193570] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:36.848 [2024-07-25 18:47:37.193630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:36.848 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:36.848 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:36.848 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:36.848 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:36.848 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:20:36.848 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:36.848 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:36.848 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:36.848 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:36.848 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:36.848 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:36.848 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:36.848 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.848 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.105 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:37.105 "name": "Existed_Raid", 00:20:37.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.105 "strip_size_kb": 0, 00:20:37.105 "state": "configuring", 00:20:37.105 "raid_level": "raid1", 00:20:37.105 "superblock": false, 00:20:37.105 "num_base_bdevs": 3, 00:20:37.105 "num_base_bdevs_discovered": 1, 00:20:37.105 "num_base_bdevs_operational": 3, 00:20:37.105 "base_bdevs_list": [ 00:20:37.105 { 00:20:37.106 "name": "BaseBdev1", 00:20:37.106 "uuid": "28dbd92c-abac-4bbc-a7ea-95e0c55265a8", 00:20:37.106 "is_configured": true, 00:20:37.106 "data_offset": 0, 00:20:37.106 "data_size": 65536 00:20:37.106 }, 00:20:37.106 { 00:20:37.106 "name": "BaseBdev2", 00:20:37.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.106 "is_configured": false, 00:20:37.106 "data_offset": 0, 00:20:37.106 "data_size": 0 00:20:37.106 }, 00:20:37.106 { 00:20:37.106 "name": "BaseBdev3", 00:20:37.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.106 "is_configured": false, 00:20:37.106 "data_offset": 0, 00:20:37.106 "data_size": 0 00:20:37.106 } 00:20:37.106 ] 00:20:37.106 }' 00:20:37.106 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:37.106 18:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.672 18:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:37.930 [2024-07-25 18:47:38.317638] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:37.930 BaseBdev2 00:20:37.930 18:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:37.930 18:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:37.930 18:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:37.930 18:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:37.930 18:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:37.930 18:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:37.930 18:47:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:38.188 18:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:38.188 [ 00:20:38.188 { 00:20:38.188 "name": "BaseBdev2", 00:20:38.188 "aliases": [ 00:20:38.188 "71919860-f549-488d-a3d5-4c152caf0b6b" 00:20:38.188 ], 00:20:38.188 "product_name": "Malloc disk", 00:20:38.188 "block_size": 512, 00:20:38.188 "num_blocks": 65536, 00:20:38.188 "uuid": "71919860-f549-488d-a3d5-4c152caf0b6b", 00:20:38.188 "assigned_rate_limits": { 00:20:38.188 "rw_ios_per_sec": 0, 00:20:38.188 "rw_mbytes_per_sec": 0, 00:20:38.188 "r_mbytes_per_sec": 0, 00:20:38.188 "w_mbytes_per_sec": 0 00:20:38.188 }, 00:20:38.188 "claimed": true, 00:20:38.188 "claim_type": "exclusive_write", 00:20:38.188 "zoned": false, 00:20:38.188 "supported_io_types": { 00:20:38.188 "read": true, 00:20:38.188 "write": true, 00:20:38.188 "unmap": true, 00:20:38.188 "flush": true, 00:20:38.188 "reset": true, 00:20:38.188 "nvme_admin": false, 00:20:38.188 "nvme_io": false, 00:20:38.188 "nvme_io_md": false, 00:20:38.188 "write_zeroes": true, 00:20:38.188 "zcopy": true, 00:20:38.188 "get_zone_info": false, 00:20:38.188 "zone_management": false, 00:20:38.188 "zone_append": false, 00:20:38.188 "compare": false, 00:20:38.188 "compare_and_write": false, 00:20:38.188 "abort": true, 00:20:38.188 "seek_hole": false, 00:20:38.188 "seek_data": false, 00:20:38.188 "copy": true, 00:20:38.188 "nvme_iov_md": false 00:20:38.188 }, 00:20:38.188 "memory_domains": [ 00:20:38.188 { 00:20:38.188 "dma_device_id": "system", 00:20:38.188 "dma_device_type": 1 00:20:38.188 }, 00:20:38.188 { 00:20:38.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.188 "dma_device_type": 2 00:20:38.188 } 00:20:38.188 ], 00:20:38.188 "driver_specific": {} 00:20:38.188 } 00:20:38.188 ] 00:20:38.188 18:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:38.188 18:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:38.188 18:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:38.188 18:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:38.188 18:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:38.188 18:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:38.188 18:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:38.188 18:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:38.188 18:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:38.188 18:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:38.188 18:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:38.446 18:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:38.446 18:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:38.446 18:47:38 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.446 18:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:38.447 18:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:38.447 "name": "Existed_Raid", 00:20:38.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.447 "strip_size_kb": 0, 00:20:38.447 "state": "configuring", 00:20:38.447 "raid_level": "raid1", 00:20:38.447 "superblock": false, 00:20:38.447 "num_base_bdevs": 3, 00:20:38.447 "num_base_bdevs_discovered": 2, 00:20:38.447 "num_base_bdevs_operational": 3, 00:20:38.447 "base_bdevs_list": [ 00:20:38.447 { 00:20:38.447 "name": "BaseBdev1", 00:20:38.447 "uuid": "28dbd92c-abac-4bbc-a7ea-95e0c55265a8", 00:20:38.447 "is_configured": true, 00:20:38.447 "data_offset": 0, 00:20:38.447 "data_size": 65536 00:20:38.447 }, 00:20:38.447 { 00:20:38.447 "name": "BaseBdev2", 00:20:38.447 "uuid": "71919860-f549-488d-a3d5-4c152caf0b6b", 00:20:38.447 "is_configured": true, 00:20:38.447 "data_offset": 0, 00:20:38.447 "data_size": 65536 00:20:38.447 }, 00:20:38.447 { 00:20:38.447 "name": "BaseBdev3", 00:20:38.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.447 "is_configured": false, 00:20:38.447 "data_offset": 0, 00:20:38.447 "data_size": 0 00:20:38.447 } 00:20:38.447 ] 00:20:38.447 }' 00:20:38.447 18:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:38.447 18:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.013 18:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:39.269 [2024-07-25 18:47:39.682306] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:39.269 [2024-07-25 18:47:39.682382] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:20:39.269 [2024-07-25 18:47:39.682391] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:39.269 [2024-07-25 18:47:39.682507] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:39.269 [2024-07-25 18:47:39.682876] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:20:39.269 [2024-07-25 18:47:39.682886] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:20:39.269 [2024-07-25 18:47:39.683165] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:39.269 BaseBdev3 00:20:39.269 18:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:39.269 18:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:39.269 18:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:39.269 18:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:39.269 18:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:39.269 18:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:39.269 18:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:39.526 18:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:39.526 [ 00:20:39.526 { 00:20:39.526 "name": "BaseBdev3", 00:20:39.526 "aliases": [ 00:20:39.526 "cef5477b-1393-4c6f-a74b-6a5efec76543" 00:20:39.526 ], 00:20:39.526 "product_name": "Malloc disk", 00:20:39.526 "block_size": 512, 00:20:39.526 "num_blocks": 65536, 00:20:39.526 "uuid": "cef5477b-1393-4c6f-a74b-6a5efec76543", 00:20:39.526 "assigned_rate_limits": { 00:20:39.526 "rw_ios_per_sec": 0, 00:20:39.526 "rw_mbytes_per_sec": 0, 00:20:39.526 "r_mbytes_per_sec": 0, 00:20:39.526 "w_mbytes_per_sec": 0 00:20:39.526 }, 00:20:39.526 "claimed": true, 00:20:39.526 "claim_type": "exclusive_write", 00:20:39.526 "zoned": false, 00:20:39.526 "supported_io_types": { 00:20:39.526 "read": true, 00:20:39.526 "write": true, 00:20:39.526 "unmap": true, 00:20:39.526 "flush": true, 00:20:39.526 "reset": true, 00:20:39.526 "nvme_admin": false, 00:20:39.526 "nvme_io": false, 00:20:39.526 "nvme_io_md": false, 00:20:39.526 "write_zeroes": true, 00:20:39.526 "zcopy": true, 00:20:39.526 "get_zone_info": false, 00:20:39.526 "zone_management": false, 00:20:39.526 "zone_append": false, 00:20:39.526 "compare": false, 00:20:39.526 "compare_and_write": false, 00:20:39.526 "abort": true, 00:20:39.526 "seek_hole": false, 00:20:39.526 "seek_data": false, 00:20:39.526 "copy": true, 00:20:39.526 "nvme_iov_md": false 00:20:39.526 }, 00:20:39.526 "memory_domains": [ 00:20:39.526 { 00:20:39.526 "dma_device_id": "system", 00:20:39.526 "dma_device_type": 1 00:20:39.526 }, 00:20:39.526 { 00:20:39.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.526 "dma_device_type": 2 00:20:39.526 } 00:20:39.526 ], 00:20:39.526 "driver_specific": {} 00:20:39.526 } 00:20:39.526 ] 00:20:39.526 18:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:39.526 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:39.526 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:39.526 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:39.526 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:39.526 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:39.526 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:39.526 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:39.526 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:39.526 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:39.526 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:39.526 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:39.526 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:39.526 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.526 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:39.783 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:39.783 "name": "Existed_Raid", 00:20:39.783 "uuid": "28a4b98c-e3ed-4c65-9e0c-6a253f5a055a", 00:20:39.783 "strip_size_kb": 0, 00:20:39.783 "state": "online", 00:20:39.783 "raid_level": "raid1", 00:20:39.783 "superblock": false, 00:20:39.783 "num_base_bdevs": 3, 00:20:39.783 "num_base_bdevs_discovered": 3, 00:20:39.783 "num_base_bdevs_operational": 3, 00:20:39.783 "base_bdevs_list": [ 00:20:39.783 { 00:20:39.783 "name": "BaseBdev1", 00:20:39.783 "uuid": "28dbd92c-abac-4bbc-a7ea-95e0c55265a8", 00:20:39.783 "is_configured": true, 00:20:39.783 "data_offset": 0, 00:20:39.783 "data_size": 65536 00:20:39.783 }, 00:20:39.783 { 00:20:39.783 "name": "BaseBdev2", 00:20:39.783 "uuid": "71919860-f549-488d-a3d5-4c152caf0b6b", 00:20:39.783 "is_configured": true, 00:20:39.783 "data_offset": 0, 00:20:39.783 "data_size": 65536 00:20:39.783 }, 00:20:39.783 { 00:20:39.783 "name": "BaseBdev3", 00:20:39.783 "uuid": "cef5477b-1393-4c6f-a74b-6a5efec76543", 00:20:39.783 "is_configured": true, 00:20:39.783 "data_offset": 0, 00:20:39.783 "data_size": 65536 00:20:39.783 } 00:20:39.783 ] 00:20:39.783 }' 00:20:39.783 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:39.783 18:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.349 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:40.349 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:40.349 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:40.349 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:40.349 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:40.349 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:40.349 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:40.349 18:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:40.640 [2024-07-25 18:47:41.046874] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:40.640 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:40.640 "name": "Existed_Raid", 00:20:40.640 "aliases": [ 00:20:40.640 "28a4b98c-e3ed-4c65-9e0c-6a253f5a055a" 00:20:40.640 ], 00:20:40.640 "product_name": "Raid Volume", 00:20:40.640 "block_size": 512, 00:20:40.640 "num_blocks": 65536, 00:20:40.640 "uuid": "28a4b98c-e3ed-4c65-9e0c-6a253f5a055a", 00:20:40.640 "assigned_rate_limits": { 00:20:40.640 "rw_ios_per_sec": 0, 00:20:40.640 "rw_mbytes_per_sec": 0, 00:20:40.640 "r_mbytes_per_sec": 0, 00:20:40.640 "w_mbytes_per_sec": 0 00:20:40.640 }, 00:20:40.640 "claimed": false, 00:20:40.640 "zoned": false, 00:20:40.640 "supported_io_types": { 00:20:40.640 "read": true, 00:20:40.640 "write": true, 00:20:40.640 "unmap": false, 00:20:40.640 "flush": false, 00:20:40.640 "reset": true, 00:20:40.640 "nvme_admin": false, 00:20:40.640 
"nvme_io": false, 00:20:40.640 "nvme_io_md": false, 00:20:40.640 "write_zeroes": true, 00:20:40.640 "zcopy": false, 00:20:40.640 "get_zone_info": false, 00:20:40.640 "zone_management": false, 00:20:40.640 "zone_append": false, 00:20:40.640 "compare": false, 00:20:40.640 "compare_and_write": false, 00:20:40.640 "abort": false, 00:20:40.640 "seek_hole": false, 00:20:40.640 "seek_data": false, 00:20:40.640 "copy": false, 00:20:40.640 "nvme_iov_md": false 00:20:40.640 }, 00:20:40.640 "memory_domains": [ 00:20:40.640 { 00:20:40.640 "dma_device_id": "system", 00:20:40.640 "dma_device_type": 1 00:20:40.640 }, 00:20:40.640 { 00:20:40.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.640 "dma_device_type": 2 00:20:40.640 }, 00:20:40.640 { 00:20:40.640 "dma_device_id": "system", 00:20:40.640 "dma_device_type": 1 00:20:40.640 }, 00:20:40.640 { 00:20:40.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.640 "dma_device_type": 2 00:20:40.640 }, 00:20:40.640 { 00:20:40.640 "dma_device_id": "system", 00:20:40.640 "dma_device_type": 1 00:20:40.640 }, 00:20:40.640 { 00:20:40.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.640 "dma_device_type": 2 00:20:40.640 } 00:20:40.640 ], 00:20:40.640 "driver_specific": { 00:20:40.640 "raid": { 00:20:40.640 "uuid": "28a4b98c-e3ed-4c65-9e0c-6a253f5a055a", 00:20:40.640 "strip_size_kb": 0, 00:20:40.640 "state": "online", 00:20:40.640 "raid_level": "raid1", 00:20:40.640 "superblock": false, 00:20:40.640 "num_base_bdevs": 3, 00:20:40.640 "num_base_bdevs_discovered": 3, 00:20:40.640 "num_base_bdevs_operational": 3, 00:20:40.640 "base_bdevs_list": [ 00:20:40.640 { 00:20:40.640 "name": "BaseBdev1", 00:20:40.640 "uuid": "28dbd92c-abac-4bbc-a7ea-95e0c55265a8", 00:20:40.640 "is_configured": true, 00:20:40.640 "data_offset": 0, 00:20:40.640 "data_size": 65536 00:20:40.640 }, 00:20:40.640 { 00:20:40.640 "name": "BaseBdev2", 00:20:40.640 "uuid": "71919860-f549-488d-a3d5-4c152caf0b6b", 00:20:40.640 "is_configured": true, 00:20:40.640 "data_offset": 0, 00:20:40.640 "data_size": 65536 00:20:40.640 }, 00:20:40.640 { 00:20:40.640 "name": "BaseBdev3", 00:20:40.640 "uuid": "cef5477b-1393-4c6f-a74b-6a5efec76543", 00:20:40.640 "is_configured": true, 00:20:40.640 "data_offset": 0, 00:20:40.640 "data_size": 65536 00:20:40.640 } 00:20:40.640 ] 00:20:40.640 } 00:20:40.640 } 00:20:40.640 }' 00:20:40.640 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:40.640 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:40.640 BaseBdev2 00:20:40.640 BaseBdev3' 00:20:40.640 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:40.640 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:40.640 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:40.898 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:40.898 "name": "BaseBdev1", 00:20:40.898 "aliases": [ 00:20:40.898 "28dbd92c-abac-4bbc-a7ea-95e0c55265a8" 00:20:40.898 ], 00:20:40.898 "product_name": "Malloc disk", 00:20:40.898 "block_size": 512, 00:20:40.898 "num_blocks": 65536, 00:20:40.898 "uuid": "28dbd92c-abac-4bbc-a7ea-95e0c55265a8", 00:20:40.898 "assigned_rate_limits": { 00:20:40.898 "rw_ios_per_sec": 0, 
00:20:40.898 "rw_mbytes_per_sec": 0, 00:20:40.898 "r_mbytes_per_sec": 0, 00:20:40.898 "w_mbytes_per_sec": 0 00:20:40.898 }, 00:20:40.898 "claimed": true, 00:20:40.898 "claim_type": "exclusive_write", 00:20:40.898 "zoned": false, 00:20:40.898 "supported_io_types": { 00:20:40.898 "read": true, 00:20:40.898 "write": true, 00:20:40.898 "unmap": true, 00:20:40.898 "flush": true, 00:20:40.898 "reset": true, 00:20:40.898 "nvme_admin": false, 00:20:40.898 "nvme_io": false, 00:20:40.898 "nvme_io_md": false, 00:20:40.898 "write_zeroes": true, 00:20:40.898 "zcopy": true, 00:20:40.898 "get_zone_info": false, 00:20:40.898 "zone_management": false, 00:20:40.898 "zone_append": false, 00:20:40.898 "compare": false, 00:20:40.898 "compare_and_write": false, 00:20:40.898 "abort": true, 00:20:40.898 "seek_hole": false, 00:20:40.898 "seek_data": false, 00:20:40.898 "copy": true, 00:20:40.898 "nvme_iov_md": false 00:20:40.898 }, 00:20:40.898 "memory_domains": [ 00:20:40.898 { 00:20:40.898 "dma_device_id": "system", 00:20:40.898 "dma_device_type": 1 00:20:40.898 }, 00:20:40.898 { 00:20:40.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.898 "dma_device_type": 2 00:20:40.898 } 00:20:40.898 ], 00:20:40.898 "driver_specific": {} 00:20:40.898 }' 00:20:40.898 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.898 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.898 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:40.898 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:41.156 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:41.156 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:41.156 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:41.156 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:41.156 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:41.156 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:41.156 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:41.156 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:41.156 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:41.156 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:41.414 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:41.672 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:41.672 "name": "BaseBdev2", 00:20:41.672 "aliases": [ 00:20:41.672 "71919860-f549-488d-a3d5-4c152caf0b6b" 00:20:41.672 ], 00:20:41.672 "product_name": "Malloc disk", 00:20:41.672 "block_size": 512, 00:20:41.672 "num_blocks": 65536, 00:20:41.672 "uuid": "71919860-f549-488d-a3d5-4c152caf0b6b", 00:20:41.672 "assigned_rate_limits": { 00:20:41.672 "rw_ios_per_sec": 0, 00:20:41.672 "rw_mbytes_per_sec": 0, 00:20:41.672 "r_mbytes_per_sec": 0, 00:20:41.672 "w_mbytes_per_sec": 0 00:20:41.672 }, 00:20:41.672 "claimed": true, 00:20:41.672 "claim_type": "exclusive_write", 
00:20:41.672 "zoned": false, 00:20:41.672 "supported_io_types": { 00:20:41.672 "read": true, 00:20:41.672 "write": true, 00:20:41.672 "unmap": true, 00:20:41.672 "flush": true, 00:20:41.672 "reset": true, 00:20:41.672 "nvme_admin": false, 00:20:41.672 "nvme_io": false, 00:20:41.672 "nvme_io_md": false, 00:20:41.672 "write_zeroes": true, 00:20:41.672 "zcopy": true, 00:20:41.672 "get_zone_info": false, 00:20:41.672 "zone_management": false, 00:20:41.672 "zone_append": false, 00:20:41.672 "compare": false, 00:20:41.672 "compare_and_write": false, 00:20:41.672 "abort": true, 00:20:41.672 "seek_hole": false, 00:20:41.672 "seek_data": false, 00:20:41.672 "copy": true, 00:20:41.672 "nvme_iov_md": false 00:20:41.672 }, 00:20:41.672 "memory_domains": [ 00:20:41.672 { 00:20:41.672 "dma_device_id": "system", 00:20:41.672 "dma_device_type": 1 00:20:41.672 }, 00:20:41.672 { 00:20:41.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:41.672 "dma_device_type": 2 00:20:41.672 } 00:20:41.672 ], 00:20:41.672 "driver_specific": {} 00:20:41.672 }' 00:20:41.672 18:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:41.672 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:41.672 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:41.672 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:41.672 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:41.672 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:41.672 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:41.672 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:41.672 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:41.672 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:41.930 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:41.930 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:41.930 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:41.930 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:41.930 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:41.930 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:41.930 "name": "BaseBdev3", 00:20:41.930 "aliases": [ 00:20:41.930 "cef5477b-1393-4c6f-a74b-6a5efec76543" 00:20:41.930 ], 00:20:41.930 "product_name": "Malloc disk", 00:20:41.930 "block_size": 512, 00:20:41.930 "num_blocks": 65536, 00:20:41.930 "uuid": "cef5477b-1393-4c6f-a74b-6a5efec76543", 00:20:41.930 "assigned_rate_limits": { 00:20:41.930 "rw_ios_per_sec": 0, 00:20:41.930 "rw_mbytes_per_sec": 0, 00:20:41.930 "r_mbytes_per_sec": 0, 00:20:41.930 "w_mbytes_per_sec": 0 00:20:41.930 }, 00:20:41.930 "claimed": true, 00:20:41.930 "claim_type": "exclusive_write", 00:20:41.930 "zoned": false, 00:20:41.930 "supported_io_types": { 00:20:41.930 "read": true, 00:20:41.930 "write": true, 00:20:41.930 "unmap": true, 00:20:41.930 "flush": true, 00:20:41.930 "reset": 
true, 00:20:41.930 "nvme_admin": false, 00:20:41.930 "nvme_io": false, 00:20:41.930 "nvme_io_md": false, 00:20:41.930 "write_zeroes": true, 00:20:41.930 "zcopy": true, 00:20:41.930 "get_zone_info": false, 00:20:41.930 "zone_management": false, 00:20:41.930 "zone_append": false, 00:20:41.930 "compare": false, 00:20:41.930 "compare_and_write": false, 00:20:41.930 "abort": true, 00:20:41.930 "seek_hole": false, 00:20:41.931 "seek_data": false, 00:20:41.931 "copy": true, 00:20:41.931 "nvme_iov_md": false 00:20:41.931 }, 00:20:41.931 "memory_domains": [ 00:20:41.931 { 00:20:41.931 "dma_device_id": "system", 00:20:41.931 "dma_device_type": 1 00:20:41.931 }, 00:20:41.931 { 00:20:41.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:41.931 "dma_device_type": 2 00:20:41.931 } 00:20:41.931 ], 00:20:41.931 "driver_specific": {} 00:20:41.931 }' 00:20:41.931 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:42.189 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:42.189 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:42.189 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:42.189 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:42.189 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:42.189 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:42.189 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:42.189 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:42.189 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:42.447 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:42.447 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:42.447 18:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:42.705 [2024-07-25 18:47:43.097695] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:42.705 18:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:42.705 18:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:20:42.705 18:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:42.705 18:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:20:42.705 18:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:42.705 18:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:42.705 18:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:42.705 18:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:42.705 18:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:42.705 18:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:42.705 18:47:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:42.705 18:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:42.705 18:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:42.705 18:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:42.705 18:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:42.705 18:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.705 18:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:42.962 18:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:42.962 "name": "Existed_Raid", 00:20:42.963 "uuid": "28a4b98c-e3ed-4c65-9e0c-6a253f5a055a", 00:20:42.963 "strip_size_kb": 0, 00:20:42.963 "state": "online", 00:20:42.963 "raid_level": "raid1", 00:20:42.963 "superblock": false, 00:20:42.963 "num_base_bdevs": 3, 00:20:42.963 "num_base_bdevs_discovered": 2, 00:20:42.963 "num_base_bdevs_operational": 2, 00:20:42.963 "base_bdevs_list": [ 00:20:42.963 { 00:20:42.963 "name": null, 00:20:42.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.963 "is_configured": false, 00:20:42.963 "data_offset": 0, 00:20:42.963 "data_size": 65536 00:20:42.963 }, 00:20:42.963 { 00:20:42.963 "name": "BaseBdev2", 00:20:42.963 "uuid": "71919860-f549-488d-a3d5-4c152caf0b6b", 00:20:42.963 "is_configured": true, 00:20:42.963 "data_offset": 0, 00:20:42.963 "data_size": 65536 00:20:42.963 }, 00:20:42.963 { 00:20:42.963 "name": "BaseBdev3", 00:20:42.963 "uuid": "cef5477b-1393-4c6f-a74b-6a5efec76543", 00:20:42.963 "is_configured": true, 00:20:42.963 "data_offset": 0, 00:20:42.963 "data_size": 65536 00:20:42.963 } 00:20:42.963 ] 00:20:42.963 }' 00:20:42.963 18:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:42.963 18:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.527 18:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:43.527 18:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:43.527 18:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.527 18:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:43.785 18:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:43.785 18:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:43.785 18:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:44.043 [2024-07-25 18:47:44.526244] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:44.301 18:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:44.301 18:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:44.301 18:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r 
'.[0]["name"]' 00:20:44.301 18:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.301 18:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:44.301 18:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:44.301 18:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:44.558 [2024-07-25 18:47:45.039293] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:44.558 [2024-07-25 18:47:45.039412] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:44.558 [2024-07-25 18:47:45.125862] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:44.558 [2024-07-25 18:47:45.125921] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:44.558 [2024-07-25 18:47:45.125947] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:20:44.816 18:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:44.816 18:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:44.816 18:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.816 18:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:44.816 18:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:44.816 18:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:44.816 18:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:44.816 18:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:44.816 18:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:44.816 18:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:45.074 BaseBdev2 00:20:45.074 18:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:45.074 18:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:45.074 18:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:45.074 18:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:45.074 18:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:45.074 18:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:45.074 18:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:45.332 18:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:45.590 [ 00:20:45.590 { 00:20:45.590 "name": "BaseBdev2", 00:20:45.590 "aliases": [ 00:20:45.590 "69b16178-5115-4db0-8c5a-acc83c3bea5e" 00:20:45.590 ], 00:20:45.590 "product_name": "Malloc disk", 00:20:45.590 "block_size": 512, 00:20:45.590 "num_blocks": 65536, 00:20:45.590 "uuid": "69b16178-5115-4db0-8c5a-acc83c3bea5e", 00:20:45.590 "assigned_rate_limits": { 00:20:45.590 "rw_ios_per_sec": 0, 00:20:45.590 "rw_mbytes_per_sec": 0, 00:20:45.590 "r_mbytes_per_sec": 0, 00:20:45.590 "w_mbytes_per_sec": 0 00:20:45.590 }, 00:20:45.590 "claimed": false, 00:20:45.590 "zoned": false, 00:20:45.590 "supported_io_types": { 00:20:45.590 "read": true, 00:20:45.590 "write": true, 00:20:45.590 "unmap": true, 00:20:45.590 "flush": true, 00:20:45.590 "reset": true, 00:20:45.590 "nvme_admin": false, 00:20:45.590 "nvme_io": false, 00:20:45.590 "nvme_io_md": false, 00:20:45.590 "write_zeroes": true, 00:20:45.590 "zcopy": true, 00:20:45.590 "get_zone_info": false, 00:20:45.590 "zone_management": false, 00:20:45.590 "zone_append": false, 00:20:45.590 "compare": false, 00:20:45.590 "compare_and_write": false, 00:20:45.590 "abort": true, 00:20:45.590 "seek_hole": false, 00:20:45.590 "seek_data": false, 00:20:45.590 "copy": true, 00:20:45.590 "nvme_iov_md": false 00:20:45.590 }, 00:20:45.590 "memory_domains": [ 00:20:45.590 { 00:20:45.590 "dma_device_id": "system", 00:20:45.590 "dma_device_type": 1 00:20:45.590 }, 00:20:45.590 { 00:20:45.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.590 "dma_device_type": 2 00:20:45.590 } 00:20:45.590 ], 00:20:45.590 "driver_specific": {} 00:20:45.590 } 00:20:45.590 ] 00:20:45.590 18:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:45.590 18:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:45.590 18:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:45.590 18:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:45.590 BaseBdev3 00:20:45.590 18:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:45.590 18:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:45.590 18:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:45.590 18:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:45.590 18:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:45.590 18:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:45.590 18:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:45.848 18:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:46.106 [ 00:20:46.106 { 00:20:46.106 "name": "BaseBdev3", 00:20:46.106 "aliases": [ 00:20:46.106 "efd374fb-fbd6-40d8-a3bb-5b373da6b26c" 00:20:46.106 ], 00:20:46.106 "product_name": "Malloc disk", 00:20:46.106 "block_size": 512, 00:20:46.106 "num_blocks": 65536, 00:20:46.106 "uuid": 
"efd374fb-fbd6-40d8-a3bb-5b373da6b26c", 00:20:46.106 "assigned_rate_limits": { 00:20:46.106 "rw_ios_per_sec": 0, 00:20:46.106 "rw_mbytes_per_sec": 0, 00:20:46.106 "r_mbytes_per_sec": 0, 00:20:46.106 "w_mbytes_per_sec": 0 00:20:46.106 }, 00:20:46.106 "claimed": false, 00:20:46.106 "zoned": false, 00:20:46.106 "supported_io_types": { 00:20:46.106 "read": true, 00:20:46.106 "write": true, 00:20:46.106 "unmap": true, 00:20:46.106 "flush": true, 00:20:46.106 "reset": true, 00:20:46.106 "nvme_admin": false, 00:20:46.106 "nvme_io": false, 00:20:46.106 "nvme_io_md": false, 00:20:46.106 "write_zeroes": true, 00:20:46.106 "zcopy": true, 00:20:46.106 "get_zone_info": false, 00:20:46.106 "zone_management": false, 00:20:46.106 "zone_append": false, 00:20:46.106 "compare": false, 00:20:46.106 "compare_and_write": false, 00:20:46.106 "abort": true, 00:20:46.106 "seek_hole": false, 00:20:46.106 "seek_data": false, 00:20:46.106 "copy": true, 00:20:46.106 "nvme_iov_md": false 00:20:46.106 }, 00:20:46.106 "memory_domains": [ 00:20:46.106 { 00:20:46.106 "dma_device_id": "system", 00:20:46.106 "dma_device_type": 1 00:20:46.106 }, 00:20:46.106 { 00:20:46.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.106 "dma_device_type": 2 00:20:46.106 } 00:20:46.106 ], 00:20:46.106 "driver_specific": {} 00:20:46.106 } 00:20:46.106 ] 00:20:46.107 18:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:46.107 18:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:46.107 18:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:46.107 18:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:46.107 [2024-07-25 18:47:46.672562] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:46.107 [2024-07-25 18:47:46.672660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:46.107 [2024-07-25 18:47:46.672683] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:46.107 [2024-07-25 18:47:46.675018] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:46.365 18:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:46.365 18:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:46.365 18:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:46.365 18:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:46.365 18:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:46.365 18:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:46.365 18:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:46.365 18:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:46.365 18:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:46.365 18:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:20:46.365 18:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.365 18:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.365 18:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:46.365 "name": "Existed_Raid", 00:20:46.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.365 "strip_size_kb": 0, 00:20:46.365 "state": "configuring", 00:20:46.365 "raid_level": "raid1", 00:20:46.365 "superblock": false, 00:20:46.365 "num_base_bdevs": 3, 00:20:46.365 "num_base_bdevs_discovered": 2, 00:20:46.365 "num_base_bdevs_operational": 3, 00:20:46.365 "base_bdevs_list": [ 00:20:46.365 { 00:20:46.365 "name": "BaseBdev1", 00:20:46.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.365 "is_configured": false, 00:20:46.365 "data_offset": 0, 00:20:46.365 "data_size": 0 00:20:46.365 }, 00:20:46.365 { 00:20:46.365 "name": "BaseBdev2", 00:20:46.365 "uuid": "69b16178-5115-4db0-8c5a-acc83c3bea5e", 00:20:46.365 "is_configured": true, 00:20:46.365 "data_offset": 0, 00:20:46.365 "data_size": 65536 00:20:46.365 }, 00:20:46.365 { 00:20:46.365 "name": "BaseBdev3", 00:20:46.365 "uuid": "efd374fb-fbd6-40d8-a3bb-5b373da6b26c", 00:20:46.365 "is_configured": true, 00:20:46.365 "data_offset": 0, 00:20:46.365 "data_size": 65536 00:20:46.365 } 00:20:46.365 ] 00:20:46.365 }' 00:20:46.365 18:47:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:46.365 18:47:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.931 18:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:47.189 [2024-07-25 18:47:47.588714] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:47.189 18:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:47.189 18:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:47.189 18:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:47.189 18:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:47.189 18:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:47.189 18:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:47.189 18:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:47.189 18:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:47.189 18:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:47.189 18:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:47.189 18:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.189 18:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:47.447 18:47:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:47.447 "name": "Existed_Raid", 00:20:47.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.447 "strip_size_kb": 0, 00:20:47.447 "state": "configuring", 00:20:47.447 "raid_level": "raid1", 00:20:47.447 "superblock": false, 00:20:47.447 "num_base_bdevs": 3, 00:20:47.447 "num_base_bdevs_discovered": 1, 00:20:47.447 "num_base_bdevs_operational": 3, 00:20:47.447 "base_bdevs_list": [ 00:20:47.447 { 00:20:47.447 "name": "BaseBdev1", 00:20:47.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.447 "is_configured": false, 00:20:47.447 "data_offset": 0, 00:20:47.447 "data_size": 0 00:20:47.447 }, 00:20:47.447 { 00:20:47.447 "name": null, 00:20:47.447 "uuid": "69b16178-5115-4db0-8c5a-acc83c3bea5e", 00:20:47.447 "is_configured": false, 00:20:47.447 "data_offset": 0, 00:20:47.447 "data_size": 65536 00:20:47.447 }, 00:20:47.447 { 00:20:47.447 "name": "BaseBdev3", 00:20:47.447 "uuid": "efd374fb-fbd6-40d8-a3bb-5b373da6b26c", 00:20:47.447 "is_configured": true, 00:20:47.447 "data_offset": 0, 00:20:47.447 "data_size": 65536 00:20:47.447 } 00:20:47.447 ] 00:20:47.447 }' 00:20:47.447 18:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:47.447 18:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.013 18:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.013 18:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:48.271 18:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:48.271 18:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:48.529 [2024-07-25 18:47:48.926169] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:48.529 BaseBdev1 00:20:48.529 18:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:48.529 18:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:48.529 18:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:48.529 18:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:48.529 18:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:48.529 18:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:48.529 18:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:48.786 18:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:48.786 [ 00:20:48.786 { 00:20:48.786 "name": "BaseBdev1", 00:20:48.786 "aliases": [ 00:20:48.786 "0b741821-48e0-47b3-b650-e783e82fdcac" 00:20:48.786 ], 00:20:48.786 "product_name": "Malloc disk", 00:20:48.786 "block_size": 512, 00:20:48.786 "num_blocks": 65536, 00:20:48.786 "uuid": "0b741821-48e0-47b3-b650-e783e82fdcac", 00:20:48.786 "assigned_rate_limits": { 00:20:48.786 
"rw_ios_per_sec": 0, 00:20:48.786 "rw_mbytes_per_sec": 0, 00:20:48.786 "r_mbytes_per_sec": 0, 00:20:48.786 "w_mbytes_per_sec": 0 00:20:48.786 }, 00:20:48.786 "claimed": true, 00:20:48.786 "claim_type": "exclusive_write", 00:20:48.786 "zoned": false, 00:20:48.786 "supported_io_types": { 00:20:48.786 "read": true, 00:20:48.786 "write": true, 00:20:48.786 "unmap": true, 00:20:48.786 "flush": true, 00:20:48.786 "reset": true, 00:20:48.786 "nvme_admin": false, 00:20:48.786 "nvme_io": false, 00:20:48.786 "nvme_io_md": false, 00:20:48.786 "write_zeroes": true, 00:20:48.786 "zcopy": true, 00:20:48.786 "get_zone_info": false, 00:20:48.786 "zone_management": false, 00:20:48.786 "zone_append": false, 00:20:48.786 "compare": false, 00:20:48.786 "compare_and_write": false, 00:20:48.786 "abort": true, 00:20:48.786 "seek_hole": false, 00:20:48.786 "seek_data": false, 00:20:48.786 "copy": true, 00:20:48.786 "nvme_iov_md": false 00:20:48.786 }, 00:20:48.786 "memory_domains": [ 00:20:48.786 { 00:20:48.786 "dma_device_id": "system", 00:20:48.786 "dma_device_type": 1 00:20:48.786 }, 00:20:48.786 { 00:20:48.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.786 "dma_device_type": 2 00:20:48.786 } 00:20:48.786 ], 00:20:48.786 "driver_specific": {} 00:20:48.786 } 00:20:48.786 ] 00:20:48.786 18:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:48.786 18:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:48.786 18:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:48.787 18:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:48.787 18:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:48.787 18:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:48.787 18:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:48.787 18:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:48.787 18:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:48.787 18:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:48.787 18:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:49.044 18:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.044 18:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:49.044 18:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:49.044 "name": "Existed_Raid", 00:20:49.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.044 "strip_size_kb": 0, 00:20:49.044 "state": "configuring", 00:20:49.044 "raid_level": "raid1", 00:20:49.044 "superblock": false, 00:20:49.044 "num_base_bdevs": 3, 00:20:49.044 "num_base_bdevs_discovered": 2, 00:20:49.044 "num_base_bdevs_operational": 3, 00:20:49.044 "base_bdevs_list": [ 00:20:49.044 { 00:20:49.044 "name": "BaseBdev1", 00:20:49.044 "uuid": "0b741821-48e0-47b3-b650-e783e82fdcac", 00:20:49.044 "is_configured": true, 00:20:49.044 "data_offset": 0, 00:20:49.044 
"data_size": 65536 00:20:49.044 }, 00:20:49.044 { 00:20:49.044 "name": null, 00:20:49.044 "uuid": "69b16178-5115-4db0-8c5a-acc83c3bea5e", 00:20:49.044 "is_configured": false, 00:20:49.044 "data_offset": 0, 00:20:49.044 "data_size": 65536 00:20:49.044 }, 00:20:49.044 { 00:20:49.044 "name": "BaseBdev3", 00:20:49.044 "uuid": "efd374fb-fbd6-40d8-a3bb-5b373da6b26c", 00:20:49.044 "is_configured": true, 00:20:49.044 "data_offset": 0, 00:20:49.044 "data_size": 65536 00:20:49.044 } 00:20:49.044 ] 00:20:49.044 }' 00:20:49.044 18:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:49.044 18:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.638 18:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.638 18:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:49.926 18:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:49.926 18:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:50.183 [2024-07-25 18:47:50.556267] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:50.183 18:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:50.183 18:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:50.183 18:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:50.183 18:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:50.183 18:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:50.183 18:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:50.183 18:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:50.183 18:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:50.183 18:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:50.183 18:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:50.184 18:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:50.184 18:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.441 18:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:50.442 "name": "Existed_Raid", 00:20:50.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.442 "strip_size_kb": 0, 00:20:50.442 "state": "configuring", 00:20:50.442 "raid_level": "raid1", 00:20:50.442 "superblock": false, 00:20:50.442 "num_base_bdevs": 3, 00:20:50.442 "num_base_bdevs_discovered": 1, 00:20:50.442 "num_base_bdevs_operational": 3, 00:20:50.442 "base_bdevs_list": [ 00:20:50.442 { 00:20:50.442 "name": "BaseBdev1", 00:20:50.442 "uuid": "0b741821-48e0-47b3-b650-e783e82fdcac", 00:20:50.442 "is_configured": true, 
00:20:50.442 "data_offset": 0, 00:20:50.442 "data_size": 65536 00:20:50.442 }, 00:20:50.442 { 00:20:50.442 "name": null, 00:20:50.442 "uuid": "69b16178-5115-4db0-8c5a-acc83c3bea5e", 00:20:50.442 "is_configured": false, 00:20:50.442 "data_offset": 0, 00:20:50.442 "data_size": 65536 00:20:50.442 }, 00:20:50.442 { 00:20:50.442 "name": null, 00:20:50.442 "uuid": "efd374fb-fbd6-40d8-a3bb-5b373da6b26c", 00:20:50.442 "is_configured": false, 00:20:50.442 "data_offset": 0, 00:20:50.442 "data_size": 65536 00:20:50.442 } 00:20:50.442 ] 00:20:50.442 }' 00:20:50.442 18:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:50.442 18:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.092 18:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.092 18:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:51.349 18:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:51.349 18:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:51.349 [2024-07-25 18:47:51.892504] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:51.349 18:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:51.349 18:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:51.349 18:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:51.349 18:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:51.349 18:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:51.349 18:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:51.349 18:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:51.349 18:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:51.349 18:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:51.349 18:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:51.349 18:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:51.349 18:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.607 18:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:51.607 "name": "Existed_Raid", 00:20:51.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.607 "strip_size_kb": 0, 00:20:51.607 "state": "configuring", 00:20:51.607 "raid_level": "raid1", 00:20:51.607 "superblock": false, 00:20:51.607 "num_base_bdevs": 3, 00:20:51.607 "num_base_bdevs_discovered": 2, 00:20:51.607 "num_base_bdevs_operational": 3, 00:20:51.607 "base_bdevs_list": [ 00:20:51.607 { 00:20:51.607 "name": "BaseBdev1", 00:20:51.607 "uuid": 
"0b741821-48e0-47b3-b650-e783e82fdcac", 00:20:51.607 "is_configured": true, 00:20:51.607 "data_offset": 0, 00:20:51.607 "data_size": 65536 00:20:51.607 }, 00:20:51.607 { 00:20:51.607 "name": null, 00:20:51.607 "uuid": "69b16178-5115-4db0-8c5a-acc83c3bea5e", 00:20:51.607 "is_configured": false, 00:20:51.607 "data_offset": 0, 00:20:51.607 "data_size": 65536 00:20:51.607 }, 00:20:51.607 { 00:20:51.607 "name": "BaseBdev3", 00:20:51.607 "uuid": "efd374fb-fbd6-40d8-a3bb-5b373da6b26c", 00:20:51.607 "is_configured": true, 00:20:51.607 "data_offset": 0, 00:20:51.607 "data_size": 65536 00:20:51.607 } 00:20:51.607 ] 00:20:51.607 }' 00:20:51.607 18:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:51.607 18:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.541 18:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.541 18:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:52.541 18:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:52.541 18:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:52.799 [2024-07-25 18:47:53.208780] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:52.799 18:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:52.799 18:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:52.799 18:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:52.799 18:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:52.799 18:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:52.799 18:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:52.799 18:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:52.799 18:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:52.799 18:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:52.799 18:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:52.799 18:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.799 18:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.057 18:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:53.057 "name": "Existed_Raid", 00:20:53.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.057 "strip_size_kb": 0, 00:20:53.057 "state": "configuring", 00:20:53.057 "raid_level": "raid1", 00:20:53.057 "superblock": false, 00:20:53.057 "num_base_bdevs": 3, 00:20:53.057 "num_base_bdevs_discovered": 1, 00:20:53.057 "num_base_bdevs_operational": 3, 00:20:53.057 "base_bdevs_list": [ 00:20:53.057 { 00:20:53.057 
"name": null, 00:20:53.057 "uuid": "0b741821-48e0-47b3-b650-e783e82fdcac", 00:20:53.057 "is_configured": false, 00:20:53.057 "data_offset": 0, 00:20:53.057 "data_size": 65536 00:20:53.057 }, 00:20:53.057 { 00:20:53.057 "name": null, 00:20:53.057 "uuid": "69b16178-5115-4db0-8c5a-acc83c3bea5e", 00:20:53.057 "is_configured": false, 00:20:53.057 "data_offset": 0, 00:20:53.057 "data_size": 65536 00:20:53.057 }, 00:20:53.057 { 00:20:53.057 "name": "BaseBdev3", 00:20:53.057 "uuid": "efd374fb-fbd6-40d8-a3bb-5b373da6b26c", 00:20:53.057 "is_configured": true, 00:20:53.057 "data_offset": 0, 00:20:53.057 "data_size": 65536 00:20:53.057 } 00:20:53.057 ] 00:20:53.057 }' 00:20:53.057 18:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:53.057 18:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.624 18:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.624 18:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:53.883 18:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:53.883 18:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:54.141 [2024-07-25 18:47:54.597151] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:54.141 18:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:54.141 18:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:54.141 18:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:54.141 18:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:54.141 18:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:54.141 18:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:54.141 18:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:54.141 18:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:54.141 18:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:54.141 18:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:54.141 18:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.141 18:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.408 18:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:54.408 "name": "Existed_Raid", 00:20:54.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.408 "strip_size_kb": 0, 00:20:54.408 "state": "configuring", 00:20:54.408 "raid_level": "raid1", 00:20:54.408 "superblock": false, 00:20:54.408 "num_base_bdevs": 3, 00:20:54.408 "num_base_bdevs_discovered": 2, 00:20:54.408 
"num_base_bdevs_operational": 3, 00:20:54.408 "base_bdevs_list": [ 00:20:54.408 { 00:20:54.408 "name": null, 00:20:54.408 "uuid": "0b741821-48e0-47b3-b650-e783e82fdcac", 00:20:54.408 "is_configured": false, 00:20:54.408 "data_offset": 0, 00:20:54.408 "data_size": 65536 00:20:54.408 }, 00:20:54.408 { 00:20:54.408 "name": "BaseBdev2", 00:20:54.408 "uuid": "69b16178-5115-4db0-8c5a-acc83c3bea5e", 00:20:54.408 "is_configured": true, 00:20:54.408 "data_offset": 0, 00:20:54.408 "data_size": 65536 00:20:54.408 }, 00:20:54.408 { 00:20:54.408 "name": "BaseBdev3", 00:20:54.408 "uuid": "efd374fb-fbd6-40d8-a3bb-5b373da6b26c", 00:20:54.408 "is_configured": true, 00:20:54.408 "data_offset": 0, 00:20:54.408 "data_size": 65536 00:20:54.408 } 00:20:54.408 ] 00:20:54.408 }' 00:20:54.408 18:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:54.408 18:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.974 18:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.974 18:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:55.232 18:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:55.232 18:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:55.232 18:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.489 18:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 0b741821-48e0-47b3-b650-e783e82fdcac 00:20:55.746 [2024-07-25 18:47:56.210298] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:55.746 [2024-07-25 18:47:56.210619] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:20:55.746 [2024-07-25 18:47:56.210659] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:55.746 [2024-07-25 18:47:56.210891] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:55.746 [2024-07-25 18:47:56.211434] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:20:55.746 [2024-07-25 18:47:56.211544] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:20:55.746 [2024-07-25 18:47:56.211882] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.746 NewBaseBdev 00:20:55.746 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:55.746 18:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:20:55.746 18:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:55.746 18:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:55.746 18:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:55.746 18:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:55.746 
18:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:56.002 18:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:56.259 [ 00:20:56.259 { 00:20:56.259 "name": "NewBaseBdev", 00:20:56.259 "aliases": [ 00:20:56.259 "0b741821-48e0-47b3-b650-e783e82fdcac" 00:20:56.259 ], 00:20:56.259 "product_name": "Malloc disk", 00:20:56.259 "block_size": 512, 00:20:56.259 "num_blocks": 65536, 00:20:56.259 "uuid": "0b741821-48e0-47b3-b650-e783e82fdcac", 00:20:56.259 "assigned_rate_limits": { 00:20:56.259 "rw_ios_per_sec": 0, 00:20:56.259 "rw_mbytes_per_sec": 0, 00:20:56.259 "r_mbytes_per_sec": 0, 00:20:56.259 "w_mbytes_per_sec": 0 00:20:56.259 }, 00:20:56.259 "claimed": true, 00:20:56.259 "claim_type": "exclusive_write", 00:20:56.259 "zoned": false, 00:20:56.259 "supported_io_types": { 00:20:56.259 "read": true, 00:20:56.259 "write": true, 00:20:56.259 "unmap": true, 00:20:56.259 "flush": true, 00:20:56.259 "reset": true, 00:20:56.259 "nvme_admin": false, 00:20:56.259 "nvme_io": false, 00:20:56.259 "nvme_io_md": false, 00:20:56.259 "write_zeroes": true, 00:20:56.259 "zcopy": true, 00:20:56.259 "get_zone_info": false, 00:20:56.259 "zone_management": false, 00:20:56.259 "zone_append": false, 00:20:56.259 "compare": false, 00:20:56.259 "compare_and_write": false, 00:20:56.259 "abort": true, 00:20:56.259 "seek_hole": false, 00:20:56.259 "seek_data": false, 00:20:56.259 "copy": true, 00:20:56.259 "nvme_iov_md": false 00:20:56.259 }, 00:20:56.259 "memory_domains": [ 00:20:56.259 { 00:20:56.259 "dma_device_id": "system", 00:20:56.259 "dma_device_type": 1 00:20:56.259 }, 00:20:56.259 { 00:20:56.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.259 "dma_device_type": 2 00:20:56.259 } 00:20:56.259 ], 00:20:56.259 "driver_specific": {} 00:20:56.259 } 00:20:56.259 ] 00:20:56.259 18:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:56.259 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:56.259 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:56.259 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:56.259 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:56.259 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:56.259 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:56.259 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:56.259 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:56.259 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:56.259 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:56.259 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.260 18:47:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:56.260 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:56.260 "name": "Existed_Raid", 00:20:56.260 "uuid": "c0e36575-9605-4a03-bde4-4e39165649f3", 00:20:56.260 "strip_size_kb": 0, 00:20:56.260 "state": "online", 00:20:56.260 "raid_level": "raid1", 00:20:56.260 "superblock": false, 00:20:56.260 "num_base_bdevs": 3, 00:20:56.260 "num_base_bdevs_discovered": 3, 00:20:56.260 "num_base_bdevs_operational": 3, 00:20:56.260 "base_bdevs_list": [ 00:20:56.260 { 00:20:56.260 "name": "NewBaseBdev", 00:20:56.260 "uuid": "0b741821-48e0-47b3-b650-e783e82fdcac", 00:20:56.260 "is_configured": true, 00:20:56.260 "data_offset": 0, 00:20:56.260 "data_size": 65536 00:20:56.260 }, 00:20:56.260 { 00:20:56.260 "name": "BaseBdev2", 00:20:56.260 "uuid": "69b16178-5115-4db0-8c5a-acc83c3bea5e", 00:20:56.260 "is_configured": true, 00:20:56.260 "data_offset": 0, 00:20:56.260 "data_size": 65536 00:20:56.260 }, 00:20:56.260 { 00:20:56.260 "name": "BaseBdev3", 00:20:56.260 "uuid": "efd374fb-fbd6-40d8-a3bb-5b373da6b26c", 00:20:56.260 "is_configured": true, 00:20:56.260 "data_offset": 0, 00:20:56.260 "data_size": 65536 00:20:56.260 } 00:20:56.260 ] 00:20:56.260 }' 00:20:56.260 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:56.260 18:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.824 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:56.824 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:56.824 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:56.824 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:56.824 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:56.824 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:56.824 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:56.824 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:57.082 [2024-07-25 18:47:57.626870] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:57.082 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:57.082 "name": "Existed_Raid", 00:20:57.082 "aliases": [ 00:20:57.082 "c0e36575-9605-4a03-bde4-4e39165649f3" 00:20:57.082 ], 00:20:57.082 "product_name": "Raid Volume", 00:20:57.082 "block_size": 512, 00:20:57.082 "num_blocks": 65536, 00:20:57.082 "uuid": "c0e36575-9605-4a03-bde4-4e39165649f3", 00:20:57.082 "assigned_rate_limits": { 00:20:57.082 "rw_ios_per_sec": 0, 00:20:57.082 "rw_mbytes_per_sec": 0, 00:20:57.082 "r_mbytes_per_sec": 0, 00:20:57.082 "w_mbytes_per_sec": 0 00:20:57.082 }, 00:20:57.082 "claimed": false, 00:20:57.082 "zoned": false, 00:20:57.082 "supported_io_types": { 00:20:57.082 "read": true, 00:20:57.082 "write": true, 00:20:57.082 "unmap": false, 00:20:57.082 "flush": false, 00:20:57.082 "reset": true, 00:20:57.082 "nvme_admin": false, 00:20:57.082 "nvme_io": false, 00:20:57.082 "nvme_io_md": false, 00:20:57.082 "write_zeroes": true, 00:20:57.082 
"zcopy": false, 00:20:57.082 "get_zone_info": false, 00:20:57.082 "zone_management": false, 00:20:57.082 "zone_append": false, 00:20:57.082 "compare": false, 00:20:57.082 "compare_and_write": false, 00:20:57.082 "abort": false, 00:20:57.082 "seek_hole": false, 00:20:57.082 "seek_data": false, 00:20:57.082 "copy": false, 00:20:57.082 "nvme_iov_md": false 00:20:57.082 }, 00:20:57.082 "memory_domains": [ 00:20:57.082 { 00:20:57.082 "dma_device_id": "system", 00:20:57.082 "dma_device_type": 1 00:20:57.082 }, 00:20:57.082 { 00:20:57.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.082 "dma_device_type": 2 00:20:57.082 }, 00:20:57.082 { 00:20:57.082 "dma_device_id": "system", 00:20:57.082 "dma_device_type": 1 00:20:57.082 }, 00:20:57.082 { 00:20:57.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.082 "dma_device_type": 2 00:20:57.082 }, 00:20:57.082 { 00:20:57.082 "dma_device_id": "system", 00:20:57.082 "dma_device_type": 1 00:20:57.082 }, 00:20:57.082 { 00:20:57.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.082 "dma_device_type": 2 00:20:57.082 } 00:20:57.082 ], 00:20:57.082 "driver_specific": { 00:20:57.082 "raid": { 00:20:57.082 "uuid": "c0e36575-9605-4a03-bde4-4e39165649f3", 00:20:57.082 "strip_size_kb": 0, 00:20:57.082 "state": "online", 00:20:57.082 "raid_level": "raid1", 00:20:57.082 "superblock": false, 00:20:57.082 "num_base_bdevs": 3, 00:20:57.082 "num_base_bdevs_discovered": 3, 00:20:57.082 "num_base_bdevs_operational": 3, 00:20:57.082 "base_bdevs_list": [ 00:20:57.082 { 00:20:57.082 "name": "NewBaseBdev", 00:20:57.082 "uuid": "0b741821-48e0-47b3-b650-e783e82fdcac", 00:20:57.082 "is_configured": true, 00:20:57.082 "data_offset": 0, 00:20:57.082 "data_size": 65536 00:20:57.082 }, 00:20:57.082 { 00:20:57.082 "name": "BaseBdev2", 00:20:57.082 "uuid": "69b16178-5115-4db0-8c5a-acc83c3bea5e", 00:20:57.082 "is_configured": true, 00:20:57.082 "data_offset": 0, 00:20:57.082 "data_size": 65536 00:20:57.082 }, 00:20:57.082 { 00:20:57.082 "name": "BaseBdev3", 00:20:57.082 "uuid": "efd374fb-fbd6-40d8-a3bb-5b373da6b26c", 00:20:57.082 "is_configured": true, 00:20:57.082 "data_offset": 0, 00:20:57.082 "data_size": 65536 00:20:57.082 } 00:20:57.082 ] 00:20:57.082 } 00:20:57.082 } 00:20:57.082 }' 00:20:57.082 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:57.339 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:57.339 BaseBdev2 00:20:57.339 BaseBdev3' 00:20:57.339 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:57.339 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:57.339 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:57.595 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:57.596 "name": "NewBaseBdev", 00:20:57.596 "aliases": [ 00:20:57.596 "0b741821-48e0-47b3-b650-e783e82fdcac" 00:20:57.596 ], 00:20:57.596 "product_name": "Malloc disk", 00:20:57.596 "block_size": 512, 00:20:57.596 "num_blocks": 65536, 00:20:57.596 "uuid": "0b741821-48e0-47b3-b650-e783e82fdcac", 00:20:57.596 "assigned_rate_limits": { 00:20:57.596 "rw_ios_per_sec": 0, 00:20:57.596 "rw_mbytes_per_sec": 0, 00:20:57.596 "r_mbytes_per_sec": 0, 00:20:57.596 
"w_mbytes_per_sec": 0 00:20:57.596 }, 00:20:57.596 "claimed": true, 00:20:57.596 "claim_type": "exclusive_write", 00:20:57.596 "zoned": false, 00:20:57.596 "supported_io_types": { 00:20:57.596 "read": true, 00:20:57.596 "write": true, 00:20:57.596 "unmap": true, 00:20:57.596 "flush": true, 00:20:57.596 "reset": true, 00:20:57.596 "nvme_admin": false, 00:20:57.596 "nvme_io": false, 00:20:57.596 "nvme_io_md": false, 00:20:57.596 "write_zeroes": true, 00:20:57.596 "zcopy": true, 00:20:57.596 "get_zone_info": false, 00:20:57.596 "zone_management": false, 00:20:57.596 "zone_append": false, 00:20:57.596 "compare": false, 00:20:57.596 "compare_and_write": false, 00:20:57.596 "abort": true, 00:20:57.596 "seek_hole": false, 00:20:57.596 "seek_data": false, 00:20:57.596 "copy": true, 00:20:57.596 "nvme_iov_md": false 00:20:57.596 }, 00:20:57.596 "memory_domains": [ 00:20:57.596 { 00:20:57.596 "dma_device_id": "system", 00:20:57.596 "dma_device_type": 1 00:20:57.596 }, 00:20:57.596 { 00:20:57.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.596 "dma_device_type": 2 00:20:57.596 } 00:20:57.596 ], 00:20:57.596 "driver_specific": {} 00:20:57.596 }' 00:20:57.596 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:57.596 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:57.596 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:57.596 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:57.596 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:57.596 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:57.596 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.596 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.853 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:57.853 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.853 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.853 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:57.853 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:57.853 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:57.853 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:58.110 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:58.110 "name": "BaseBdev2", 00:20:58.110 "aliases": [ 00:20:58.110 "69b16178-5115-4db0-8c5a-acc83c3bea5e" 00:20:58.110 ], 00:20:58.110 "product_name": "Malloc disk", 00:20:58.110 "block_size": 512, 00:20:58.110 "num_blocks": 65536, 00:20:58.110 "uuid": "69b16178-5115-4db0-8c5a-acc83c3bea5e", 00:20:58.110 "assigned_rate_limits": { 00:20:58.110 "rw_ios_per_sec": 0, 00:20:58.110 "rw_mbytes_per_sec": 0, 00:20:58.110 "r_mbytes_per_sec": 0, 00:20:58.110 "w_mbytes_per_sec": 0 00:20:58.110 }, 00:20:58.110 "claimed": true, 00:20:58.110 "claim_type": "exclusive_write", 00:20:58.110 "zoned": false, 00:20:58.110 "supported_io_types": { 00:20:58.110 "read": 
true, 00:20:58.110 "write": true, 00:20:58.110 "unmap": true, 00:20:58.110 "flush": true, 00:20:58.110 "reset": true, 00:20:58.110 "nvme_admin": false, 00:20:58.110 "nvme_io": false, 00:20:58.110 "nvme_io_md": false, 00:20:58.110 "write_zeroes": true, 00:20:58.110 "zcopy": true, 00:20:58.110 "get_zone_info": false, 00:20:58.110 "zone_management": false, 00:20:58.110 "zone_append": false, 00:20:58.110 "compare": false, 00:20:58.110 "compare_and_write": false, 00:20:58.110 "abort": true, 00:20:58.110 "seek_hole": false, 00:20:58.110 "seek_data": false, 00:20:58.110 "copy": true, 00:20:58.110 "nvme_iov_md": false 00:20:58.110 }, 00:20:58.110 "memory_domains": [ 00:20:58.110 { 00:20:58.110 "dma_device_id": "system", 00:20:58.110 "dma_device_type": 1 00:20:58.110 }, 00:20:58.110 { 00:20:58.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.110 "dma_device_type": 2 00:20:58.110 } 00:20:58.110 ], 00:20:58.110 "driver_specific": {} 00:20:58.110 }' 00:20:58.111 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:58.111 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:58.111 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:58.111 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.111 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.368 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:58.368 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.368 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.368 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:58.368 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.368 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.368 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:58.368 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:58.368 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:58.368 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:58.625 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:58.625 "name": "BaseBdev3", 00:20:58.625 "aliases": [ 00:20:58.625 "efd374fb-fbd6-40d8-a3bb-5b373da6b26c" 00:20:58.625 ], 00:20:58.625 "product_name": "Malloc disk", 00:20:58.625 "block_size": 512, 00:20:58.625 "num_blocks": 65536, 00:20:58.625 "uuid": "efd374fb-fbd6-40d8-a3bb-5b373da6b26c", 00:20:58.625 "assigned_rate_limits": { 00:20:58.625 "rw_ios_per_sec": 0, 00:20:58.625 "rw_mbytes_per_sec": 0, 00:20:58.625 "r_mbytes_per_sec": 0, 00:20:58.625 "w_mbytes_per_sec": 0 00:20:58.625 }, 00:20:58.625 "claimed": true, 00:20:58.625 "claim_type": "exclusive_write", 00:20:58.625 "zoned": false, 00:20:58.625 "supported_io_types": { 00:20:58.625 "read": true, 00:20:58.625 "write": true, 00:20:58.625 "unmap": true, 00:20:58.625 "flush": true, 00:20:58.625 "reset": true, 00:20:58.625 "nvme_admin": false, 00:20:58.625 "nvme_io": false, 00:20:58.625 
"nvme_io_md": false, 00:20:58.625 "write_zeroes": true, 00:20:58.625 "zcopy": true, 00:20:58.625 "get_zone_info": false, 00:20:58.625 "zone_management": false, 00:20:58.625 "zone_append": false, 00:20:58.625 "compare": false, 00:20:58.625 "compare_and_write": false, 00:20:58.625 "abort": true, 00:20:58.625 "seek_hole": false, 00:20:58.625 "seek_data": false, 00:20:58.625 "copy": true, 00:20:58.625 "nvme_iov_md": false 00:20:58.625 }, 00:20:58.625 "memory_domains": [ 00:20:58.625 { 00:20:58.625 "dma_device_id": "system", 00:20:58.625 "dma_device_type": 1 00:20:58.625 }, 00:20:58.625 { 00:20:58.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.625 "dma_device_type": 2 00:20:58.625 } 00:20:58.625 ], 00:20:58.625 "driver_specific": {} 00:20:58.625 }' 00:20:58.625 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:58.625 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:58.883 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:58.883 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.883 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.883 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:58.883 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.883 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.883 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:58.883 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:59.141 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:59.141 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:59.141 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:59.399 [2024-07-25 18:47:59.774209] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:59.399 [2024-07-25 18:47:59.774416] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:59.399 [2024-07-25 18:47:59.774637] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:59.399 [2024-07-25 18:47:59.774977] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:59.399 [2024-07-25 18:47:59.774990] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:20:59.400 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 130625 00:20:59.400 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 130625 ']' 00:20:59.400 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 130625 00:20:59.400 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:20:59.400 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:59.400 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 130625 00:20:59.400 
18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:59.400 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:59.400 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 130625' 00:20:59.400 killing process with pid 130625 00:20:59.400 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 130625 00:20:59.400 [2024-07-25 18:47:59.829317] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:59.400 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 130625 00:20:59.657 [2024-07-25 18:48:00.081789] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:01.031 ************************************ 00:21:01.031 END TEST raid_state_function_test 00:21:01.031 ************************************ 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:21:01.031 00:21:01.031 real 0m28.316s 00:21:01.031 user 0m50.459s 00:21:01.031 sys 0m4.833s 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.031 18:48:01 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:21:01.031 18:48:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:01.031 18:48:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:01.031 18:48:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:01.031 ************************************ 00:21:01.031 START TEST raid_state_function_test_sb 00:21:01.031 ************************************ 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( 
i++ )) 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=131583 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 131583' 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:01.031 Process raid pid: 131583 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 131583 /var/tmp/spdk-raid.sock 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 131583 ']' 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:01.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:01.031 18:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:01.031 [2024-07-25 18:48:01.459805] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
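Note: before any RPCs are issued, each state-function test brings up its own bdev_svc app and blocks until the RPC socket answers. A minimal sketch of that startup under the same paths as this run; the pid is captured from the backgrounded process rather than hard-coded to the 131583 seen here, and waitforlisten is the helper from common/autotest_common.sh that appears in the trace:

# Launch a dedicated bdev_svc instance with raid debug logging, then wait for its RPC socket.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
echo "Process raid pid: $raid_pid"
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock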
00:21:01.031 [2024-07-25 18:48:01.460267] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.289 [2024-07-25 18:48:01.646315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.289 [2024-07-25 18:48:01.856681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.547 [2024-07-25 18:48:02.050008] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:02.112 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:02.112 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:21:02.112 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:02.112 [2024-07-25 18:48:02.635610] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:02.112 [2024-07-25 18:48:02.635877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:02.112 [2024-07-25 18:48:02.636011] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:02.112 [2024-07-25 18:48:02.636125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:02.112 [2024-07-25 18:48:02.636193] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:02.112 [2024-07-25 18:48:02.636240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:02.112 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:02.112 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:02.112 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:02.112 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:02.112 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:02.112 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:02.112 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:02.112 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:02.112 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:02.112 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:02.112 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.112 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:02.370 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:02.370 "name": "Existed_Raid", 00:21:02.370 "uuid": 
"8e52959f-cc21-4bc6-ad50-bd21628e615b", 00:21:02.370 "strip_size_kb": 0, 00:21:02.370 "state": "configuring", 00:21:02.370 "raid_level": "raid1", 00:21:02.370 "superblock": true, 00:21:02.370 "num_base_bdevs": 3, 00:21:02.370 "num_base_bdevs_discovered": 0, 00:21:02.370 "num_base_bdevs_operational": 3, 00:21:02.370 "base_bdevs_list": [ 00:21:02.370 { 00:21:02.370 "name": "BaseBdev1", 00:21:02.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.370 "is_configured": false, 00:21:02.370 "data_offset": 0, 00:21:02.370 "data_size": 0 00:21:02.370 }, 00:21:02.370 { 00:21:02.370 "name": "BaseBdev2", 00:21:02.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.370 "is_configured": false, 00:21:02.370 "data_offset": 0, 00:21:02.370 "data_size": 0 00:21:02.370 }, 00:21:02.370 { 00:21:02.370 "name": "BaseBdev3", 00:21:02.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.370 "is_configured": false, 00:21:02.370 "data_offset": 0, 00:21:02.370 "data_size": 0 00:21:02.370 } 00:21:02.370 ] 00:21:02.370 }' 00:21:02.370 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:02.370 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.936 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:03.193 [2024-07-25 18:48:03.559653] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:03.193 [2024-07-25 18:48:03.559840] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:21:03.193 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:03.451 [2024-07-25 18:48:03.827731] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:03.451 [2024-07-25 18:48:03.827993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:03.451 [2024-07-25 18:48:03.828102] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:03.451 [2024-07-25 18:48:03.828156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:03.451 [2024-07-25 18:48:03.828221] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:03.451 [2024-07-25 18:48:03.828273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:03.451 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:03.708 [2024-07-25 18:48:04.038197] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:03.708 BaseBdev1 00:21:03.708 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:03.708 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:03.708 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:03.708 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 
00:21:03.708 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:03.708 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:03.708 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:03.708 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:03.966 [ 00:21:03.966 { 00:21:03.966 "name": "BaseBdev1", 00:21:03.966 "aliases": [ 00:21:03.966 "2b82ae5f-12d8-4125-acd1-1a538baf739c" 00:21:03.966 ], 00:21:03.966 "product_name": "Malloc disk", 00:21:03.966 "block_size": 512, 00:21:03.966 "num_blocks": 65536, 00:21:03.966 "uuid": "2b82ae5f-12d8-4125-acd1-1a538baf739c", 00:21:03.966 "assigned_rate_limits": { 00:21:03.966 "rw_ios_per_sec": 0, 00:21:03.966 "rw_mbytes_per_sec": 0, 00:21:03.966 "r_mbytes_per_sec": 0, 00:21:03.966 "w_mbytes_per_sec": 0 00:21:03.966 }, 00:21:03.966 "claimed": true, 00:21:03.966 "claim_type": "exclusive_write", 00:21:03.966 "zoned": false, 00:21:03.966 "supported_io_types": { 00:21:03.966 "read": true, 00:21:03.966 "write": true, 00:21:03.966 "unmap": true, 00:21:03.966 "flush": true, 00:21:03.966 "reset": true, 00:21:03.966 "nvme_admin": false, 00:21:03.966 "nvme_io": false, 00:21:03.966 "nvme_io_md": false, 00:21:03.966 "write_zeroes": true, 00:21:03.966 "zcopy": true, 00:21:03.966 "get_zone_info": false, 00:21:03.966 "zone_management": false, 00:21:03.966 "zone_append": false, 00:21:03.966 "compare": false, 00:21:03.966 "compare_and_write": false, 00:21:03.966 "abort": true, 00:21:03.966 "seek_hole": false, 00:21:03.966 "seek_data": false, 00:21:03.966 "copy": true, 00:21:03.966 "nvme_iov_md": false 00:21:03.966 }, 00:21:03.966 "memory_domains": [ 00:21:03.966 { 00:21:03.966 "dma_device_id": "system", 00:21:03.966 "dma_device_type": 1 00:21:03.966 }, 00:21:03.966 { 00:21:03.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.966 "dma_device_type": 2 00:21:03.966 } 00:21:03.966 ], 00:21:03.966 "driver_specific": {} 00:21:03.966 } 00:21:03.966 ] 00:21:03.966 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:03.966 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:03.966 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:03.966 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:03.966 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:03.966 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:03.966 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:03.966 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:03.966 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:03.966 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:03.966 18:48:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:21:03.966 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.966 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:04.224 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:04.224 "name": "Existed_Raid", 00:21:04.224 "uuid": "8ec6aec2-86f9-4913-8c80-48e6c240620d", 00:21:04.224 "strip_size_kb": 0, 00:21:04.224 "state": "configuring", 00:21:04.224 "raid_level": "raid1", 00:21:04.224 "superblock": true, 00:21:04.224 "num_base_bdevs": 3, 00:21:04.224 "num_base_bdevs_discovered": 1, 00:21:04.224 "num_base_bdevs_operational": 3, 00:21:04.224 "base_bdevs_list": [ 00:21:04.224 { 00:21:04.224 "name": "BaseBdev1", 00:21:04.224 "uuid": "2b82ae5f-12d8-4125-acd1-1a538baf739c", 00:21:04.224 "is_configured": true, 00:21:04.224 "data_offset": 2048, 00:21:04.224 "data_size": 63488 00:21:04.224 }, 00:21:04.224 { 00:21:04.224 "name": "BaseBdev2", 00:21:04.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.224 "is_configured": false, 00:21:04.224 "data_offset": 0, 00:21:04.224 "data_size": 0 00:21:04.224 }, 00:21:04.224 { 00:21:04.224 "name": "BaseBdev3", 00:21:04.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.224 "is_configured": false, 00:21:04.224 "data_offset": 0, 00:21:04.224 "data_size": 0 00:21:04.224 } 00:21:04.224 ] 00:21:04.224 }' 00:21:04.224 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:04.224 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.790 18:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:05.047 [2024-07-25 18:48:05.526474] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:05.047 [2024-07-25 18:48:05.526670] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:21:05.047 18:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:05.305 [2024-07-25 18:48:05.798572] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:05.305 [2024-07-25 18:48:05.800963] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:05.305 [2024-07-25 18:48:05.801177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:05.305 [2024-07-25 18:48:05.801272] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:05.305 [2024-07-25 18:48:05.801353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:05.305 18:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:05.305 18:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:05.305 18:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:05.305 18:48:05 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:05.305 18:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:05.305 18:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:05.305 18:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:05.305 18:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:05.305 18:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:05.305 18:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:05.305 18:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:05.305 18:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:05.305 18:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.305 18:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:05.563 18:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:05.563 "name": "Existed_Raid", 00:21:05.563 "uuid": "f32aaf6f-1c2a-4222-8659-9797284d458b", 00:21:05.563 "strip_size_kb": 0, 00:21:05.563 "state": "configuring", 00:21:05.563 "raid_level": "raid1", 00:21:05.563 "superblock": true, 00:21:05.563 "num_base_bdevs": 3, 00:21:05.563 "num_base_bdevs_discovered": 1, 00:21:05.563 "num_base_bdevs_operational": 3, 00:21:05.563 "base_bdevs_list": [ 00:21:05.563 { 00:21:05.563 "name": "BaseBdev1", 00:21:05.563 "uuid": "2b82ae5f-12d8-4125-acd1-1a538baf739c", 00:21:05.563 "is_configured": true, 00:21:05.563 "data_offset": 2048, 00:21:05.563 "data_size": 63488 00:21:05.563 }, 00:21:05.563 { 00:21:05.563 "name": "BaseBdev2", 00:21:05.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.563 "is_configured": false, 00:21:05.563 "data_offset": 0, 00:21:05.563 "data_size": 0 00:21:05.563 }, 00:21:05.563 { 00:21:05.563 "name": "BaseBdev3", 00:21:05.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.563 "is_configured": false, 00:21:05.563 "data_offset": 0, 00:21:05.563 "data_size": 0 00:21:05.563 } 00:21:05.563 ] 00:21:05.563 }' 00:21:05.563 18:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:05.563 18:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.128 18:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:06.385 [2024-07-25 18:48:06.831841] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:06.385 BaseBdev2 00:21:06.385 18:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:06.385 18:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:06.385 18:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:06.385 18:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:06.385 18:48:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:06.385 18:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:06.385 18:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:06.643 18:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:06.900 [ 00:21:06.900 { 00:21:06.900 "name": "BaseBdev2", 00:21:06.900 "aliases": [ 00:21:06.900 "056f7ff2-9516-46b3-b14a-060edfdccf12" 00:21:06.900 ], 00:21:06.900 "product_name": "Malloc disk", 00:21:06.900 "block_size": 512, 00:21:06.900 "num_blocks": 65536, 00:21:06.900 "uuid": "056f7ff2-9516-46b3-b14a-060edfdccf12", 00:21:06.900 "assigned_rate_limits": { 00:21:06.901 "rw_ios_per_sec": 0, 00:21:06.901 "rw_mbytes_per_sec": 0, 00:21:06.901 "r_mbytes_per_sec": 0, 00:21:06.901 "w_mbytes_per_sec": 0 00:21:06.901 }, 00:21:06.901 "claimed": true, 00:21:06.901 "claim_type": "exclusive_write", 00:21:06.901 "zoned": false, 00:21:06.901 "supported_io_types": { 00:21:06.901 "read": true, 00:21:06.901 "write": true, 00:21:06.901 "unmap": true, 00:21:06.901 "flush": true, 00:21:06.901 "reset": true, 00:21:06.901 "nvme_admin": false, 00:21:06.901 "nvme_io": false, 00:21:06.901 "nvme_io_md": false, 00:21:06.901 "write_zeroes": true, 00:21:06.901 "zcopy": true, 00:21:06.901 "get_zone_info": false, 00:21:06.901 "zone_management": false, 00:21:06.901 "zone_append": false, 00:21:06.901 "compare": false, 00:21:06.901 "compare_and_write": false, 00:21:06.901 "abort": true, 00:21:06.901 "seek_hole": false, 00:21:06.901 "seek_data": false, 00:21:06.901 "copy": true, 00:21:06.901 "nvme_iov_md": false 00:21:06.901 }, 00:21:06.901 "memory_domains": [ 00:21:06.901 { 00:21:06.901 "dma_device_id": "system", 00:21:06.901 "dma_device_type": 1 00:21:06.901 }, 00:21:06.901 { 00:21:06.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.901 "dma_device_type": 2 00:21:06.901 } 00:21:06.901 ], 00:21:06.901 "driver_specific": {} 00:21:06.901 } 00:21:06.901 ] 00:21:06.901 18:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:06.901 18:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:06.901 18:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:06.901 18:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:06.901 18:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:06.901 18:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:06.901 18:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:06.901 18:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:06.901 18:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:06.901 18:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:06.901 18:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:21:06.901 18:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:06.901 18:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:06.901 18:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.901 18:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:07.158 18:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:07.158 "name": "Existed_Raid", 00:21:07.158 "uuid": "f32aaf6f-1c2a-4222-8659-9797284d458b", 00:21:07.158 "strip_size_kb": 0, 00:21:07.158 "state": "configuring", 00:21:07.158 "raid_level": "raid1", 00:21:07.158 "superblock": true, 00:21:07.158 "num_base_bdevs": 3, 00:21:07.158 "num_base_bdevs_discovered": 2, 00:21:07.158 "num_base_bdevs_operational": 3, 00:21:07.158 "base_bdevs_list": [ 00:21:07.158 { 00:21:07.158 "name": "BaseBdev1", 00:21:07.158 "uuid": "2b82ae5f-12d8-4125-acd1-1a538baf739c", 00:21:07.158 "is_configured": true, 00:21:07.158 "data_offset": 2048, 00:21:07.158 "data_size": 63488 00:21:07.158 }, 00:21:07.158 { 00:21:07.158 "name": "BaseBdev2", 00:21:07.158 "uuid": "056f7ff2-9516-46b3-b14a-060edfdccf12", 00:21:07.158 "is_configured": true, 00:21:07.158 "data_offset": 2048, 00:21:07.158 "data_size": 63488 00:21:07.158 }, 00:21:07.158 { 00:21:07.158 "name": "BaseBdev3", 00:21:07.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.158 "is_configured": false, 00:21:07.158 "data_offset": 0, 00:21:07.158 "data_size": 0 00:21:07.159 } 00:21:07.159 ] 00:21:07.159 }' 00:21:07.159 18:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:07.159 18:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.725 18:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:07.725 [2024-07-25 18:48:08.272193] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:07.725 [2024-07-25 18:48:08.272720] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:21:07.725 [2024-07-25 18:48:08.272842] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:07.725 [2024-07-25 18:48:08.272995] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:21:07.725 [2024-07-25 18:48:08.273512] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:21:07.725 [2024-07-25 18:48:08.273630] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:21:07.725 BaseBdev3 00:21:07.725 [2024-07-25 18:48:08.273878] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.725 18:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:07.725 18:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:21:07.725 18:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:07.725 18:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 
00:21:07.725 18:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:07.725 18:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:07.725 18:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:07.982 18:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:08.240 [ 00:21:08.240 { 00:21:08.240 "name": "BaseBdev3", 00:21:08.240 "aliases": [ 00:21:08.240 "234faf85-fc77-4f72-853f-cd347f2bad0e" 00:21:08.240 ], 00:21:08.240 "product_name": "Malloc disk", 00:21:08.240 "block_size": 512, 00:21:08.240 "num_blocks": 65536, 00:21:08.240 "uuid": "234faf85-fc77-4f72-853f-cd347f2bad0e", 00:21:08.240 "assigned_rate_limits": { 00:21:08.240 "rw_ios_per_sec": 0, 00:21:08.240 "rw_mbytes_per_sec": 0, 00:21:08.240 "r_mbytes_per_sec": 0, 00:21:08.240 "w_mbytes_per_sec": 0 00:21:08.240 }, 00:21:08.240 "claimed": true, 00:21:08.240 "claim_type": "exclusive_write", 00:21:08.240 "zoned": false, 00:21:08.240 "supported_io_types": { 00:21:08.240 "read": true, 00:21:08.240 "write": true, 00:21:08.240 "unmap": true, 00:21:08.240 "flush": true, 00:21:08.240 "reset": true, 00:21:08.240 "nvme_admin": false, 00:21:08.240 "nvme_io": false, 00:21:08.240 "nvme_io_md": false, 00:21:08.240 "write_zeroes": true, 00:21:08.240 "zcopy": true, 00:21:08.240 "get_zone_info": false, 00:21:08.240 "zone_management": false, 00:21:08.240 "zone_append": false, 00:21:08.240 "compare": false, 00:21:08.240 "compare_and_write": false, 00:21:08.240 "abort": true, 00:21:08.240 "seek_hole": false, 00:21:08.240 "seek_data": false, 00:21:08.240 "copy": true, 00:21:08.240 "nvme_iov_md": false 00:21:08.240 }, 00:21:08.240 "memory_domains": [ 00:21:08.240 { 00:21:08.240 "dma_device_id": "system", 00:21:08.240 "dma_device_type": 1 00:21:08.240 }, 00:21:08.240 { 00:21:08.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.240 "dma_device_type": 2 00:21:08.240 } 00:21:08.240 ], 00:21:08.240 "driver_specific": {} 00:21:08.240 } 00:21:08.240 ] 00:21:08.240 18:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:08.240 18:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:08.240 18:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:08.240 18:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:08.240 18:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:08.240 18:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:08.240 18:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:08.240 18:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:08.240 18:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:08.240 18:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:08.240 18:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:21:08.240 18:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:08.240 18:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:08.240 18:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.240 18:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:08.498 18:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:08.498 "name": "Existed_Raid", 00:21:08.498 "uuid": "f32aaf6f-1c2a-4222-8659-9797284d458b", 00:21:08.498 "strip_size_kb": 0, 00:21:08.498 "state": "online", 00:21:08.498 "raid_level": "raid1", 00:21:08.498 "superblock": true, 00:21:08.498 "num_base_bdevs": 3, 00:21:08.498 "num_base_bdevs_discovered": 3, 00:21:08.498 "num_base_bdevs_operational": 3, 00:21:08.498 "base_bdevs_list": [ 00:21:08.498 { 00:21:08.498 "name": "BaseBdev1", 00:21:08.498 "uuid": "2b82ae5f-12d8-4125-acd1-1a538baf739c", 00:21:08.498 "is_configured": true, 00:21:08.498 "data_offset": 2048, 00:21:08.498 "data_size": 63488 00:21:08.498 }, 00:21:08.498 { 00:21:08.498 "name": "BaseBdev2", 00:21:08.498 "uuid": "056f7ff2-9516-46b3-b14a-060edfdccf12", 00:21:08.498 "is_configured": true, 00:21:08.498 "data_offset": 2048, 00:21:08.498 "data_size": 63488 00:21:08.498 }, 00:21:08.498 { 00:21:08.498 "name": "BaseBdev3", 00:21:08.498 "uuid": "234faf85-fc77-4f72-853f-cd347f2bad0e", 00:21:08.498 "is_configured": true, 00:21:08.498 "data_offset": 2048, 00:21:08.498 "data_size": 63488 00:21:08.498 } 00:21:08.498 ] 00:21:08.498 }' 00:21:08.498 18:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:08.498 18:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.064 18:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:09.064 18:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:09.064 18:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:09.064 18:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:09.064 18:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:09.064 18:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:09.064 18:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:09.064 18:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:09.322 [2024-07-25 18:48:09.816744] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:09.322 18:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:09.322 "name": "Existed_Raid", 00:21:09.322 "aliases": [ 00:21:09.322 "f32aaf6f-1c2a-4222-8659-9797284d458b" 00:21:09.322 ], 00:21:09.322 "product_name": "Raid Volume", 00:21:09.322 "block_size": 512, 00:21:09.322 "num_blocks": 63488, 00:21:09.322 "uuid": "f32aaf6f-1c2a-4222-8659-9797284d458b", 00:21:09.322 "assigned_rate_limits": { 00:21:09.322 
"rw_ios_per_sec": 0, 00:21:09.322 "rw_mbytes_per_sec": 0, 00:21:09.322 "r_mbytes_per_sec": 0, 00:21:09.322 "w_mbytes_per_sec": 0 00:21:09.322 }, 00:21:09.322 "claimed": false, 00:21:09.322 "zoned": false, 00:21:09.322 "supported_io_types": { 00:21:09.322 "read": true, 00:21:09.322 "write": true, 00:21:09.322 "unmap": false, 00:21:09.322 "flush": false, 00:21:09.322 "reset": true, 00:21:09.322 "nvme_admin": false, 00:21:09.322 "nvme_io": false, 00:21:09.322 "nvme_io_md": false, 00:21:09.322 "write_zeroes": true, 00:21:09.322 "zcopy": false, 00:21:09.322 "get_zone_info": false, 00:21:09.322 "zone_management": false, 00:21:09.322 "zone_append": false, 00:21:09.322 "compare": false, 00:21:09.322 "compare_and_write": false, 00:21:09.322 "abort": false, 00:21:09.322 "seek_hole": false, 00:21:09.322 "seek_data": false, 00:21:09.322 "copy": false, 00:21:09.322 "nvme_iov_md": false 00:21:09.322 }, 00:21:09.322 "memory_domains": [ 00:21:09.322 { 00:21:09.322 "dma_device_id": "system", 00:21:09.322 "dma_device_type": 1 00:21:09.322 }, 00:21:09.322 { 00:21:09.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:09.322 "dma_device_type": 2 00:21:09.322 }, 00:21:09.322 { 00:21:09.322 "dma_device_id": "system", 00:21:09.322 "dma_device_type": 1 00:21:09.322 }, 00:21:09.322 { 00:21:09.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:09.322 "dma_device_type": 2 00:21:09.322 }, 00:21:09.322 { 00:21:09.322 "dma_device_id": "system", 00:21:09.322 "dma_device_type": 1 00:21:09.322 }, 00:21:09.322 { 00:21:09.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:09.322 "dma_device_type": 2 00:21:09.322 } 00:21:09.322 ], 00:21:09.322 "driver_specific": { 00:21:09.322 "raid": { 00:21:09.322 "uuid": "f32aaf6f-1c2a-4222-8659-9797284d458b", 00:21:09.322 "strip_size_kb": 0, 00:21:09.322 "state": "online", 00:21:09.322 "raid_level": "raid1", 00:21:09.322 "superblock": true, 00:21:09.322 "num_base_bdevs": 3, 00:21:09.322 "num_base_bdevs_discovered": 3, 00:21:09.322 "num_base_bdevs_operational": 3, 00:21:09.323 "base_bdevs_list": [ 00:21:09.323 { 00:21:09.323 "name": "BaseBdev1", 00:21:09.323 "uuid": "2b82ae5f-12d8-4125-acd1-1a538baf739c", 00:21:09.323 "is_configured": true, 00:21:09.323 "data_offset": 2048, 00:21:09.323 "data_size": 63488 00:21:09.323 }, 00:21:09.323 { 00:21:09.323 "name": "BaseBdev2", 00:21:09.323 "uuid": "056f7ff2-9516-46b3-b14a-060edfdccf12", 00:21:09.323 "is_configured": true, 00:21:09.323 "data_offset": 2048, 00:21:09.323 "data_size": 63488 00:21:09.323 }, 00:21:09.323 { 00:21:09.323 "name": "BaseBdev3", 00:21:09.323 "uuid": "234faf85-fc77-4f72-853f-cd347f2bad0e", 00:21:09.323 "is_configured": true, 00:21:09.323 "data_offset": 2048, 00:21:09.323 "data_size": 63488 00:21:09.323 } 00:21:09.323 ] 00:21:09.323 } 00:21:09.323 } 00:21:09.323 }' 00:21:09.323 18:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:09.323 18:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:09.323 BaseBdev2 00:21:09.323 BaseBdev3' 00:21:09.323 18:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:09.323 18:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:09.323 18:48:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:09.580 18:48:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:09.580 "name": "BaseBdev1", 00:21:09.580 "aliases": [ 00:21:09.580 "2b82ae5f-12d8-4125-acd1-1a538baf739c" 00:21:09.580 ], 00:21:09.580 "product_name": "Malloc disk", 00:21:09.580 "block_size": 512, 00:21:09.580 "num_blocks": 65536, 00:21:09.580 "uuid": "2b82ae5f-12d8-4125-acd1-1a538baf739c", 00:21:09.580 "assigned_rate_limits": { 00:21:09.580 "rw_ios_per_sec": 0, 00:21:09.580 "rw_mbytes_per_sec": 0, 00:21:09.580 "r_mbytes_per_sec": 0, 00:21:09.580 "w_mbytes_per_sec": 0 00:21:09.580 }, 00:21:09.580 "claimed": true, 00:21:09.580 "claim_type": "exclusive_write", 00:21:09.580 "zoned": false, 00:21:09.580 "supported_io_types": { 00:21:09.580 "read": true, 00:21:09.580 "write": true, 00:21:09.580 "unmap": true, 00:21:09.580 "flush": true, 00:21:09.580 "reset": true, 00:21:09.580 "nvme_admin": false, 00:21:09.580 "nvme_io": false, 00:21:09.580 "nvme_io_md": false, 00:21:09.580 "write_zeroes": true, 00:21:09.580 "zcopy": true, 00:21:09.580 "get_zone_info": false, 00:21:09.580 "zone_management": false, 00:21:09.580 "zone_append": false, 00:21:09.580 "compare": false, 00:21:09.580 "compare_and_write": false, 00:21:09.580 "abort": true, 00:21:09.580 "seek_hole": false, 00:21:09.580 "seek_data": false, 00:21:09.580 "copy": true, 00:21:09.580 "nvme_iov_md": false 00:21:09.580 }, 00:21:09.580 "memory_domains": [ 00:21:09.580 { 00:21:09.580 "dma_device_id": "system", 00:21:09.580 "dma_device_type": 1 00:21:09.580 }, 00:21:09.580 { 00:21:09.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:09.580 "dma_device_type": 2 00:21:09.580 } 00:21:09.580 ], 00:21:09.580 "driver_specific": {} 00:21:09.580 }' 00:21:09.838 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:09.838 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:09.838 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:09.838 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:09.838 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:09.838 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:09.838 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:09.838 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:09.838 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:09.838 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:10.095 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:10.095 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:10.095 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:10.095 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:10.095 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:10.353 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:10.353 "name": "BaseBdev2", 00:21:10.353 "aliases": [ 
00:21:10.353 "056f7ff2-9516-46b3-b14a-060edfdccf12" 00:21:10.353 ], 00:21:10.353 "product_name": "Malloc disk", 00:21:10.353 "block_size": 512, 00:21:10.353 "num_blocks": 65536, 00:21:10.353 "uuid": "056f7ff2-9516-46b3-b14a-060edfdccf12", 00:21:10.353 "assigned_rate_limits": { 00:21:10.353 "rw_ios_per_sec": 0, 00:21:10.353 "rw_mbytes_per_sec": 0, 00:21:10.353 "r_mbytes_per_sec": 0, 00:21:10.353 "w_mbytes_per_sec": 0 00:21:10.353 }, 00:21:10.353 "claimed": true, 00:21:10.353 "claim_type": "exclusive_write", 00:21:10.353 "zoned": false, 00:21:10.353 "supported_io_types": { 00:21:10.353 "read": true, 00:21:10.353 "write": true, 00:21:10.353 "unmap": true, 00:21:10.353 "flush": true, 00:21:10.353 "reset": true, 00:21:10.353 "nvme_admin": false, 00:21:10.353 "nvme_io": false, 00:21:10.353 "nvme_io_md": false, 00:21:10.353 "write_zeroes": true, 00:21:10.353 "zcopy": true, 00:21:10.353 "get_zone_info": false, 00:21:10.353 "zone_management": false, 00:21:10.353 "zone_append": false, 00:21:10.353 "compare": false, 00:21:10.353 "compare_and_write": false, 00:21:10.353 "abort": true, 00:21:10.353 "seek_hole": false, 00:21:10.353 "seek_data": false, 00:21:10.353 "copy": true, 00:21:10.353 "nvme_iov_md": false 00:21:10.353 }, 00:21:10.353 "memory_domains": [ 00:21:10.353 { 00:21:10.353 "dma_device_id": "system", 00:21:10.353 "dma_device_type": 1 00:21:10.353 }, 00:21:10.353 { 00:21:10.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.354 "dma_device_type": 2 00:21:10.354 } 00:21:10.354 ], 00:21:10.354 "driver_specific": {} 00:21:10.354 }' 00:21:10.354 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:10.354 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:10.354 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:10.354 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:10.354 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:10.354 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:10.354 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:10.611 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:10.611 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:10.611 18:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:10.611 18:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:10.611 18:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:10.611 18:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:10.612 18:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:10.612 18:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:10.869 18:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:10.869 "name": "BaseBdev3", 00:21:10.869 "aliases": [ 00:21:10.869 "234faf85-fc77-4f72-853f-cd347f2bad0e" 00:21:10.869 ], 00:21:10.869 "product_name": "Malloc disk", 00:21:10.869 "block_size": 512, 
00:21:10.869 "num_blocks": 65536, 00:21:10.869 "uuid": "234faf85-fc77-4f72-853f-cd347f2bad0e", 00:21:10.869 "assigned_rate_limits": { 00:21:10.869 "rw_ios_per_sec": 0, 00:21:10.869 "rw_mbytes_per_sec": 0, 00:21:10.869 "r_mbytes_per_sec": 0, 00:21:10.869 "w_mbytes_per_sec": 0 00:21:10.869 }, 00:21:10.869 "claimed": true, 00:21:10.869 "claim_type": "exclusive_write", 00:21:10.869 "zoned": false, 00:21:10.869 "supported_io_types": { 00:21:10.869 "read": true, 00:21:10.869 "write": true, 00:21:10.869 "unmap": true, 00:21:10.869 "flush": true, 00:21:10.869 "reset": true, 00:21:10.869 "nvme_admin": false, 00:21:10.869 "nvme_io": false, 00:21:10.869 "nvme_io_md": false, 00:21:10.869 "write_zeroes": true, 00:21:10.869 "zcopy": true, 00:21:10.869 "get_zone_info": false, 00:21:10.869 "zone_management": false, 00:21:10.869 "zone_append": false, 00:21:10.869 "compare": false, 00:21:10.869 "compare_and_write": false, 00:21:10.869 "abort": true, 00:21:10.869 "seek_hole": false, 00:21:10.869 "seek_data": false, 00:21:10.869 "copy": true, 00:21:10.869 "nvme_iov_md": false 00:21:10.869 }, 00:21:10.869 "memory_domains": [ 00:21:10.869 { 00:21:10.870 "dma_device_id": "system", 00:21:10.870 "dma_device_type": 1 00:21:10.870 }, 00:21:10.870 { 00:21:10.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.870 "dma_device_type": 2 00:21:10.870 } 00:21:10.870 ], 00:21:10.870 "driver_specific": {} 00:21:10.870 }' 00:21:10.870 18:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:10.870 18:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:10.870 18:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:10.870 18:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:11.128 18:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:11.128 18:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:11.128 18:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:11.128 18:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:11.128 18:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:11.128 18:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:11.128 18:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:11.386 18:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:11.386 18:48:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:11.386 [2024-07-25 18:48:11.960941] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:11.645 18:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:11.645 18:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:21:11.645 18:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:11.645 18:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:21:11.645 18:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:21:11.645 18:48:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:11.645 18:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:11.645 18:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:11.645 18:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:11.645 18:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:11.645 18:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:11.645 18:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:11.645 18:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:11.645 18:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:11.645 18:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:11.645 18:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.645 18:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.904 18:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:11.904 "name": "Existed_Raid", 00:21:11.904 "uuid": "f32aaf6f-1c2a-4222-8659-9797284d458b", 00:21:11.904 "strip_size_kb": 0, 00:21:11.904 "state": "online", 00:21:11.904 "raid_level": "raid1", 00:21:11.904 "superblock": true, 00:21:11.904 "num_base_bdevs": 3, 00:21:11.904 "num_base_bdevs_discovered": 2, 00:21:11.904 "num_base_bdevs_operational": 2, 00:21:11.904 "base_bdevs_list": [ 00:21:11.904 { 00:21:11.904 "name": null, 00:21:11.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.904 "is_configured": false, 00:21:11.904 "data_offset": 2048, 00:21:11.904 "data_size": 63488 00:21:11.904 }, 00:21:11.904 { 00:21:11.904 "name": "BaseBdev2", 00:21:11.904 "uuid": "056f7ff2-9516-46b3-b14a-060edfdccf12", 00:21:11.904 "is_configured": true, 00:21:11.904 "data_offset": 2048, 00:21:11.904 "data_size": 63488 00:21:11.904 }, 00:21:11.904 { 00:21:11.904 "name": "BaseBdev3", 00:21:11.904 "uuid": "234faf85-fc77-4f72-853f-cd347f2bad0e", 00:21:11.904 "is_configured": true, 00:21:11.904 "data_offset": 2048, 00:21:11.904 "data_size": 63488 00:21:11.904 } 00:21:11.904 ] 00:21:11.904 }' 00:21:11.904 18:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:11.904 18:48:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.471 18:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:12.471 18:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:12.471 18:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.471 18:48:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:12.729 18:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:12.730 18:48:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:12.730 18:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:12.987 [2024-07-25 18:48:13.414758] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:12.987 18:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:12.987 18:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:12.987 18:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.987 18:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:13.245 18:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:13.245 18:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:13.245 18:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:13.503 [2024-07-25 18:48:13.927427] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:13.503 [2024-07-25 18:48:13.927714] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:13.503 [2024-07-25 18:48:14.016170] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:13.503 [2024-07-25 18:48:14.016362] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:13.503 [2024-07-25 18:48:14.016441] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:21:13.504 18:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:13.504 18:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:13.504 18:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.504 18:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:13.762 18:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:13.762 18:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:13.762 18:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:21:13.762 18:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:13.762 18:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:13.762 18:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:14.021 BaseBdev2 00:21:14.021 18:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:14.021 18:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:14.021 18:48:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:14.021 18:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:14.021 18:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:14.021 18:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:14.021 18:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:14.287 18:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:14.287 [ 00:21:14.287 { 00:21:14.287 "name": "BaseBdev2", 00:21:14.287 "aliases": [ 00:21:14.287 "6083daea-c3dd-4a4f-abeb-0b56044dfa8d" 00:21:14.287 ], 00:21:14.287 "product_name": "Malloc disk", 00:21:14.287 "block_size": 512, 00:21:14.288 "num_blocks": 65536, 00:21:14.288 "uuid": "6083daea-c3dd-4a4f-abeb-0b56044dfa8d", 00:21:14.288 "assigned_rate_limits": { 00:21:14.288 "rw_ios_per_sec": 0, 00:21:14.288 "rw_mbytes_per_sec": 0, 00:21:14.288 "r_mbytes_per_sec": 0, 00:21:14.288 "w_mbytes_per_sec": 0 00:21:14.288 }, 00:21:14.288 "claimed": false, 00:21:14.288 "zoned": false, 00:21:14.288 "supported_io_types": { 00:21:14.288 "read": true, 00:21:14.288 "write": true, 00:21:14.288 "unmap": true, 00:21:14.288 "flush": true, 00:21:14.288 "reset": true, 00:21:14.288 "nvme_admin": false, 00:21:14.288 "nvme_io": false, 00:21:14.288 "nvme_io_md": false, 00:21:14.288 "write_zeroes": true, 00:21:14.288 "zcopy": true, 00:21:14.288 "get_zone_info": false, 00:21:14.288 "zone_management": false, 00:21:14.288 "zone_append": false, 00:21:14.288 "compare": false, 00:21:14.288 "compare_and_write": false, 00:21:14.288 "abort": true, 00:21:14.288 "seek_hole": false, 00:21:14.288 "seek_data": false, 00:21:14.288 "copy": true, 00:21:14.288 "nvme_iov_md": false 00:21:14.288 }, 00:21:14.288 "memory_domains": [ 00:21:14.288 { 00:21:14.288 "dma_device_id": "system", 00:21:14.288 "dma_device_type": 1 00:21:14.288 }, 00:21:14.288 { 00:21:14.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.288 "dma_device_type": 2 00:21:14.288 } 00:21:14.288 ], 00:21:14.288 "driver_specific": {} 00:21:14.288 } 00:21:14.288 ] 00:21:14.557 18:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:14.557 18:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:14.557 18:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:14.557 18:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:14.557 BaseBdev3 00:21:14.557 18:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:14.557 18:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:21:14.557 18:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:14.557 18:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:14.557 18:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
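The JSON block above is the descriptor returned for the freshly created BaseBdev2; the trace then repeats the same create/wait cycle for BaseBdev3. A minimal sketch of the RPC sequence being exercised, using only commands visible in the trace (socket path, sizes and bdev names are taken from the log; this assumes the SPDK target started by the test is still listening on /var/tmp/spdk-raid.sock):

  # create a 32 MB malloc bdev with 512-byte blocks (65536 blocks, matching num_blocks above)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
  # block until bdev examination has finished for all registered bdevs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
  # dump the bdev descriptor shown above, waiting up to 2000 ms for the bdev to appear
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000

Later in the trace these base bdevs are combined into the RAID volume under test with
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
where -r raid1 selects the RAID level and -s requests the on-disk superblock (reflected as "superblock": true in the Existed_Raid descriptors).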
00:21:14.557 18:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:14.557 18:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:14.815 18:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:15.073 [ 00:21:15.073 { 00:21:15.073 "name": "BaseBdev3", 00:21:15.073 "aliases": [ 00:21:15.073 "d3e6ff51-d855-473f-b944-66a2356b0759" 00:21:15.073 ], 00:21:15.073 "product_name": "Malloc disk", 00:21:15.073 "block_size": 512, 00:21:15.073 "num_blocks": 65536, 00:21:15.073 "uuid": "d3e6ff51-d855-473f-b944-66a2356b0759", 00:21:15.073 "assigned_rate_limits": { 00:21:15.073 "rw_ios_per_sec": 0, 00:21:15.073 "rw_mbytes_per_sec": 0, 00:21:15.073 "r_mbytes_per_sec": 0, 00:21:15.073 "w_mbytes_per_sec": 0 00:21:15.073 }, 00:21:15.073 "claimed": false, 00:21:15.073 "zoned": false, 00:21:15.073 "supported_io_types": { 00:21:15.073 "read": true, 00:21:15.073 "write": true, 00:21:15.073 "unmap": true, 00:21:15.073 "flush": true, 00:21:15.073 "reset": true, 00:21:15.073 "nvme_admin": false, 00:21:15.073 "nvme_io": false, 00:21:15.073 "nvme_io_md": false, 00:21:15.073 "write_zeroes": true, 00:21:15.073 "zcopy": true, 00:21:15.073 "get_zone_info": false, 00:21:15.073 "zone_management": false, 00:21:15.073 "zone_append": false, 00:21:15.073 "compare": false, 00:21:15.073 "compare_and_write": false, 00:21:15.073 "abort": true, 00:21:15.073 "seek_hole": false, 00:21:15.073 "seek_data": false, 00:21:15.073 "copy": true, 00:21:15.073 "nvme_iov_md": false 00:21:15.073 }, 00:21:15.073 "memory_domains": [ 00:21:15.073 { 00:21:15.073 "dma_device_id": "system", 00:21:15.073 "dma_device_type": 1 00:21:15.073 }, 00:21:15.073 { 00:21:15.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.073 "dma_device_type": 2 00:21:15.073 } 00:21:15.073 ], 00:21:15.073 "driver_specific": {} 00:21:15.073 } 00:21:15.073 ] 00:21:15.073 18:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:15.073 18:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:15.073 18:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:15.073 18:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:15.073 [2024-07-25 18:48:15.606060] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:15.073 [2024-07-25 18:48:15.606307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:15.073 [2024-07-25 18:48:15.606458] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:15.073 [2024-07-25 18:48:15.608757] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:15.073 18:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:15.073 18:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:15.073 18:48:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:15.073 18:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:15.073 18:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:15.073 18:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:15.073 18:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:15.073 18:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:15.073 18:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:15.073 18:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:15.073 18:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.073 18:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:15.331 18:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:15.331 "name": "Existed_Raid", 00:21:15.331 "uuid": "231f8512-9557-45d4-921a-76dd1f7a6b99", 00:21:15.331 "strip_size_kb": 0, 00:21:15.331 "state": "configuring", 00:21:15.331 "raid_level": "raid1", 00:21:15.331 "superblock": true, 00:21:15.331 "num_base_bdevs": 3, 00:21:15.331 "num_base_bdevs_discovered": 2, 00:21:15.331 "num_base_bdevs_operational": 3, 00:21:15.331 "base_bdevs_list": [ 00:21:15.331 { 00:21:15.331 "name": "BaseBdev1", 00:21:15.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.331 "is_configured": false, 00:21:15.331 "data_offset": 0, 00:21:15.331 "data_size": 0 00:21:15.331 }, 00:21:15.331 { 00:21:15.331 "name": "BaseBdev2", 00:21:15.331 "uuid": "6083daea-c3dd-4a4f-abeb-0b56044dfa8d", 00:21:15.331 "is_configured": true, 00:21:15.331 "data_offset": 2048, 00:21:15.331 "data_size": 63488 00:21:15.331 }, 00:21:15.331 { 00:21:15.331 "name": "BaseBdev3", 00:21:15.331 "uuid": "d3e6ff51-d855-473f-b944-66a2356b0759", 00:21:15.331 "is_configured": true, 00:21:15.331 "data_offset": 2048, 00:21:15.331 "data_size": 63488 00:21:15.331 } 00:21:15.331 ] 00:21:15.331 }' 00:21:15.331 18:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:15.331 18:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.898 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:16.156 [2024-07-25 18:48:16.614236] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:16.156 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:16.156 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:16.156 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:16.156 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:16.156 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:16.156 18:48:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:16.156 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:16.156 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:16.156 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:16.156 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:16.156 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.156 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:16.414 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:16.414 "name": "Existed_Raid", 00:21:16.414 "uuid": "231f8512-9557-45d4-921a-76dd1f7a6b99", 00:21:16.414 "strip_size_kb": 0, 00:21:16.414 "state": "configuring", 00:21:16.414 "raid_level": "raid1", 00:21:16.414 "superblock": true, 00:21:16.414 "num_base_bdevs": 3, 00:21:16.414 "num_base_bdevs_discovered": 1, 00:21:16.414 "num_base_bdevs_operational": 3, 00:21:16.414 "base_bdevs_list": [ 00:21:16.414 { 00:21:16.414 "name": "BaseBdev1", 00:21:16.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.414 "is_configured": false, 00:21:16.414 "data_offset": 0, 00:21:16.414 "data_size": 0 00:21:16.414 }, 00:21:16.414 { 00:21:16.414 "name": null, 00:21:16.414 "uuid": "6083daea-c3dd-4a4f-abeb-0b56044dfa8d", 00:21:16.414 "is_configured": false, 00:21:16.414 "data_offset": 2048, 00:21:16.414 "data_size": 63488 00:21:16.414 }, 00:21:16.414 { 00:21:16.414 "name": "BaseBdev3", 00:21:16.414 "uuid": "d3e6ff51-d855-473f-b944-66a2356b0759", 00:21:16.414 "is_configured": true, 00:21:16.414 "data_offset": 2048, 00:21:16.414 "data_size": 63488 00:21:16.414 } 00:21:16.414 ] 00:21:16.414 }' 00:21:16.414 18:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:16.414 18:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.981 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.981 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:16.981 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:16.981 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:17.239 [2024-07-25 18:48:17.728136] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:17.239 BaseBdev1 00:21:17.239 18:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:17.239 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:17.240 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:17.240 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:17.240 18:48:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:17.240 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:17.240 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:17.498 18:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:17.756 [ 00:21:17.756 { 00:21:17.756 "name": "BaseBdev1", 00:21:17.756 "aliases": [ 00:21:17.756 "a7006a93-1a41-40d0-a333-6ca8943cd2a5" 00:21:17.756 ], 00:21:17.756 "product_name": "Malloc disk", 00:21:17.756 "block_size": 512, 00:21:17.756 "num_blocks": 65536, 00:21:17.756 "uuid": "a7006a93-1a41-40d0-a333-6ca8943cd2a5", 00:21:17.756 "assigned_rate_limits": { 00:21:17.756 "rw_ios_per_sec": 0, 00:21:17.756 "rw_mbytes_per_sec": 0, 00:21:17.756 "r_mbytes_per_sec": 0, 00:21:17.756 "w_mbytes_per_sec": 0 00:21:17.756 }, 00:21:17.756 "claimed": true, 00:21:17.756 "claim_type": "exclusive_write", 00:21:17.756 "zoned": false, 00:21:17.756 "supported_io_types": { 00:21:17.756 "read": true, 00:21:17.756 "write": true, 00:21:17.756 "unmap": true, 00:21:17.756 "flush": true, 00:21:17.756 "reset": true, 00:21:17.756 "nvme_admin": false, 00:21:17.756 "nvme_io": false, 00:21:17.756 "nvme_io_md": false, 00:21:17.756 "write_zeroes": true, 00:21:17.756 "zcopy": true, 00:21:17.756 "get_zone_info": false, 00:21:17.756 "zone_management": false, 00:21:17.756 "zone_append": false, 00:21:17.756 "compare": false, 00:21:17.756 "compare_and_write": false, 00:21:17.756 "abort": true, 00:21:17.756 "seek_hole": false, 00:21:17.756 "seek_data": false, 00:21:17.756 "copy": true, 00:21:17.756 "nvme_iov_md": false 00:21:17.756 }, 00:21:17.756 "memory_domains": [ 00:21:17.756 { 00:21:17.756 "dma_device_id": "system", 00:21:17.756 "dma_device_type": 1 00:21:17.756 }, 00:21:17.756 { 00:21:17.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.756 "dma_device_type": 2 00:21:17.756 } 00:21:17.756 ], 00:21:17.756 "driver_specific": {} 00:21:17.756 } 00:21:17.756 ] 00:21:17.756 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:17.756 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:17.756 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:17.756 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:17.756 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:17.756 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:17.756 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:17.756 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:17.756 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:17.756 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:17.756 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
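At this point BaseBdev1 has been created and claimed, and the test re-enters verify_raid_bdev_state to confirm that Existed_Raid is back in the "configuring" state with 3 operational base bdevs. A condensed sketch of what that check amounts to, built only from the rpc.py and jq invocations visible in the trace (the helper itself is defined in the bdev_raid.sh test script being traced; field names are the ones shown in the JSON below):

  # fetch the descriptor of the RAID bdev under test
  info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
           | jq -r '.[] | select(.name == "Existed_Raid")')
  # compare the reported fields against the expected values for this step
  [[ $(jq -r .state <<< "$info") == configuring ]]
  [[ $(jq -r .raid_level <<< "$info") == raid1 ]]
  [[ $(jq -r .num_base_bdevs_operational <<< "$info") == 3 ]]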
00:21:17.756 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.756 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.015 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:18.015 "name": "Existed_Raid", 00:21:18.015 "uuid": "231f8512-9557-45d4-921a-76dd1f7a6b99", 00:21:18.015 "strip_size_kb": 0, 00:21:18.015 "state": "configuring", 00:21:18.015 "raid_level": "raid1", 00:21:18.015 "superblock": true, 00:21:18.015 "num_base_bdevs": 3, 00:21:18.015 "num_base_bdevs_discovered": 2, 00:21:18.015 "num_base_bdevs_operational": 3, 00:21:18.015 "base_bdevs_list": [ 00:21:18.015 { 00:21:18.015 "name": "BaseBdev1", 00:21:18.015 "uuid": "a7006a93-1a41-40d0-a333-6ca8943cd2a5", 00:21:18.015 "is_configured": true, 00:21:18.015 "data_offset": 2048, 00:21:18.015 "data_size": 63488 00:21:18.015 }, 00:21:18.015 { 00:21:18.015 "name": null, 00:21:18.015 "uuid": "6083daea-c3dd-4a4f-abeb-0b56044dfa8d", 00:21:18.015 "is_configured": false, 00:21:18.015 "data_offset": 2048, 00:21:18.015 "data_size": 63488 00:21:18.015 }, 00:21:18.015 { 00:21:18.015 "name": "BaseBdev3", 00:21:18.015 "uuid": "d3e6ff51-d855-473f-b944-66a2356b0759", 00:21:18.015 "is_configured": true, 00:21:18.015 "data_offset": 2048, 00:21:18.015 "data_size": 63488 00:21:18.015 } 00:21:18.015 ] 00:21:18.015 }' 00:21:18.015 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:18.015 18:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.582 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.582 18:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:18.840 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:18.840 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:19.098 [2024-07-25 18:48:19.528501] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:19.098 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:19.098 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:19.098 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:19.098 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:19.098 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:19.098 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:19.098 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:19.098 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:19.098 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:21:19.098 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:19.098 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.098 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:19.357 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:19.357 "name": "Existed_Raid", 00:21:19.357 "uuid": "231f8512-9557-45d4-921a-76dd1f7a6b99", 00:21:19.357 "strip_size_kb": 0, 00:21:19.357 "state": "configuring", 00:21:19.357 "raid_level": "raid1", 00:21:19.357 "superblock": true, 00:21:19.357 "num_base_bdevs": 3, 00:21:19.357 "num_base_bdevs_discovered": 1, 00:21:19.357 "num_base_bdevs_operational": 3, 00:21:19.357 "base_bdevs_list": [ 00:21:19.357 { 00:21:19.357 "name": "BaseBdev1", 00:21:19.357 "uuid": "a7006a93-1a41-40d0-a333-6ca8943cd2a5", 00:21:19.357 "is_configured": true, 00:21:19.357 "data_offset": 2048, 00:21:19.357 "data_size": 63488 00:21:19.357 }, 00:21:19.357 { 00:21:19.357 "name": null, 00:21:19.357 "uuid": "6083daea-c3dd-4a4f-abeb-0b56044dfa8d", 00:21:19.357 "is_configured": false, 00:21:19.357 "data_offset": 2048, 00:21:19.357 "data_size": 63488 00:21:19.357 }, 00:21:19.357 { 00:21:19.357 "name": null, 00:21:19.357 "uuid": "d3e6ff51-d855-473f-b944-66a2356b0759", 00:21:19.357 "is_configured": false, 00:21:19.357 "data_offset": 2048, 00:21:19.357 "data_size": 63488 00:21:19.357 } 00:21:19.357 ] 00:21:19.357 }' 00:21:19.357 18:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:19.357 18:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.923 18:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.923 18:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:20.181 18:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:20.181 18:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:20.439 [2024-07-25 18:48:20.856738] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:20.439 18:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:20.439 18:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:20.439 18:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:20.439 18:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:20.439 18:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:20.439 18:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:20.439 18:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:20.439 18:48:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:20.439 18:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:20.439 18:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:20.439 18:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.439 18:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:20.697 18:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:20.697 "name": "Existed_Raid", 00:21:20.698 "uuid": "231f8512-9557-45d4-921a-76dd1f7a6b99", 00:21:20.698 "strip_size_kb": 0, 00:21:20.698 "state": "configuring", 00:21:20.698 "raid_level": "raid1", 00:21:20.698 "superblock": true, 00:21:20.698 "num_base_bdevs": 3, 00:21:20.698 "num_base_bdevs_discovered": 2, 00:21:20.698 "num_base_bdevs_operational": 3, 00:21:20.698 "base_bdevs_list": [ 00:21:20.698 { 00:21:20.698 "name": "BaseBdev1", 00:21:20.698 "uuid": "a7006a93-1a41-40d0-a333-6ca8943cd2a5", 00:21:20.698 "is_configured": true, 00:21:20.698 "data_offset": 2048, 00:21:20.698 "data_size": 63488 00:21:20.698 }, 00:21:20.698 { 00:21:20.698 "name": null, 00:21:20.698 "uuid": "6083daea-c3dd-4a4f-abeb-0b56044dfa8d", 00:21:20.698 "is_configured": false, 00:21:20.698 "data_offset": 2048, 00:21:20.698 "data_size": 63488 00:21:20.698 }, 00:21:20.698 { 00:21:20.698 "name": "BaseBdev3", 00:21:20.698 "uuid": "d3e6ff51-d855-473f-b944-66a2356b0759", 00:21:20.698 "is_configured": true, 00:21:20.698 "data_offset": 2048, 00:21:20.698 "data_size": 63488 00:21:20.698 } 00:21:20.698 ] 00:21:20.698 }' 00:21:20.698 18:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:20.698 18:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:21.264 18:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.264 18:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:21.522 18:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:21.522 18:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:21.780 [2024-07-25 18:48:22.116997] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:21.780 18:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:21.780 18:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:21.780 18:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:21.780 18:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:21.780 18:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:21.780 18:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:21.780 18:48:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:21.780 18:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:21.780 18:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:21.780 18:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:21.780 18:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.780 18:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:22.038 18:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:22.038 "name": "Existed_Raid", 00:21:22.038 "uuid": "231f8512-9557-45d4-921a-76dd1f7a6b99", 00:21:22.038 "strip_size_kb": 0, 00:21:22.038 "state": "configuring", 00:21:22.038 "raid_level": "raid1", 00:21:22.038 "superblock": true, 00:21:22.038 "num_base_bdevs": 3, 00:21:22.038 "num_base_bdevs_discovered": 1, 00:21:22.038 "num_base_bdevs_operational": 3, 00:21:22.038 "base_bdevs_list": [ 00:21:22.038 { 00:21:22.038 "name": null, 00:21:22.038 "uuid": "a7006a93-1a41-40d0-a333-6ca8943cd2a5", 00:21:22.038 "is_configured": false, 00:21:22.038 "data_offset": 2048, 00:21:22.038 "data_size": 63488 00:21:22.038 }, 00:21:22.038 { 00:21:22.038 "name": null, 00:21:22.038 "uuid": "6083daea-c3dd-4a4f-abeb-0b56044dfa8d", 00:21:22.038 "is_configured": false, 00:21:22.038 "data_offset": 2048, 00:21:22.038 "data_size": 63488 00:21:22.038 }, 00:21:22.038 { 00:21:22.038 "name": "BaseBdev3", 00:21:22.038 "uuid": "d3e6ff51-d855-473f-b944-66a2356b0759", 00:21:22.038 "is_configured": true, 00:21:22.038 "data_offset": 2048, 00:21:22.038 "data_size": 63488 00:21:22.038 } 00:21:22.038 ] 00:21:22.038 }' 00:21:22.038 18:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:22.038 18:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.605 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.605 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:22.863 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:22.863 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:23.122 [2024-07-25 18:48:23.620741] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:23.122 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:23.122 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:23.122 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:23.122 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:23.122 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:23.122 18:48:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:23.122 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:23.122 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:23.122 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:23.122 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:23.122 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.122 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:23.380 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:23.380 "name": "Existed_Raid", 00:21:23.380 "uuid": "231f8512-9557-45d4-921a-76dd1f7a6b99", 00:21:23.380 "strip_size_kb": 0, 00:21:23.380 "state": "configuring", 00:21:23.380 "raid_level": "raid1", 00:21:23.380 "superblock": true, 00:21:23.380 "num_base_bdevs": 3, 00:21:23.380 "num_base_bdevs_discovered": 2, 00:21:23.380 "num_base_bdevs_operational": 3, 00:21:23.380 "base_bdevs_list": [ 00:21:23.380 { 00:21:23.380 "name": null, 00:21:23.380 "uuid": "a7006a93-1a41-40d0-a333-6ca8943cd2a5", 00:21:23.380 "is_configured": false, 00:21:23.380 "data_offset": 2048, 00:21:23.380 "data_size": 63488 00:21:23.380 }, 00:21:23.380 { 00:21:23.380 "name": "BaseBdev2", 00:21:23.380 "uuid": "6083daea-c3dd-4a4f-abeb-0b56044dfa8d", 00:21:23.380 "is_configured": true, 00:21:23.380 "data_offset": 2048, 00:21:23.380 "data_size": 63488 00:21:23.380 }, 00:21:23.380 { 00:21:23.380 "name": "BaseBdev3", 00:21:23.380 "uuid": "d3e6ff51-d855-473f-b944-66a2356b0759", 00:21:23.380 "is_configured": true, 00:21:23.380 "data_offset": 2048, 00:21:23.380 "data_size": 63488 00:21:23.380 } 00:21:23.380 ] 00:21:23.380 }' 00:21:23.380 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:23.380 18:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.315 18:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.315 18:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:24.315 18:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:24.315 18:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:24.315 18:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.572 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u a7006a93-1a41-40d0-a333-6ca8943cd2a5 00:21:24.829 [2024-07-25 18:48:25.321851] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:24.829 [2024-07-25 18:48:25.322331] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:21:24.829 [2024-07-25 
18:48:25.322464] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:24.829 [2024-07-25 18:48:25.322609] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:24.829 [2024-07-25 18:48:25.323268] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:21:24.829 [2024-07-25 18:48:25.323377] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:21:24.829 [2024-07-25 18:48:25.323591] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:24.829 NewBaseBdev 00:21:24.829 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:24.829 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:21:24.829 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:24.829 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:24.829 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:24.829 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:24.829 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:25.086 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:25.344 [ 00:21:25.344 { 00:21:25.344 "name": "NewBaseBdev", 00:21:25.344 "aliases": [ 00:21:25.344 "a7006a93-1a41-40d0-a333-6ca8943cd2a5" 00:21:25.344 ], 00:21:25.344 "product_name": "Malloc disk", 00:21:25.344 "block_size": 512, 00:21:25.344 "num_blocks": 65536, 00:21:25.344 "uuid": "a7006a93-1a41-40d0-a333-6ca8943cd2a5", 00:21:25.344 "assigned_rate_limits": { 00:21:25.344 "rw_ios_per_sec": 0, 00:21:25.344 "rw_mbytes_per_sec": 0, 00:21:25.344 "r_mbytes_per_sec": 0, 00:21:25.344 "w_mbytes_per_sec": 0 00:21:25.344 }, 00:21:25.344 "claimed": true, 00:21:25.344 "claim_type": "exclusive_write", 00:21:25.344 "zoned": false, 00:21:25.344 "supported_io_types": { 00:21:25.344 "read": true, 00:21:25.344 "write": true, 00:21:25.344 "unmap": true, 00:21:25.344 "flush": true, 00:21:25.344 "reset": true, 00:21:25.344 "nvme_admin": false, 00:21:25.344 "nvme_io": false, 00:21:25.344 "nvme_io_md": false, 00:21:25.344 "write_zeroes": true, 00:21:25.344 "zcopy": true, 00:21:25.344 "get_zone_info": false, 00:21:25.344 "zone_management": false, 00:21:25.344 "zone_append": false, 00:21:25.344 "compare": false, 00:21:25.344 "compare_and_write": false, 00:21:25.344 "abort": true, 00:21:25.344 "seek_hole": false, 00:21:25.344 "seek_data": false, 00:21:25.344 "copy": true, 00:21:25.344 "nvme_iov_md": false 00:21:25.344 }, 00:21:25.344 "memory_domains": [ 00:21:25.344 { 00:21:25.344 "dma_device_id": "system", 00:21:25.344 "dma_device_type": 1 00:21:25.344 }, 00:21:25.344 { 00:21:25.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:25.344 "dma_device_type": 2 00:21:25.344 } 00:21:25.344 ], 00:21:25.344 "driver_specific": {} 00:21:25.344 } 00:21:25.344 ] 00:21:25.344 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:25.344 18:48:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:25.344 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:25.344 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:25.344 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:25.344 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:25.344 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:25.344 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:25.344 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:25.344 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:25.344 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:25.344 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.344 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:25.608 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:25.608 "name": "Existed_Raid", 00:21:25.608 "uuid": "231f8512-9557-45d4-921a-76dd1f7a6b99", 00:21:25.608 "strip_size_kb": 0, 00:21:25.608 "state": "online", 00:21:25.608 "raid_level": "raid1", 00:21:25.608 "superblock": true, 00:21:25.608 "num_base_bdevs": 3, 00:21:25.608 "num_base_bdevs_discovered": 3, 00:21:25.608 "num_base_bdevs_operational": 3, 00:21:25.608 "base_bdevs_list": [ 00:21:25.608 { 00:21:25.608 "name": "NewBaseBdev", 00:21:25.608 "uuid": "a7006a93-1a41-40d0-a333-6ca8943cd2a5", 00:21:25.608 "is_configured": true, 00:21:25.608 "data_offset": 2048, 00:21:25.608 "data_size": 63488 00:21:25.608 }, 00:21:25.608 { 00:21:25.608 "name": "BaseBdev2", 00:21:25.608 "uuid": "6083daea-c3dd-4a4f-abeb-0b56044dfa8d", 00:21:25.608 "is_configured": true, 00:21:25.608 "data_offset": 2048, 00:21:25.608 "data_size": 63488 00:21:25.608 }, 00:21:25.608 { 00:21:25.608 "name": "BaseBdev3", 00:21:25.608 "uuid": "d3e6ff51-d855-473f-b944-66a2356b0759", 00:21:25.608 "is_configured": true, 00:21:25.608 "data_offset": 2048, 00:21:25.608 "data_size": 63488 00:21:25.608 } 00:21:25.608 ] 00:21:25.608 }' 00:21:25.608 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:25.608 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.172 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:26.172 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:26.172 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:26.172 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:26.172 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:26.172 18:48:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@198 -- # local name 00:21:26.172 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:26.172 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:26.172 [2024-07-25 18:48:26.718429] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:26.172 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:26.172 "name": "Existed_Raid", 00:21:26.172 "aliases": [ 00:21:26.172 "231f8512-9557-45d4-921a-76dd1f7a6b99" 00:21:26.172 ], 00:21:26.172 "product_name": "Raid Volume", 00:21:26.172 "block_size": 512, 00:21:26.172 "num_blocks": 63488, 00:21:26.172 "uuid": "231f8512-9557-45d4-921a-76dd1f7a6b99", 00:21:26.172 "assigned_rate_limits": { 00:21:26.172 "rw_ios_per_sec": 0, 00:21:26.172 "rw_mbytes_per_sec": 0, 00:21:26.172 "r_mbytes_per_sec": 0, 00:21:26.172 "w_mbytes_per_sec": 0 00:21:26.172 }, 00:21:26.172 "claimed": false, 00:21:26.173 "zoned": false, 00:21:26.173 "supported_io_types": { 00:21:26.173 "read": true, 00:21:26.173 "write": true, 00:21:26.173 "unmap": false, 00:21:26.173 "flush": false, 00:21:26.173 "reset": true, 00:21:26.173 "nvme_admin": false, 00:21:26.173 "nvme_io": false, 00:21:26.173 "nvme_io_md": false, 00:21:26.173 "write_zeroes": true, 00:21:26.173 "zcopy": false, 00:21:26.173 "get_zone_info": false, 00:21:26.173 "zone_management": false, 00:21:26.173 "zone_append": false, 00:21:26.173 "compare": false, 00:21:26.173 "compare_and_write": false, 00:21:26.173 "abort": false, 00:21:26.173 "seek_hole": false, 00:21:26.173 "seek_data": false, 00:21:26.173 "copy": false, 00:21:26.173 "nvme_iov_md": false 00:21:26.173 }, 00:21:26.173 "memory_domains": [ 00:21:26.173 { 00:21:26.173 "dma_device_id": "system", 00:21:26.173 "dma_device_type": 1 00:21:26.173 }, 00:21:26.173 { 00:21:26.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.173 "dma_device_type": 2 00:21:26.173 }, 00:21:26.173 { 00:21:26.173 "dma_device_id": "system", 00:21:26.173 "dma_device_type": 1 00:21:26.173 }, 00:21:26.173 { 00:21:26.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.173 "dma_device_type": 2 00:21:26.173 }, 00:21:26.173 { 00:21:26.173 "dma_device_id": "system", 00:21:26.173 "dma_device_type": 1 00:21:26.173 }, 00:21:26.173 { 00:21:26.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.173 "dma_device_type": 2 00:21:26.173 } 00:21:26.173 ], 00:21:26.173 "driver_specific": { 00:21:26.173 "raid": { 00:21:26.173 "uuid": "231f8512-9557-45d4-921a-76dd1f7a6b99", 00:21:26.173 "strip_size_kb": 0, 00:21:26.173 "state": "online", 00:21:26.173 "raid_level": "raid1", 00:21:26.173 "superblock": true, 00:21:26.173 "num_base_bdevs": 3, 00:21:26.173 "num_base_bdevs_discovered": 3, 00:21:26.173 "num_base_bdevs_operational": 3, 00:21:26.173 "base_bdevs_list": [ 00:21:26.173 { 00:21:26.173 "name": "NewBaseBdev", 00:21:26.173 "uuid": "a7006a93-1a41-40d0-a333-6ca8943cd2a5", 00:21:26.173 "is_configured": true, 00:21:26.173 "data_offset": 2048, 00:21:26.173 "data_size": 63488 00:21:26.173 }, 00:21:26.173 { 00:21:26.173 "name": "BaseBdev2", 00:21:26.173 "uuid": "6083daea-c3dd-4a4f-abeb-0b56044dfa8d", 00:21:26.173 "is_configured": true, 00:21:26.173 "data_offset": 2048, 00:21:26.173 "data_size": 63488 00:21:26.173 }, 00:21:26.173 { 00:21:26.173 "name": "BaseBdev3", 00:21:26.173 "uuid": "d3e6ff51-d855-473f-b944-66a2356b0759", 00:21:26.173 "is_configured": true, 
00:21:26.173 "data_offset": 2048, 00:21:26.173 "data_size": 63488 00:21:26.173 } 00:21:26.173 ] 00:21:26.173 } 00:21:26.173 } 00:21:26.173 }' 00:21:26.173 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:26.431 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:26.431 BaseBdev2 00:21:26.431 BaseBdev3' 00:21:26.431 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:26.431 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:26.431 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:26.693 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:26.693 "name": "NewBaseBdev", 00:21:26.693 "aliases": [ 00:21:26.693 "a7006a93-1a41-40d0-a333-6ca8943cd2a5" 00:21:26.693 ], 00:21:26.693 "product_name": "Malloc disk", 00:21:26.693 "block_size": 512, 00:21:26.693 "num_blocks": 65536, 00:21:26.693 "uuid": "a7006a93-1a41-40d0-a333-6ca8943cd2a5", 00:21:26.693 "assigned_rate_limits": { 00:21:26.693 "rw_ios_per_sec": 0, 00:21:26.693 "rw_mbytes_per_sec": 0, 00:21:26.693 "r_mbytes_per_sec": 0, 00:21:26.693 "w_mbytes_per_sec": 0 00:21:26.693 }, 00:21:26.693 "claimed": true, 00:21:26.693 "claim_type": "exclusive_write", 00:21:26.693 "zoned": false, 00:21:26.693 "supported_io_types": { 00:21:26.693 "read": true, 00:21:26.693 "write": true, 00:21:26.693 "unmap": true, 00:21:26.693 "flush": true, 00:21:26.693 "reset": true, 00:21:26.693 "nvme_admin": false, 00:21:26.693 "nvme_io": false, 00:21:26.693 "nvme_io_md": false, 00:21:26.693 "write_zeroes": true, 00:21:26.693 "zcopy": true, 00:21:26.693 "get_zone_info": false, 00:21:26.693 "zone_management": false, 00:21:26.693 "zone_append": false, 00:21:26.693 "compare": false, 00:21:26.693 "compare_and_write": false, 00:21:26.693 "abort": true, 00:21:26.693 "seek_hole": false, 00:21:26.693 "seek_data": false, 00:21:26.693 "copy": true, 00:21:26.693 "nvme_iov_md": false 00:21:26.693 }, 00:21:26.693 "memory_domains": [ 00:21:26.693 { 00:21:26.693 "dma_device_id": "system", 00:21:26.693 "dma_device_type": 1 00:21:26.693 }, 00:21:26.693 { 00:21:26.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.693 "dma_device_type": 2 00:21:26.693 } 00:21:26.693 ], 00:21:26.693 "driver_specific": {} 00:21:26.693 }' 00:21:26.693 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:26.693 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:26.693 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:26.693 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:26.693 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:26.693 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:26.693 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:26.693 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:26.950 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- 
# [[ null == null ]] 00:21:26.950 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:26.950 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:26.950 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:26.950 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:26.950 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:26.950 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:27.208 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:27.208 "name": "BaseBdev2", 00:21:27.208 "aliases": [ 00:21:27.208 "6083daea-c3dd-4a4f-abeb-0b56044dfa8d" 00:21:27.208 ], 00:21:27.208 "product_name": "Malloc disk", 00:21:27.208 "block_size": 512, 00:21:27.208 "num_blocks": 65536, 00:21:27.208 "uuid": "6083daea-c3dd-4a4f-abeb-0b56044dfa8d", 00:21:27.208 "assigned_rate_limits": { 00:21:27.208 "rw_ios_per_sec": 0, 00:21:27.208 "rw_mbytes_per_sec": 0, 00:21:27.208 "r_mbytes_per_sec": 0, 00:21:27.208 "w_mbytes_per_sec": 0 00:21:27.208 }, 00:21:27.208 "claimed": true, 00:21:27.208 "claim_type": "exclusive_write", 00:21:27.208 "zoned": false, 00:21:27.208 "supported_io_types": { 00:21:27.208 "read": true, 00:21:27.208 "write": true, 00:21:27.208 "unmap": true, 00:21:27.208 "flush": true, 00:21:27.208 "reset": true, 00:21:27.208 "nvme_admin": false, 00:21:27.208 "nvme_io": false, 00:21:27.208 "nvme_io_md": false, 00:21:27.208 "write_zeroes": true, 00:21:27.208 "zcopy": true, 00:21:27.208 "get_zone_info": false, 00:21:27.208 "zone_management": false, 00:21:27.208 "zone_append": false, 00:21:27.208 "compare": false, 00:21:27.208 "compare_and_write": false, 00:21:27.208 "abort": true, 00:21:27.208 "seek_hole": false, 00:21:27.208 "seek_data": false, 00:21:27.208 "copy": true, 00:21:27.208 "nvme_iov_md": false 00:21:27.208 }, 00:21:27.208 "memory_domains": [ 00:21:27.208 { 00:21:27.208 "dma_device_id": "system", 00:21:27.208 "dma_device_type": 1 00:21:27.208 }, 00:21:27.208 { 00:21:27.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:27.208 "dma_device_type": 2 00:21:27.208 } 00:21:27.208 ], 00:21:27.208 "driver_specific": {} 00:21:27.208 }' 00:21:27.208 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:27.208 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:27.208 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:27.208 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:27.466 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:27.466 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:27.466 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:27.466 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:27.466 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:27.466 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:27.466 18:48:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:27.466 18:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:27.466 18:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:27.466 18:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:27.466 18:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:27.724 18:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:27.724 "name": "BaseBdev3", 00:21:27.724 "aliases": [ 00:21:27.724 "d3e6ff51-d855-473f-b944-66a2356b0759" 00:21:27.724 ], 00:21:27.724 "product_name": "Malloc disk", 00:21:27.724 "block_size": 512, 00:21:27.724 "num_blocks": 65536, 00:21:27.724 "uuid": "d3e6ff51-d855-473f-b944-66a2356b0759", 00:21:27.724 "assigned_rate_limits": { 00:21:27.724 "rw_ios_per_sec": 0, 00:21:27.724 "rw_mbytes_per_sec": 0, 00:21:27.724 "r_mbytes_per_sec": 0, 00:21:27.724 "w_mbytes_per_sec": 0 00:21:27.724 }, 00:21:27.724 "claimed": true, 00:21:27.724 "claim_type": "exclusive_write", 00:21:27.724 "zoned": false, 00:21:27.724 "supported_io_types": { 00:21:27.724 "read": true, 00:21:27.724 "write": true, 00:21:27.724 "unmap": true, 00:21:27.724 "flush": true, 00:21:27.724 "reset": true, 00:21:27.724 "nvme_admin": false, 00:21:27.724 "nvme_io": false, 00:21:27.724 "nvme_io_md": false, 00:21:27.724 "write_zeroes": true, 00:21:27.724 "zcopy": true, 00:21:27.724 "get_zone_info": false, 00:21:27.724 "zone_management": false, 00:21:27.724 "zone_append": false, 00:21:27.724 "compare": false, 00:21:27.724 "compare_and_write": false, 00:21:27.724 "abort": true, 00:21:27.724 "seek_hole": false, 00:21:27.724 "seek_data": false, 00:21:27.724 "copy": true, 00:21:27.724 "nvme_iov_md": false 00:21:27.724 }, 00:21:27.724 "memory_domains": [ 00:21:27.724 { 00:21:27.724 "dma_device_id": "system", 00:21:27.724 "dma_device_type": 1 00:21:27.724 }, 00:21:27.724 { 00:21:27.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:27.724 "dma_device_type": 2 00:21:27.724 } 00:21:27.724 ], 00:21:27.724 "driver_specific": {} 00:21:27.724 }' 00:21:27.982 18:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:27.982 18:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:27.982 18:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:27.982 18:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:27.982 18:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:27.982 18:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:27.982 18:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:27.982 18:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:28.239 18:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:28.239 18:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:28.239 18:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:28.239 18:48:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:28.239 18:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:28.498 [2024-07-25 18:48:28.838455] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:28.498 [2024-07-25 18:48:28.838681] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:28.498 [2024-07-25 18:48:28.838940] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:28.498 [2024-07-25 18:48:28.839274] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:28.498 [2024-07-25 18:48:28.839374] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:21:28.498 18:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 131583 00:21:28.498 18:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 131583 ']' 00:21:28.498 18:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 131583 00:21:28.498 18:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:21:28.498 18:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:28.498 18:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 131583 00:21:28.498 18:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:28.498 18:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:28.498 18:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 131583' 00:21:28.498 killing process with pid 131583 00:21:28.498 18:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 131583 00:21:28.498 [2024-07-25 18:48:28.886444] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:28.498 18:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 131583 00:21:28.756 [2024-07-25 18:48:29.137578] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:30.132 ************************************ 00:21:30.132 END TEST raid_state_function_test_sb 00:21:30.132 ************************************ 00:21:30.132 18:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:21:30.132 00:21:30.132 real 0m28.956s 00:21:30.132 user 0m52.134s 00:21:30.132 sys 0m4.674s 00:21:30.132 18:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:30.132 18:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.132 18:48:30 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:21:30.132 18:48:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:30.132 18:48:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:30.132 18:48:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:30.132 ************************************ 00:21:30.132 START TEST raid_superblock_test 00:21:30.132 ************************************ 00:21:30.132 
18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=132557 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 132557 /var/tmp/spdk-raid.sock 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 132557 ']' 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:30.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:30.132 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.132 [2024-07-25 18:48:30.493650] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
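(A condensed sketch, not part of the captured trace: once bdev_svc is listening on /var/tmp/spdk-raid.sock, raid_superblock_test drives it with the RPC sequence below; the commands, bdev names, and fixed passthru UUIDs are the ones that appear in the trace that follows.)

  # create a 32 MB malloc bdev with 512-byte blocks (65536 blocks, matching num_blocks above)
  # and wrap each one in a passthru bdev that the raid volume will later claim
  for i in 1 2 3; do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          bdev_malloc_create 32 512 -b malloc$i
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done
  # assemble a raid1 volume over the passthru bdevs; -s writes an on-disk superblock
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
  # confirm the volume is online and lists pt1 pt2 pt3 as configured base bdevs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_get_bdevs all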
00:21:30.132 [2024-07-25 18:48:30.494147] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132557 ] 00:21:30.132 [2024-07-25 18:48:30.686100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.391 [2024-07-25 18:48:30.938482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.649 [2024-07-25 18:48:31.125582] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:30.907 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:30.907 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:21:30.907 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:21:30.907 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:30.907 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:21:30.907 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:21:30.907 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:30.907 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:30.907 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:21:30.907 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:30.907 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:31.165 malloc1 00:21:31.165 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:31.423 [2024-07-25 18:48:31.792608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:31.423 [2024-07-25 18:48:31.792939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:31.423 [2024-07-25 18:48:31.793029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:21:31.423 [2024-07-25 18:48:31.793136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:31.423 [2024-07-25 18:48:31.795926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:31.423 [2024-07-25 18:48:31.796101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:31.423 pt1 00:21:31.423 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:21:31.423 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:31.423 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:21:31.423 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:21:31.423 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:31.423 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:21:31.423 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:21:31.423 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:31.423 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:31.681 malloc2 00:21:31.681 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:31.681 [2024-07-25 18:48:32.227674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:31.681 [2024-07-25 18:48:32.227953] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:31.681 [2024-07-25 18:48:32.228045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:21:31.681 [2024-07-25 18:48:32.228261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:31.681 [2024-07-25 18:48:32.230959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:31.681 [2024-07-25 18:48:32.231133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:31.681 pt2 00:21:31.681 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:21:31.681 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:31.681 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:21:31.681 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:21:31.682 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:31.682 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:31.682 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:21:31.682 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:31.682 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:31.939 malloc3 00:21:31.939 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:32.197 [2024-07-25 18:48:32.629236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:32.197 [2024-07-25 18:48:32.629506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:32.197 [2024-07-25 18:48:32.629578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:32.197 [2024-07-25 18:48:32.629674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:32.197 [2024-07-25 18:48:32.632301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:32.197 [2024-07-25 18:48:32.632465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:32.197 pt3 00:21:32.197 
18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:21:32.197 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:32.197 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:21:32.456 [2024-07-25 18:48:32.809417] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:32.456 [2024-07-25 18:48:32.811872] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:32.456 [2024-07-25 18:48:32.812087] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:32.456 [2024-07-25 18:48:32.812303] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:21:32.456 [2024-07-25 18:48:32.812392] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:32.456 [2024-07-25 18:48:32.812583] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:21:32.456 [2024-07-25 18:48:32.813060] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:21:32.456 [2024-07-25 18:48:32.813157] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:21:32.456 [2024-07-25 18:48:32.813454] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:32.456 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:32.456 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:32.456 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:32.456 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:32.456 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:32.456 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:32.456 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:32.456 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:32.456 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:32.456 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:32.456 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.456 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.714 18:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:32.714 "name": "raid_bdev1", 00:21:32.714 "uuid": "406e0b30-d645-4621-b0fe-fb91f9235044", 00:21:32.714 "strip_size_kb": 0, 00:21:32.714 "state": "online", 00:21:32.714 "raid_level": "raid1", 00:21:32.714 "superblock": true, 00:21:32.714 "num_base_bdevs": 3, 00:21:32.714 "num_base_bdevs_discovered": 3, 00:21:32.714 "num_base_bdevs_operational": 3, 00:21:32.714 "base_bdevs_list": [ 00:21:32.714 { 00:21:32.714 "name": "pt1", 00:21:32.714 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:32.714 
"is_configured": true, 00:21:32.714 "data_offset": 2048, 00:21:32.714 "data_size": 63488 00:21:32.714 }, 00:21:32.714 { 00:21:32.714 "name": "pt2", 00:21:32.714 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:32.714 "is_configured": true, 00:21:32.714 "data_offset": 2048, 00:21:32.715 "data_size": 63488 00:21:32.715 }, 00:21:32.715 { 00:21:32.715 "name": "pt3", 00:21:32.715 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:32.715 "is_configured": true, 00:21:32.715 "data_offset": 2048, 00:21:32.715 "data_size": 63488 00:21:32.715 } 00:21:32.715 ] 00:21:32.715 }' 00:21:32.715 18:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:32.715 18:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.289 18:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:21:33.289 18:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:33.289 18:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:33.289 18:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:33.289 18:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:33.289 18:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:33.289 18:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:33.289 18:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:33.289 [2024-07-25 18:48:33.849810] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:33.547 18:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:33.547 "name": "raid_bdev1", 00:21:33.547 "aliases": [ 00:21:33.547 "406e0b30-d645-4621-b0fe-fb91f9235044" 00:21:33.547 ], 00:21:33.547 "product_name": "Raid Volume", 00:21:33.547 "block_size": 512, 00:21:33.547 "num_blocks": 63488, 00:21:33.547 "uuid": "406e0b30-d645-4621-b0fe-fb91f9235044", 00:21:33.547 "assigned_rate_limits": { 00:21:33.547 "rw_ios_per_sec": 0, 00:21:33.547 "rw_mbytes_per_sec": 0, 00:21:33.547 "r_mbytes_per_sec": 0, 00:21:33.547 "w_mbytes_per_sec": 0 00:21:33.547 }, 00:21:33.547 "claimed": false, 00:21:33.547 "zoned": false, 00:21:33.547 "supported_io_types": { 00:21:33.547 "read": true, 00:21:33.547 "write": true, 00:21:33.547 "unmap": false, 00:21:33.547 "flush": false, 00:21:33.547 "reset": true, 00:21:33.547 "nvme_admin": false, 00:21:33.547 "nvme_io": false, 00:21:33.547 "nvme_io_md": false, 00:21:33.547 "write_zeroes": true, 00:21:33.547 "zcopy": false, 00:21:33.547 "get_zone_info": false, 00:21:33.547 "zone_management": false, 00:21:33.547 "zone_append": false, 00:21:33.547 "compare": false, 00:21:33.547 "compare_and_write": false, 00:21:33.547 "abort": false, 00:21:33.547 "seek_hole": false, 00:21:33.547 "seek_data": false, 00:21:33.547 "copy": false, 00:21:33.547 "nvme_iov_md": false 00:21:33.547 }, 00:21:33.547 "memory_domains": [ 00:21:33.547 { 00:21:33.547 "dma_device_id": "system", 00:21:33.547 "dma_device_type": 1 00:21:33.547 }, 00:21:33.547 { 00:21:33.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.547 "dma_device_type": 2 00:21:33.547 }, 00:21:33.547 { 00:21:33.547 "dma_device_id": "system", 00:21:33.547 "dma_device_type": 1 00:21:33.547 }, 00:21:33.547 { 
00:21:33.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.547 "dma_device_type": 2 00:21:33.547 }, 00:21:33.547 { 00:21:33.547 "dma_device_id": "system", 00:21:33.547 "dma_device_type": 1 00:21:33.547 }, 00:21:33.547 { 00:21:33.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.547 "dma_device_type": 2 00:21:33.547 } 00:21:33.547 ], 00:21:33.547 "driver_specific": { 00:21:33.547 "raid": { 00:21:33.547 "uuid": "406e0b30-d645-4621-b0fe-fb91f9235044", 00:21:33.547 "strip_size_kb": 0, 00:21:33.547 "state": "online", 00:21:33.547 "raid_level": "raid1", 00:21:33.547 "superblock": true, 00:21:33.547 "num_base_bdevs": 3, 00:21:33.547 "num_base_bdevs_discovered": 3, 00:21:33.547 "num_base_bdevs_operational": 3, 00:21:33.547 "base_bdevs_list": [ 00:21:33.547 { 00:21:33.547 "name": "pt1", 00:21:33.547 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:33.547 "is_configured": true, 00:21:33.547 "data_offset": 2048, 00:21:33.547 "data_size": 63488 00:21:33.547 }, 00:21:33.547 { 00:21:33.547 "name": "pt2", 00:21:33.547 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:33.547 "is_configured": true, 00:21:33.547 "data_offset": 2048, 00:21:33.547 "data_size": 63488 00:21:33.547 }, 00:21:33.547 { 00:21:33.547 "name": "pt3", 00:21:33.547 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:33.547 "is_configured": true, 00:21:33.547 "data_offset": 2048, 00:21:33.547 "data_size": 63488 00:21:33.547 } 00:21:33.547 ] 00:21:33.547 } 00:21:33.547 } 00:21:33.547 }' 00:21:33.547 18:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:33.547 18:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:33.547 pt2 00:21:33.547 pt3' 00:21:33.547 18:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:33.547 18:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:33.547 18:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:33.547 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:33.547 "name": "pt1", 00:21:33.547 "aliases": [ 00:21:33.547 "00000000-0000-0000-0000-000000000001" 00:21:33.547 ], 00:21:33.547 "product_name": "passthru", 00:21:33.547 "block_size": 512, 00:21:33.547 "num_blocks": 65536, 00:21:33.547 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:33.547 "assigned_rate_limits": { 00:21:33.547 "rw_ios_per_sec": 0, 00:21:33.547 "rw_mbytes_per_sec": 0, 00:21:33.547 "r_mbytes_per_sec": 0, 00:21:33.547 "w_mbytes_per_sec": 0 00:21:33.547 }, 00:21:33.547 "claimed": true, 00:21:33.547 "claim_type": "exclusive_write", 00:21:33.547 "zoned": false, 00:21:33.547 "supported_io_types": { 00:21:33.547 "read": true, 00:21:33.547 "write": true, 00:21:33.547 "unmap": true, 00:21:33.547 "flush": true, 00:21:33.547 "reset": true, 00:21:33.547 "nvme_admin": false, 00:21:33.547 "nvme_io": false, 00:21:33.547 "nvme_io_md": false, 00:21:33.547 "write_zeroes": true, 00:21:33.547 "zcopy": true, 00:21:33.547 "get_zone_info": false, 00:21:33.547 "zone_management": false, 00:21:33.547 "zone_append": false, 00:21:33.547 "compare": false, 00:21:33.547 "compare_and_write": false, 00:21:33.547 "abort": true, 00:21:33.547 "seek_hole": false, 00:21:33.547 "seek_data": false, 00:21:33.547 "copy": true, 00:21:33.547 "nvme_iov_md": false 00:21:33.547 }, 
00:21:33.547 "memory_domains": [ 00:21:33.547 { 00:21:33.547 "dma_device_id": "system", 00:21:33.547 "dma_device_type": 1 00:21:33.547 }, 00:21:33.547 { 00:21:33.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.547 "dma_device_type": 2 00:21:33.547 } 00:21:33.547 ], 00:21:33.547 "driver_specific": { 00:21:33.547 "passthru": { 00:21:33.547 "name": "pt1", 00:21:33.547 "base_bdev_name": "malloc1" 00:21:33.547 } 00:21:33.547 } 00:21:33.547 }' 00:21:33.547 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:33.547 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:33.805 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:33.805 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:33.805 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:33.805 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:33.805 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:33.805 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:33.805 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:33.805 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:34.062 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:34.062 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:34.062 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:34.062 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:34.062 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:34.062 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:34.063 "name": "pt2", 00:21:34.063 "aliases": [ 00:21:34.063 "00000000-0000-0000-0000-000000000002" 00:21:34.063 ], 00:21:34.063 "product_name": "passthru", 00:21:34.063 "block_size": 512, 00:21:34.063 "num_blocks": 65536, 00:21:34.063 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:34.063 "assigned_rate_limits": { 00:21:34.063 "rw_ios_per_sec": 0, 00:21:34.063 "rw_mbytes_per_sec": 0, 00:21:34.063 "r_mbytes_per_sec": 0, 00:21:34.063 "w_mbytes_per_sec": 0 00:21:34.063 }, 00:21:34.063 "claimed": true, 00:21:34.063 "claim_type": "exclusive_write", 00:21:34.063 "zoned": false, 00:21:34.063 "supported_io_types": { 00:21:34.063 "read": true, 00:21:34.063 "write": true, 00:21:34.063 "unmap": true, 00:21:34.063 "flush": true, 00:21:34.063 "reset": true, 00:21:34.063 "nvme_admin": false, 00:21:34.063 "nvme_io": false, 00:21:34.063 "nvme_io_md": false, 00:21:34.063 "write_zeroes": true, 00:21:34.063 "zcopy": true, 00:21:34.063 "get_zone_info": false, 00:21:34.063 "zone_management": false, 00:21:34.063 "zone_append": false, 00:21:34.063 "compare": false, 00:21:34.063 "compare_and_write": false, 00:21:34.063 "abort": true, 00:21:34.063 "seek_hole": false, 00:21:34.063 "seek_data": false, 00:21:34.063 "copy": true, 00:21:34.063 "nvme_iov_md": false 00:21:34.063 }, 00:21:34.063 "memory_domains": [ 00:21:34.063 { 00:21:34.063 "dma_device_id": "system", 00:21:34.063 "dma_device_type": 1 00:21:34.063 }, 00:21:34.063 { 
00:21:34.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:34.063 "dma_device_type": 2 00:21:34.063 } 00:21:34.063 ], 00:21:34.063 "driver_specific": { 00:21:34.063 "passthru": { 00:21:34.063 "name": "pt2", 00:21:34.063 "base_bdev_name": "malloc2" 00:21:34.063 } 00:21:34.063 } 00:21:34.063 }' 00:21:34.063 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:34.320 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:34.320 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:34.320 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:34.320 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:34.320 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:34.320 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:34.320 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:34.320 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:34.577 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:34.577 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:34.577 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:34.577 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:34.577 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:34.577 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:34.835 18:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:34.835 "name": "pt3", 00:21:34.835 "aliases": [ 00:21:34.835 "00000000-0000-0000-0000-000000000003" 00:21:34.835 ], 00:21:34.835 "product_name": "passthru", 00:21:34.835 "block_size": 512, 00:21:34.835 "num_blocks": 65536, 00:21:34.835 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:34.835 "assigned_rate_limits": { 00:21:34.835 "rw_ios_per_sec": 0, 00:21:34.835 "rw_mbytes_per_sec": 0, 00:21:34.835 "r_mbytes_per_sec": 0, 00:21:34.835 "w_mbytes_per_sec": 0 00:21:34.835 }, 00:21:34.835 "claimed": true, 00:21:34.835 "claim_type": "exclusive_write", 00:21:34.835 "zoned": false, 00:21:34.835 "supported_io_types": { 00:21:34.835 "read": true, 00:21:34.835 "write": true, 00:21:34.835 "unmap": true, 00:21:34.835 "flush": true, 00:21:34.835 "reset": true, 00:21:34.835 "nvme_admin": false, 00:21:34.835 "nvme_io": false, 00:21:34.835 "nvme_io_md": false, 00:21:34.835 "write_zeroes": true, 00:21:34.835 "zcopy": true, 00:21:34.835 "get_zone_info": false, 00:21:34.835 "zone_management": false, 00:21:34.835 "zone_append": false, 00:21:34.835 "compare": false, 00:21:34.835 "compare_and_write": false, 00:21:34.835 "abort": true, 00:21:34.835 "seek_hole": false, 00:21:34.835 "seek_data": false, 00:21:34.835 "copy": true, 00:21:34.835 "nvme_iov_md": false 00:21:34.835 }, 00:21:34.835 "memory_domains": [ 00:21:34.835 { 00:21:34.835 "dma_device_id": "system", 00:21:34.835 "dma_device_type": 1 00:21:34.835 }, 00:21:34.835 { 00:21:34.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:34.835 "dma_device_type": 2 00:21:34.835 } 00:21:34.835 ], 00:21:34.835 "driver_specific": { 
00:21:34.835 "passthru": { 00:21:34.835 "name": "pt3", 00:21:34.835 "base_bdev_name": "malloc3" 00:21:34.835 } 00:21:34.835 } 00:21:34.835 }' 00:21:34.835 18:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:34.835 18:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:34.835 18:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:34.835 18:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:34.835 18:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:35.093 18:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:35.093 18:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:35.093 18:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:35.093 18:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:35.093 18:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:35.093 18:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:35.093 18:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:35.093 18:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:35.093 18:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:21:35.351 [2024-07-25 18:48:35.866197] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:35.351 18:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=406e0b30-d645-4621-b0fe-fb91f9235044 00:21:35.351 18:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 406e0b30-d645-4621-b0fe-fb91f9235044 ']' 00:21:35.351 18:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:35.609 [2024-07-25 18:48:36.046015] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:35.609 [2024-07-25 18:48:36.046237] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:35.609 [2024-07-25 18:48:36.046456] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:35.609 [2024-07-25 18:48:36.046681] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:35.609 [2024-07-25 18:48:36.046807] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:21:35.609 18:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.609 18:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:21:35.867 18:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:21:35.867 18:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:21:35.867 18:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:21:35.867 18:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:35.867 18:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:21:35.867 18:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:36.125 18:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:21:36.125 18:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:36.383 18:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:36.383 18:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:36.642 18:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:21:36.642 18:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:36.642 18:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:21:36.642 18:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:36.642 18:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:36.642 18:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:36.642 18:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:36.642 18:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:36.642 18:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:36.642 18:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:36.642 18:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:36.642 18:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:36.642 18:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:36.642 [2024-07-25 18:48:37.170464] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:36.642 [2024-07-25 18:48:37.172920] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:36.642 [2024-07-25 18:48:37.173105] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:36.642 [2024-07-25 18:48:37.173193] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:36.642 [2024-07-25 18:48:37.173437] 
bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:36.642 [2024-07-25 18:48:37.173583] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:36.642 [2024-07-25 18:48:37.173697] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:36.642 [2024-07-25 18:48:37.173733] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state configuring 00:21:36.642 request: 00:21:36.642 { 00:21:36.642 "name": "raid_bdev1", 00:21:36.642 "raid_level": "raid1", 00:21:36.642 "base_bdevs": [ 00:21:36.642 "malloc1", 00:21:36.642 "malloc2", 00:21:36.642 "malloc3" 00:21:36.642 ], 00:21:36.642 "superblock": false, 00:21:36.642 "method": "bdev_raid_create", 00:21:36.642 "req_id": 1 00:21:36.642 } 00:21:36.642 Got JSON-RPC error response 00:21:36.642 response: 00:21:36.642 { 00:21:36.642 "code": -17, 00:21:36.642 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:36.642 } 00:21:36.642 18:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:21:36.642 18:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:36.642 18:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:36.642 18:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:36.642 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.642 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:21:36.900 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:21:36.900 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:21:36.900 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:37.159 [2024-07-25 18:48:37.594466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:37.159 [2024-07-25 18:48:37.594755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.159 [2024-07-25 18:48:37.594830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:37.159 [2024-07-25 18:48:37.594923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.159 [2024-07-25 18:48:37.597585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.159 [2024-07-25 18:48:37.597744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:37.159 [2024-07-25 18:48:37.597983] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:37.159 [2024-07-25 18:48:37.598120] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:37.159 pt1 00:21:37.159 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:37.159 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:37.159 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:21:37.159 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:37.159 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:37.159 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:37.159 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:37.159 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:37.159 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:37.159 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:37.159 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.159 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.418 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:37.418 "name": "raid_bdev1", 00:21:37.418 "uuid": "406e0b30-d645-4621-b0fe-fb91f9235044", 00:21:37.418 "strip_size_kb": 0, 00:21:37.418 "state": "configuring", 00:21:37.418 "raid_level": "raid1", 00:21:37.418 "superblock": true, 00:21:37.418 "num_base_bdevs": 3, 00:21:37.418 "num_base_bdevs_discovered": 1, 00:21:37.418 "num_base_bdevs_operational": 3, 00:21:37.418 "base_bdevs_list": [ 00:21:37.418 { 00:21:37.418 "name": "pt1", 00:21:37.418 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:37.418 "is_configured": true, 00:21:37.418 "data_offset": 2048, 00:21:37.418 "data_size": 63488 00:21:37.418 }, 00:21:37.418 { 00:21:37.418 "name": null, 00:21:37.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:37.418 "is_configured": false, 00:21:37.418 "data_offset": 2048, 00:21:37.418 "data_size": 63488 00:21:37.418 }, 00:21:37.418 { 00:21:37.418 "name": null, 00:21:37.418 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:37.418 "is_configured": false, 00:21:37.418 "data_offset": 2048, 00:21:37.418 "data_size": 63488 00:21:37.418 } 00:21:37.418 ] 00:21:37.418 }' 00:21:37.418 18:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:37.418 18:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.984 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:21:37.984 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:38.243 [2024-07-25 18:48:38.622680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:38.243 [2024-07-25 18:48:38.623012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.243 [2024-07-25 18:48:38.623090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:38.243 [2024-07-25 18:48:38.623281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.243 [2024-07-25 18:48:38.623866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.243 [2024-07-25 18:48:38.624015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:38.243 [2024-07-25 
18:48:38.624216] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:38.243 [2024-07-25 18:48:38.624318] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:38.243 pt2 00:21:38.243 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:38.501 [2024-07-25 18:48:38.874745] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:38.501 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:38.501 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:38.501 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:38.501 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:38.501 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:38.501 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:38.501 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:38.501 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:38.501 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:38.501 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:38.501 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.501 18:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.501 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:38.501 "name": "raid_bdev1", 00:21:38.501 "uuid": "406e0b30-d645-4621-b0fe-fb91f9235044", 00:21:38.501 "strip_size_kb": 0, 00:21:38.501 "state": "configuring", 00:21:38.501 "raid_level": "raid1", 00:21:38.501 "superblock": true, 00:21:38.501 "num_base_bdevs": 3, 00:21:38.501 "num_base_bdevs_discovered": 1, 00:21:38.501 "num_base_bdevs_operational": 3, 00:21:38.501 "base_bdevs_list": [ 00:21:38.501 { 00:21:38.501 "name": "pt1", 00:21:38.501 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:38.501 "is_configured": true, 00:21:38.501 "data_offset": 2048, 00:21:38.501 "data_size": 63488 00:21:38.501 }, 00:21:38.501 { 00:21:38.501 "name": null, 00:21:38.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:38.501 "is_configured": false, 00:21:38.501 "data_offset": 2048, 00:21:38.501 "data_size": 63488 00:21:38.501 }, 00:21:38.501 { 00:21:38.501 "name": null, 00:21:38.501 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:38.501 "is_configured": false, 00:21:38.501 "data_offset": 2048, 00:21:38.501 "data_size": 63488 00:21:38.501 } 00:21:38.501 ] 00:21:38.501 }' 00:21:38.501 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:38.501 18:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.068 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:21:39.068 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:21:39.068 18:48:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:39.327 [2024-07-25 18:48:39.746865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:39.327 [2024-07-25 18:48:39.747181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:39.327 [2024-07-25 18:48:39.747251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:39.327 [2024-07-25 18:48:39.747360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.327 [2024-07-25 18:48:39.747931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.327 [2024-07-25 18:48:39.748083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:39.327 [2024-07-25 18:48:39.748291] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:39.327 [2024-07-25 18:48:39.748398] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:39.327 pt2 00:21:39.327 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:21:39.327 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:21:39.327 18:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:39.585 [2024-07-25 18:48:40.010943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:39.585 [2024-07-25 18:48:40.011269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:39.585 [2024-07-25 18:48:40.011342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:21:39.585 [2024-07-25 18:48:40.011468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.585 [2024-07-25 18:48:40.012137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.585 [2024-07-25 18:48:40.012293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:39.585 [2024-07-25 18:48:40.012543] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:39.585 [2024-07-25 18:48:40.012646] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:39.585 [2024-07-25 18:48:40.012841] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:21:39.585 [2024-07-25 18:48:40.012951] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:39.585 [2024-07-25 18:48:40.013076] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:39.585 [2024-07-25 18:48:40.013570] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:21:39.585 [2024-07-25 18:48:40.013676] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:21:39.585 [2024-07-25 18:48:40.013929] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:39.585 pt3 00:21:39.585 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:21:39.585 18:48:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:21:39.585 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:39.585 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:39.585 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:39.585 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:39.585 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:39.585 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:39.585 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:39.585 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:39.585 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:39.585 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:39.585 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.585 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.844 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:39.844 "name": "raid_bdev1", 00:21:39.844 "uuid": "406e0b30-d645-4621-b0fe-fb91f9235044", 00:21:39.844 "strip_size_kb": 0, 00:21:39.844 "state": "online", 00:21:39.844 "raid_level": "raid1", 00:21:39.844 "superblock": true, 00:21:39.844 "num_base_bdevs": 3, 00:21:39.844 "num_base_bdevs_discovered": 3, 00:21:39.844 "num_base_bdevs_operational": 3, 00:21:39.844 "base_bdevs_list": [ 00:21:39.844 { 00:21:39.844 "name": "pt1", 00:21:39.844 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:39.844 "is_configured": true, 00:21:39.844 "data_offset": 2048, 00:21:39.844 "data_size": 63488 00:21:39.844 }, 00:21:39.844 { 00:21:39.844 "name": "pt2", 00:21:39.844 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:39.844 "is_configured": true, 00:21:39.844 "data_offset": 2048, 00:21:39.844 "data_size": 63488 00:21:39.844 }, 00:21:39.844 { 00:21:39.844 "name": "pt3", 00:21:39.844 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:39.844 "is_configured": true, 00:21:39.844 "data_offset": 2048, 00:21:39.844 "data_size": 63488 00:21:39.844 } 00:21:39.844 ] 00:21:39.844 }' 00:21:39.844 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:39.844 18:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.410 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:21:40.410 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:40.410 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:40.410 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:40.410 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:40.410 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:40.410 18:48:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:40.410 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:40.410 [2024-07-25 18:48:40.983339] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:40.669 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:40.669 "name": "raid_bdev1", 00:21:40.669 "aliases": [ 00:21:40.669 "406e0b30-d645-4621-b0fe-fb91f9235044" 00:21:40.669 ], 00:21:40.669 "product_name": "Raid Volume", 00:21:40.669 "block_size": 512, 00:21:40.669 "num_blocks": 63488, 00:21:40.669 "uuid": "406e0b30-d645-4621-b0fe-fb91f9235044", 00:21:40.669 "assigned_rate_limits": { 00:21:40.669 "rw_ios_per_sec": 0, 00:21:40.669 "rw_mbytes_per_sec": 0, 00:21:40.669 "r_mbytes_per_sec": 0, 00:21:40.669 "w_mbytes_per_sec": 0 00:21:40.669 }, 00:21:40.669 "claimed": false, 00:21:40.669 "zoned": false, 00:21:40.669 "supported_io_types": { 00:21:40.669 "read": true, 00:21:40.669 "write": true, 00:21:40.669 "unmap": false, 00:21:40.669 "flush": false, 00:21:40.669 "reset": true, 00:21:40.669 "nvme_admin": false, 00:21:40.669 "nvme_io": false, 00:21:40.669 "nvme_io_md": false, 00:21:40.669 "write_zeroes": true, 00:21:40.669 "zcopy": false, 00:21:40.669 "get_zone_info": false, 00:21:40.669 "zone_management": false, 00:21:40.669 "zone_append": false, 00:21:40.669 "compare": false, 00:21:40.669 "compare_and_write": false, 00:21:40.669 "abort": false, 00:21:40.669 "seek_hole": false, 00:21:40.669 "seek_data": false, 00:21:40.669 "copy": false, 00:21:40.669 "nvme_iov_md": false 00:21:40.669 }, 00:21:40.669 "memory_domains": [ 00:21:40.669 { 00:21:40.669 "dma_device_id": "system", 00:21:40.669 "dma_device_type": 1 00:21:40.669 }, 00:21:40.669 { 00:21:40.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.669 "dma_device_type": 2 00:21:40.669 }, 00:21:40.669 { 00:21:40.669 "dma_device_id": "system", 00:21:40.669 "dma_device_type": 1 00:21:40.669 }, 00:21:40.669 { 00:21:40.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.669 "dma_device_type": 2 00:21:40.669 }, 00:21:40.669 { 00:21:40.669 "dma_device_id": "system", 00:21:40.669 "dma_device_type": 1 00:21:40.669 }, 00:21:40.669 { 00:21:40.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.669 "dma_device_type": 2 00:21:40.669 } 00:21:40.669 ], 00:21:40.669 "driver_specific": { 00:21:40.669 "raid": { 00:21:40.669 "uuid": "406e0b30-d645-4621-b0fe-fb91f9235044", 00:21:40.669 "strip_size_kb": 0, 00:21:40.669 "state": "online", 00:21:40.669 "raid_level": "raid1", 00:21:40.669 "superblock": true, 00:21:40.669 "num_base_bdevs": 3, 00:21:40.669 "num_base_bdevs_discovered": 3, 00:21:40.669 "num_base_bdevs_operational": 3, 00:21:40.670 "base_bdevs_list": [ 00:21:40.670 { 00:21:40.670 "name": "pt1", 00:21:40.670 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:40.670 "is_configured": true, 00:21:40.670 "data_offset": 2048, 00:21:40.670 "data_size": 63488 00:21:40.670 }, 00:21:40.670 { 00:21:40.670 "name": "pt2", 00:21:40.670 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:40.670 "is_configured": true, 00:21:40.670 "data_offset": 2048, 00:21:40.670 "data_size": 63488 00:21:40.670 }, 00:21:40.670 { 00:21:40.670 "name": "pt3", 00:21:40.670 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:40.670 "is_configured": true, 00:21:40.670 "data_offset": 2048, 00:21:40.670 "data_size": 63488 00:21:40.670 } 00:21:40.670 ] 00:21:40.670 } 00:21:40.670 } 00:21:40.670 }' 
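(A condensed sketch, not part of the captured trace: the per-base-bdev property check that follows boils down to the loop below, using the same rpc.py call and jq filters seen in the trace.)

  # names of the configured base bdevs of raid_bdev1 (pt1 pt2 pt3)
  names=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_get_bdevs -b raid_bdev1 |
      jq -r '.[] | .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
  for name in $names; do
      info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          bdev_get_bdevs -b "$name" | jq '.[]')
      # each passthru base bdev reports a 512-byte block size and no metadata or DIF
      [[ $(jq .block_size    <<< "$info") == 512  ]]
      [[ $(jq .md_size       <<< "$info") == null ]]
      [[ $(jq .md_interleave <<< "$info") == null ]]
      [[ $(jq .dif_type      <<< "$info") == null ]]
  done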
00:21:40.670 18:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:40.670 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:40.670 pt2 00:21:40.670 pt3' 00:21:40.670 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:40.670 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:40.670 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:40.928 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:40.928 "name": "pt1", 00:21:40.928 "aliases": [ 00:21:40.928 "00000000-0000-0000-0000-000000000001" 00:21:40.928 ], 00:21:40.928 "product_name": "passthru", 00:21:40.928 "block_size": 512, 00:21:40.928 "num_blocks": 65536, 00:21:40.928 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:40.928 "assigned_rate_limits": { 00:21:40.928 "rw_ios_per_sec": 0, 00:21:40.928 "rw_mbytes_per_sec": 0, 00:21:40.928 "r_mbytes_per_sec": 0, 00:21:40.928 "w_mbytes_per_sec": 0 00:21:40.928 }, 00:21:40.928 "claimed": true, 00:21:40.928 "claim_type": "exclusive_write", 00:21:40.928 "zoned": false, 00:21:40.928 "supported_io_types": { 00:21:40.928 "read": true, 00:21:40.928 "write": true, 00:21:40.928 "unmap": true, 00:21:40.928 "flush": true, 00:21:40.928 "reset": true, 00:21:40.928 "nvme_admin": false, 00:21:40.928 "nvme_io": false, 00:21:40.928 "nvme_io_md": false, 00:21:40.928 "write_zeroes": true, 00:21:40.928 "zcopy": true, 00:21:40.928 "get_zone_info": false, 00:21:40.928 "zone_management": false, 00:21:40.928 "zone_append": false, 00:21:40.928 "compare": false, 00:21:40.928 "compare_and_write": false, 00:21:40.928 "abort": true, 00:21:40.928 "seek_hole": false, 00:21:40.928 "seek_data": false, 00:21:40.928 "copy": true, 00:21:40.928 "nvme_iov_md": false 00:21:40.928 }, 00:21:40.928 "memory_domains": [ 00:21:40.928 { 00:21:40.928 "dma_device_id": "system", 00:21:40.928 "dma_device_type": 1 00:21:40.928 }, 00:21:40.928 { 00:21:40.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.928 "dma_device_type": 2 00:21:40.928 } 00:21:40.928 ], 00:21:40.928 "driver_specific": { 00:21:40.928 "passthru": { 00:21:40.928 "name": "pt1", 00:21:40.928 "base_bdev_name": "malloc1" 00:21:40.928 } 00:21:40.928 } 00:21:40.928 }' 00:21:40.928 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:40.928 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:40.928 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:40.928 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:40.928 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:40.928 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:40.928 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:40.928 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:41.187 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:41.187 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:41.187 18:48:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:41.187 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:41.187 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:41.187 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:41.187 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:41.445 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:41.445 "name": "pt2", 00:21:41.445 "aliases": [ 00:21:41.445 "00000000-0000-0000-0000-000000000002" 00:21:41.445 ], 00:21:41.445 "product_name": "passthru", 00:21:41.445 "block_size": 512, 00:21:41.445 "num_blocks": 65536, 00:21:41.445 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:41.445 "assigned_rate_limits": { 00:21:41.445 "rw_ios_per_sec": 0, 00:21:41.445 "rw_mbytes_per_sec": 0, 00:21:41.445 "r_mbytes_per_sec": 0, 00:21:41.445 "w_mbytes_per_sec": 0 00:21:41.445 }, 00:21:41.445 "claimed": true, 00:21:41.445 "claim_type": "exclusive_write", 00:21:41.445 "zoned": false, 00:21:41.445 "supported_io_types": { 00:21:41.445 "read": true, 00:21:41.445 "write": true, 00:21:41.445 "unmap": true, 00:21:41.445 "flush": true, 00:21:41.445 "reset": true, 00:21:41.445 "nvme_admin": false, 00:21:41.445 "nvme_io": false, 00:21:41.445 "nvme_io_md": false, 00:21:41.445 "write_zeroes": true, 00:21:41.445 "zcopy": true, 00:21:41.445 "get_zone_info": false, 00:21:41.445 "zone_management": false, 00:21:41.445 "zone_append": false, 00:21:41.445 "compare": false, 00:21:41.445 "compare_and_write": false, 00:21:41.445 "abort": true, 00:21:41.445 "seek_hole": false, 00:21:41.445 "seek_data": false, 00:21:41.445 "copy": true, 00:21:41.445 "nvme_iov_md": false 00:21:41.445 }, 00:21:41.445 "memory_domains": [ 00:21:41.445 { 00:21:41.445 "dma_device_id": "system", 00:21:41.445 "dma_device_type": 1 00:21:41.445 }, 00:21:41.445 { 00:21:41.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.445 "dma_device_type": 2 00:21:41.445 } 00:21:41.445 ], 00:21:41.445 "driver_specific": { 00:21:41.445 "passthru": { 00:21:41.446 "name": "pt2", 00:21:41.446 "base_bdev_name": "malloc2" 00:21:41.446 } 00:21:41.446 } 00:21:41.446 }' 00:21:41.446 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:41.446 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:41.446 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:41.446 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:41.446 18:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:41.703 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:41.703 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:41.703 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:41.703 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:41.703 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:41.703 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:41.703 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# [[ null == null ]] 00:21:41.703 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:41.703 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:41.703 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:41.962 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:41.962 "name": "pt3", 00:21:41.962 "aliases": [ 00:21:41.962 "00000000-0000-0000-0000-000000000003" 00:21:41.962 ], 00:21:41.962 "product_name": "passthru", 00:21:41.962 "block_size": 512, 00:21:41.962 "num_blocks": 65536, 00:21:41.962 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:41.962 "assigned_rate_limits": { 00:21:41.962 "rw_ios_per_sec": 0, 00:21:41.962 "rw_mbytes_per_sec": 0, 00:21:41.962 "r_mbytes_per_sec": 0, 00:21:41.962 "w_mbytes_per_sec": 0 00:21:41.962 }, 00:21:41.962 "claimed": true, 00:21:41.962 "claim_type": "exclusive_write", 00:21:41.962 "zoned": false, 00:21:41.962 "supported_io_types": { 00:21:41.962 "read": true, 00:21:41.962 "write": true, 00:21:41.962 "unmap": true, 00:21:41.962 "flush": true, 00:21:41.962 "reset": true, 00:21:41.962 "nvme_admin": false, 00:21:41.962 "nvme_io": false, 00:21:41.962 "nvme_io_md": false, 00:21:41.962 "write_zeroes": true, 00:21:41.962 "zcopy": true, 00:21:41.962 "get_zone_info": false, 00:21:41.962 "zone_management": false, 00:21:41.962 "zone_append": false, 00:21:41.962 "compare": false, 00:21:41.962 "compare_and_write": false, 00:21:41.962 "abort": true, 00:21:41.962 "seek_hole": false, 00:21:41.962 "seek_data": false, 00:21:41.962 "copy": true, 00:21:41.962 "nvme_iov_md": false 00:21:41.962 }, 00:21:41.962 "memory_domains": [ 00:21:41.962 { 00:21:41.962 "dma_device_id": "system", 00:21:41.962 "dma_device_type": 1 00:21:41.962 }, 00:21:41.962 { 00:21:41.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.962 "dma_device_type": 2 00:21:41.962 } 00:21:41.962 ], 00:21:41.962 "driver_specific": { 00:21:41.962 "passthru": { 00:21:41.962 "name": "pt3", 00:21:41.962 "base_bdev_name": "malloc3" 00:21:41.962 } 00:21:41.962 } 00:21:41.962 }' 00:21:41.962 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:41.962 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:41.962 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:41.962 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:42.220 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:42.220 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:42.220 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:42.220 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:42.220 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:42.220 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:42.220 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:42.220 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:42.220 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:42.220 18:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:21:42.478 [2024-07-25 18:48:42.987663] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:42.478 18:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 406e0b30-d645-4621-b0fe-fb91f9235044 '!=' 406e0b30-d645-4621-b0fe-fb91f9235044 ']' 00:21:42.478 18:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:21:42.478 18:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:42.478 18:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:42.478 18:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:42.759 [2024-07-25 18:48:43.255585] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:42.759 18:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:42.759 18:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:42.759 18:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:42.759 18:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:42.759 18:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:42.759 18:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:42.759 18:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:42.759 18:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:42.759 18:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:42.759 18:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:42.759 18:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.759 18:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.033 18:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:43.033 "name": "raid_bdev1", 00:21:43.033 "uuid": "406e0b30-d645-4621-b0fe-fb91f9235044", 00:21:43.033 "strip_size_kb": 0, 00:21:43.033 "state": "online", 00:21:43.033 "raid_level": "raid1", 00:21:43.033 "superblock": true, 00:21:43.033 "num_base_bdevs": 3, 00:21:43.033 "num_base_bdevs_discovered": 2, 00:21:43.033 "num_base_bdevs_operational": 2, 00:21:43.033 "base_bdevs_list": [ 00:21:43.033 { 00:21:43.033 "name": null, 00:21:43.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.033 "is_configured": false, 00:21:43.033 "data_offset": 2048, 00:21:43.033 "data_size": 63488 00:21:43.033 }, 00:21:43.033 { 00:21:43.033 "name": "pt2", 00:21:43.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.033 "is_configured": true, 00:21:43.033 "data_offset": 2048, 00:21:43.033 "data_size": 63488 00:21:43.033 }, 00:21:43.033 { 00:21:43.033 "name": "pt3", 00:21:43.033 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:43.033 "is_configured": true, 00:21:43.033 "data_offset": 2048, 00:21:43.033 
"data_size": 63488 00:21:43.033 } 00:21:43.033 ] 00:21:43.033 }' 00:21:43.033 18:48:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:43.033 18:48:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.598 18:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:43.855 [2024-07-25 18:48:44.227666] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:43.855 [2024-07-25 18:48:44.227837] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:43.855 [2024-07-25 18:48:44.228065] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:43.855 [2024-07-25 18:48:44.228222] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:43.855 [2024-07-25 18:48:44.228305] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:21:43.855 18:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.855 18:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:21:44.113 18:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:21:44.113 18:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:21:44.113 18:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:44.113 18:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:21:44.113 18:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:44.371 18:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:44.371 18:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:21:44.371 18:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:44.371 18:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:44.371 18:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:21:44.371 18:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:21:44.371 18:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:21:44.371 18:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:44.629 [2024-07-25 18:48:45.084706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:44.629 [2024-07-25 18:48:45.085000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:44.629 [2024-07-25 18:48:45.085075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:44.629 [2024-07-25 18:48:45.085193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:44.629 [2024-07-25 18:48:45.087863] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:21:44.629 [2024-07-25 18:48:45.088034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:44.629 [2024-07-25 18:48:45.088301] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:44.629 [2024-07-25 18:48:45.088430] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:44.629 pt2 00:21:44.629 18:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:44.629 18:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:44.629 18:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:44.629 18:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:44.629 18:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:44.629 18:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:44.629 18:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:44.629 18:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:44.629 18:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:44.629 18:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:44.629 18:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.629 18:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.885 18:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:44.885 "name": "raid_bdev1", 00:21:44.885 "uuid": "406e0b30-d645-4621-b0fe-fb91f9235044", 00:21:44.885 "strip_size_kb": 0, 00:21:44.885 "state": "configuring", 00:21:44.885 "raid_level": "raid1", 00:21:44.885 "superblock": true, 00:21:44.885 "num_base_bdevs": 3, 00:21:44.885 "num_base_bdevs_discovered": 1, 00:21:44.885 "num_base_bdevs_operational": 2, 00:21:44.885 "base_bdevs_list": [ 00:21:44.885 { 00:21:44.885 "name": null, 00:21:44.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.885 "is_configured": false, 00:21:44.885 "data_offset": 2048, 00:21:44.885 "data_size": 63488 00:21:44.885 }, 00:21:44.885 { 00:21:44.885 "name": "pt2", 00:21:44.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:44.885 "is_configured": true, 00:21:44.885 "data_offset": 2048, 00:21:44.885 "data_size": 63488 00:21:44.885 }, 00:21:44.885 { 00:21:44.885 "name": null, 00:21:44.885 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:44.885 "is_configured": false, 00:21:44.885 "data_offset": 2048, 00:21:44.885 "data_size": 63488 00:21:44.885 } 00:21:44.885 ] 00:21:44.885 }' 00:21:44.885 18:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:44.885 18:48:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.451 18:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:21:45.451 18:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:21:45.451 18:48:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:21:45.451 18:48:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:45.708 [2024-07-25 18:48:46.144939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:45.708 [2024-07-25 18:48:46.145235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:45.708 [2024-07-25 18:48:46.145344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:21:45.708 [2024-07-25 18:48:46.145457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:45.708 [2024-07-25 18:48:46.146040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:45.708 [2024-07-25 18:48:46.146185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:45.708 [2024-07-25 18:48:46.146399] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:45.708 [2024-07-25 18:48:46.146530] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:45.709 [2024-07-25 18:48:46.146720] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:21:45.709 [2024-07-25 18:48:46.146809] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:45.709 [2024-07-25 18:48:46.146961] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:45.709 [2024-07-25 18:48:46.147406] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:21:45.709 [2024-07-25 18:48:46.147513] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013480 00:21:45.709 [2024-07-25 18:48:46.147754] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.709 pt3 00:21:45.709 18:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:45.709 18:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:45.709 18:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:45.709 18:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:45.709 18:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:45.709 18:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:45.709 18:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:45.709 18:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:45.709 18:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:45.709 18:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:45.709 18:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.709 18:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.966 18:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:45.966 "name": "raid_bdev1", 00:21:45.966 "uuid": 
"406e0b30-d645-4621-b0fe-fb91f9235044", 00:21:45.966 "strip_size_kb": 0, 00:21:45.966 "state": "online", 00:21:45.966 "raid_level": "raid1", 00:21:45.966 "superblock": true, 00:21:45.966 "num_base_bdevs": 3, 00:21:45.966 "num_base_bdevs_discovered": 2, 00:21:45.966 "num_base_bdevs_operational": 2, 00:21:45.966 "base_bdevs_list": [ 00:21:45.966 { 00:21:45.966 "name": null, 00:21:45.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.966 "is_configured": false, 00:21:45.966 "data_offset": 2048, 00:21:45.966 "data_size": 63488 00:21:45.966 }, 00:21:45.966 { 00:21:45.966 "name": "pt2", 00:21:45.966 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:45.966 "is_configured": true, 00:21:45.966 "data_offset": 2048, 00:21:45.966 "data_size": 63488 00:21:45.966 }, 00:21:45.966 { 00:21:45.966 "name": "pt3", 00:21:45.966 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:45.966 "is_configured": true, 00:21:45.966 "data_offset": 2048, 00:21:45.966 "data_size": 63488 00:21:45.966 } 00:21:45.966 ] 00:21:45.966 }' 00:21:45.966 18:48:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:45.966 18:48:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.532 18:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:46.792 [2024-07-25 18:48:47.193102] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:46.792 [2024-07-25 18:48:47.193284] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:46.792 [2024-07-25 18:48:47.193516] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:46.792 [2024-07-25 18:48:47.193675] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:46.792 [2024-07-25 18:48:47.193751] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name raid_bdev1, state offline 00:21:46.792 18:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.792 18:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:21:47.050 18:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:21:47.050 18:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:21:47.050 18:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 3 -gt 2 ']' 00:21:47.050 18:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # i=2 00:21:47.050 18:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:47.307 18:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:47.307 [2024-07-25 18:48:47.817194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:47.307 [2024-07-25 18:48:47.817455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.307 [2024-07-25 18:48:47.817534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:47.307 [2024-07-25 18:48:47.817646] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.307 [2024-07-25 18:48:47.820400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.307 [2024-07-25 18:48:47.820593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:47.307 [2024-07-25 18:48:47.820803] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:47.308 [2024-07-25 18:48:47.820928] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:47.308 [2024-07-25 18:48:47.821147] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:47.308 [2024-07-25 18:48:47.821252] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:47.308 [2024-07-25 18:48:47.821301] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013800 name raid_bdev1, state configuring 00:21:47.308 [2024-07-25 18:48:47.821402] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:47.308 pt1 00:21:47.308 18:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 3 -gt 2 ']' 00:21:47.308 18:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:47.308 18:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:47.308 18:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:47.308 18:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:47.308 18:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:47.308 18:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:47.308 18:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:47.308 18:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:47.308 18:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:47.308 18:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:47.308 18:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.308 18:48:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.565 18:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:47.566 "name": "raid_bdev1", 00:21:47.566 "uuid": "406e0b30-d645-4621-b0fe-fb91f9235044", 00:21:47.566 "strip_size_kb": 0, 00:21:47.566 "state": "configuring", 00:21:47.566 "raid_level": "raid1", 00:21:47.566 "superblock": true, 00:21:47.566 "num_base_bdevs": 3, 00:21:47.566 "num_base_bdevs_discovered": 1, 00:21:47.566 "num_base_bdevs_operational": 2, 00:21:47.566 "base_bdevs_list": [ 00:21:47.566 { 00:21:47.566 "name": null, 00:21:47.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.566 "is_configured": false, 00:21:47.566 "data_offset": 2048, 00:21:47.566 "data_size": 63488 00:21:47.566 }, 00:21:47.566 { 00:21:47.566 "name": "pt2", 00:21:47.566 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:47.566 "is_configured": true, 00:21:47.566 "data_offset": 2048, 
00:21:47.566 "data_size": 63488 00:21:47.566 }, 00:21:47.566 { 00:21:47.566 "name": null, 00:21:47.566 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:47.566 "is_configured": false, 00:21:47.566 "data_offset": 2048, 00:21:47.566 "data_size": 63488 00:21:47.566 } 00:21:47.566 ] 00:21:47.566 }' 00:21:47.566 18:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:47.566 18:48:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.131 18:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:21:48.131 18:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:48.388 18:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:21:48.388 18:48:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:48.645 [2024-07-25 18:48:49.137473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:48.645 [2024-07-25 18:48:49.137755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.645 [2024-07-25 18:48:49.137844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:48.645 [2024-07-25 18:48:49.137957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.645 [2024-07-25 18:48:49.138535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.645 [2024-07-25 18:48:49.138690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:48.645 [2024-07-25 18:48:49.138893] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:48.645 [2024-07-25 18:48:49.139015] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:48.645 [2024-07-25 18:48:49.139194] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013b80 00:21:48.645 [2024-07-25 18:48:49.139300] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:48.645 [2024-07-25 18:48:49.139453] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:21:48.645 [2024-07-25 18:48:49.139914] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013b80 00:21:48.645 [2024-07-25 18:48:49.140022] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013b80 00:21:48.645 [2024-07-25 18:48:49.140245] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:48.645 pt3 00:21:48.645 18:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:48.646 18:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:48.646 18:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:48.646 18:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:48.646 18:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:48.646 18:48:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:48.646 18:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:48.646 18:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:48.646 18:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:48.646 18:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:48.646 18:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.646 18:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.903 18:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:48.903 "name": "raid_bdev1", 00:21:48.903 "uuid": "406e0b30-d645-4621-b0fe-fb91f9235044", 00:21:48.903 "strip_size_kb": 0, 00:21:48.903 "state": "online", 00:21:48.903 "raid_level": "raid1", 00:21:48.903 "superblock": true, 00:21:48.903 "num_base_bdevs": 3, 00:21:48.903 "num_base_bdevs_discovered": 2, 00:21:48.903 "num_base_bdevs_operational": 2, 00:21:48.903 "base_bdevs_list": [ 00:21:48.903 { 00:21:48.903 "name": null, 00:21:48.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.903 "is_configured": false, 00:21:48.903 "data_offset": 2048, 00:21:48.903 "data_size": 63488 00:21:48.903 }, 00:21:48.903 { 00:21:48.903 "name": "pt2", 00:21:48.903 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:48.903 "is_configured": true, 00:21:48.903 "data_offset": 2048, 00:21:48.903 "data_size": 63488 00:21:48.903 }, 00:21:48.903 { 00:21:48.903 "name": "pt3", 00:21:48.903 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:48.903 "is_configured": true, 00:21:48.903 "data_offset": 2048, 00:21:48.903 "data_size": 63488 00:21:48.903 } 00:21:48.903 ] 00:21:48.903 }' 00:21:48.903 18:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:48.903 18:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.468 18:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:21:49.468 18:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:49.725 18:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:21:49.725 18:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:49.725 18:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:21:49.983 [2024-07-25 18:48:50.481919] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:49.983 18:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 406e0b30-d645-4621-b0fe-fb91f9235044 '!=' 406e0b30-d645-4621-b0fe-fb91f9235044 ']' 00:21:49.983 18:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 132557 00:21:49.983 18:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 132557 ']' 00:21:49.983 18:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 132557 00:21:49.983 18:48:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:21:49.983 18:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:49.983 18:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 132557 00:21:49.983 18:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:49.983 18:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:49.983 18:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 132557' 00:21:49.983 killing process with pid 132557 00:21:49.983 18:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 132557 00:21:49.983 [2024-07-25 18:48:50.532213] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:49.983 18:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 132557 00:21:49.983 [2024-07-25 18:48:50.532443] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:49.983 [2024-07-25 18:48:50.532695] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:49.983 [2024-07-25 18:48:50.532782] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013b80 name raid_bdev1, state offline 00:21:50.242 [2024-07-25 18:48:50.787811] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:51.614 18:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:21:51.614 ************************************ 00:21:51.614 END TEST raid_superblock_test 00:21:51.614 ************************************ 00:21:51.614 00:21:51.614 real 0m21.568s 00:21:51.614 user 0m38.260s 00:21:51.614 sys 0m3.699s 00:21:51.614 18:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:51.614 18:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.614 18:48:52 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:21:51.614 18:48:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:51.614 18:48:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:51.614 18:48:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:51.614 ************************************ 00:21:51.614 START TEST raid_read_error_test 00:21:51.614 ************************************ 00:21:51.614 18:48:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:21:51.614 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:21:51.614 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:21:51.614 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:21:51.614 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:21:51.614 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:51.614 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev1 00:21:51.614 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:51.614 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:51.614 18:48:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev2 00:21:51.614 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:51.614 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:51.614 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev3 00:21:51.614 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:51.614 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:51.614 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:51.614 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:21:51.614 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:21:51.614 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:21:51.615 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:21:51.615 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:21:51.615 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:21:51.615 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:21:51.615 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:21:51.615 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:21:51.615 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.Ti6Wems6cC 00:21:51.615 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=133288 00:21:51.615 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:51.615 18:48:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 133288 /var/tmp/spdk-raid.sock 00:21:51.615 18:48:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 133288 ']' 00:21:51.615 18:48:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:51.615 18:48:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:51.615 18:48:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:51.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:51.615 18:48:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:51.615 18:48:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.615 [2024-07-25 18:48:52.154206] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
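(The read-error test starting here builds its raid1 volume from error-injection bdevs. Condensed from the RPC calls recorded in the trace below, a minimal sketch of that setup — socket path, sizes and names taken verbatim from the trace, assuming a bdevperf instance already started with -z so it waits for RPC configuration:)

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3; do
  $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc            # 32 MB backing store, 512-byte blocks
  $RPC bdev_error_create BaseBdev${i}_malloc                       # exposes EE_BaseBdev${i}_malloc
  $RPC bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
done
$RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s   # -s: create with superblock
$RPC bdev_error_inject_error EE_BaseBdev1_malloc read failure      # make reads on the first base device fail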
00:21:51.615 [2024-07-25 18:48:52.154998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133288 ] 00:21:51.872 [2024-07-25 18:48:52.341713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.131 [2024-07-25 18:48:52.602622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.389 [2024-07-25 18:48:52.872040] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:52.648 18:48:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:52.648 18:48:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:21:52.648 18:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:52.648 18:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:52.906 BaseBdev1_malloc 00:21:52.906 18:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:53.164 true 00:21:53.164 18:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:53.423 [2024-07-25 18:48:53.744280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:53.423 [2024-07-25 18:48:53.744583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:53.423 [2024-07-25 18:48:53.744718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:21:53.423 [2024-07-25 18:48:53.744806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:53.423 [2024-07-25 18:48:53.747594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:53.423 [2024-07-25 18:48:53.747778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:53.423 BaseBdev1 00:21:53.423 18:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:53.423 18:48:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:53.682 BaseBdev2_malloc 00:21:53.682 18:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:53.682 true 00:21:53.682 18:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:53.940 [2024-07-25 18:48:54.410041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:53.940 [2024-07-25 18:48:54.410365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:53.940 [2024-07-25 18:48:54.410540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:53.940 [2024-07-25 18:48:54.410650] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:53.940 [2024-07-25 18:48:54.413331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:53.940 [2024-07-25 18:48:54.413489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:53.940 BaseBdev2 00:21:53.940 18:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:53.940 18:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:54.198 BaseBdev3_malloc 00:21:54.198 18:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:54.456 true 00:21:54.456 18:48:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:54.456 [2024-07-25 18:48:55.001835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:54.456 [2024-07-25 18:48:55.002103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.456 [2024-07-25 18:48:55.002224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:21:54.456 [2024-07-25 18:48:55.002321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.456 [2024-07-25 18:48:55.005050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.456 [2024-07-25 18:48:55.005215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:54.456 BaseBdev3 00:21:54.456 18:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:21:54.715 [2024-07-25 18:48:55.178151] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:54.715 [2024-07-25 18:48:55.180564] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:54.715 [2024-07-25 18:48:55.180778] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:54.715 [2024-07-25 18:48:55.181013] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:21:54.715 [2024-07-25 18:48:55.181116] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:54.715 [2024-07-25 18:48:55.181301] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:21:54.715 [2024-07-25 18:48:55.181804] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:21:54.715 [2024-07-25 18:48:55.181904] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013480 00:21:54.715 [2024-07-25 18:48:55.182205] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:54.716 18:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:54.716 18:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:54.716 18:48:55 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:54.716 18:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:54.716 18:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:54.716 18:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:54.716 18:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:54.716 18:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:54.716 18:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:54.716 18:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:54.716 18:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.716 18:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.974 18:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:54.974 "name": "raid_bdev1", 00:21:54.974 "uuid": "e8db6308-ce07-4291-88f7-e1b701f58fc4", 00:21:54.974 "strip_size_kb": 0, 00:21:54.974 "state": "online", 00:21:54.974 "raid_level": "raid1", 00:21:54.974 "superblock": true, 00:21:54.974 "num_base_bdevs": 3, 00:21:54.974 "num_base_bdevs_discovered": 3, 00:21:54.974 "num_base_bdevs_operational": 3, 00:21:54.974 "base_bdevs_list": [ 00:21:54.974 { 00:21:54.975 "name": "BaseBdev1", 00:21:54.975 "uuid": "4fce39a7-11ca-5356-8492-977ab417e216", 00:21:54.975 "is_configured": true, 00:21:54.975 "data_offset": 2048, 00:21:54.975 "data_size": 63488 00:21:54.975 }, 00:21:54.975 { 00:21:54.975 "name": "BaseBdev2", 00:21:54.975 "uuid": "0971d7a9-1932-5254-a08f-4715fa530352", 00:21:54.975 "is_configured": true, 00:21:54.975 "data_offset": 2048, 00:21:54.975 "data_size": 63488 00:21:54.975 }, 00:21:54.975 { 00:21:54.975 "name": "BaseBdev3", 00:21:54.975 "uuid": "9d35f131-3c45-5654-8ba1-fa8a8984e380", 00:21:54.975 "is_configured": true, 00:21:54.975 "data_offset": 2048, 00:21:54.975 "data_size": 63488 00:21:54.975 } 00:21:54.975 ] 00:21:54.975 }' 00:21:54.975 18:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:54.975 18:48:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.542 18:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:55.542 18:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:21:55.542 [2024-07-25 18:48:55.940123] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:56.477 18:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:56.736 18:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:21:56.736 18:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:56.736 18:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:21:56.736 18:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # 
expected_num_base_bdevs=3 00:21:56.736 18:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:56.736 18:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:56.736 18:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:56.736 18:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:56.736 18:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:56.736 18:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:56.736 18:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:56.736 18:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:56.736 18:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:56.736 18:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:56.736 18:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.736 18:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.736 18:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:56.736 "name": "raid_bdev1", 00:21:56.736 "uuid": "e8db6308-ce07-4291-88f7-e1b701f58fc4", 00:21:56.736 "strip_size_kb": 0, 00:21:56.736 "state": "online", 00:21:56.736 "raid_level": "raid1", 00:21:56.736 "superblock": true, 00:21:56.736 "num_base_bdevs": 3, 00:21:56.736 "num_base_bdevs_discovered": 3, 00:21:56.736 "num_base_bdevs_operational": 3, 00:21:56.736 "base_bdevs_list": [ 00:21:56.736 { 00:21:56.736 "name": "BaseBdev1", 00:21:56.736 "uuid": "4fce39a7-11ca-5356-8492-977ab417e216", 00:21:56.736 "is_configured": true, 00:21:56.736 "data_offset": 2048, 00:21:56.736 "data_size": 63488 00:21:56.736 }, 00:21:56.736 { 00:21:56.737 "name": "BaseBdev2", 00:21:56.737 "uuid": "0971d7a9-1932-5254-a08f-4715fa530352", 00:21:56.737 "is_configured": true, 00:21:56.737 "data_offset": 2048, 00:21:56.737 "data_size": 63488 00:21:56.737 }, 00:21:56.737 { 00:21:56.737 "name": "BaseBdev3", 00:21:56.737 "uuid": "9d35f131-3c45-5654-8ba1-fa8a8984e380", 00:21:56.737 "is_configured": true, 00:21:56.737 "data_offset": 2048, 00:21:56.737 "data_size": 63488 00:21:56.737 } 00:21:56.737 ] 00:21:56.737 }' 00:21:56.737 18:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:56.737 18:48:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.723 18:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:57.723 [2024-07-25 18:48:58.080802] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:57.723 [2024-07-25 18:48:58.081124] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:57.723 [2024-07-25 18:48:58.083803] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:57.723 [2024-07-25 18:48:58.083987] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:57.723 [2024-07-25 18:48:58.084117] bdev_raid.c: 
464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:57.723 [2024-07-25 18:48:58.084332] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name raid_bdev1, state offline 00:21:57.723 0 00:21:57.723 18:48:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 133288 00:21:57.723 18:48:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 133288 ']' 00:21:57.723 18:48:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 133288 00:21:57.723 18:48:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:21:57.723 18:48:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:57.723 18:48:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 133288 00:21:57.723 18:48:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:57.723 18:48:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:57.723 18:48:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 133288' 00:21:57.723 killing process with pid 133288 00:21:57.723 18:48:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 133288 00:21:57.723 18:48:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 133288 00:21:57.723 [2024-07-25 18:48:58.129321] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:57.982 [2024-07-25 18:48:58.379696] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:59.358 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.Ti6Wems6cC 00:21:59.358 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:21:59.358 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:21:59.358 ************************************ 00:21:59.358 END TEST raid_read_error_test 00:21:59.358 ************************************ 00:21:59.358 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:21:59.358 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:21:59.358 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:59.358 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:59.358 18:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:59.358 00:21:59.358 real 0m7.881s 00:21:59.358 user 0m11.083s 00:21:59.358 sys 0m1.244s 00:21:59.358 18:48:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:59.358 18:48:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.616 18:48:59 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:21:59.616 18:48:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:59.616 18:48:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:59.616 18:48:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:59.616 ************************************ 00:21:59.616 START TEST raid_write_error_test 00:21:59.616 ************************************ 00:21:59.616 18:48:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:21:59.616 18:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:21:59.616 18:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:21:59.616 18:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:21:59.616 18:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:21:59.616 18:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:59.616 18:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev1 00:21:59.616 18:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:59.616 18:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:59.616 18:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev2 00:21:59.616 18:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:59.616 18:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:59.616 18:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev3 00:21:59.616 18:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:59.616 18:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:59.616 18:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:59.616 18:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:21:59.616 18:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:21:59.616 18:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:21:59.616 18:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:21:59.616 18:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:21:59.616 18:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:21:59.616 18:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:21:59.616 18:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:21:59.616 18:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:21:59.616 18:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.yvDS4FxoTE 00:21:59.616 18:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=133493 00:21:59.616 18:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 133493 /var/tmp/spdk-raid.sock 00:21:59.616 18:49:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 133493 ']' 00:21:59.616 18:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:59.616 18:49:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:59.617 18:49:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 
00:21:59.617 18:49:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:59.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:59.617 18:49:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:59.617 18:49:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.617 [2024-07-25 18:49:00.107902] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:59.617 [2024-07-25 18:49:00.108422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133493 ] 00:21:59.874 [2024-07-25 18:49:00.294187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.133 [2024-07-25 18:49:00.532701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.391 [2024-07-25 18:49:00.799127] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:00.648 18:49:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:00.648 18:49:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:22:00.648 18:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:00.648 18:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:00.905 BaseBdev1_malloc 00:22:00.905 18:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:01.163 true 00:22:01.163 18:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:01.163 [2024-07-25 18:49:01.699152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:01.163 [2024-07-25 18:49:01.699461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.163 [2024-07-25 18:49:01.699629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:22:01.163 [2024-07-25 18:49:01.699721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.163 [2024-07-25 18:49:01.702309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.163 [2024-07-25 18:49:01.702472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:01.163 BaseBdev1 00:22:01.163 18:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:01.163 18:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:01.421 BaseBdev2_malloc 00:22:01.421 18:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:01.679 true 
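The per-member setup traced above (and repeated for BaseBdev3 further down) layers three bdevs for each RAID member: a malloc backing bdev, an error-injection wrapper, and a passthru bdev that the RAID actually claims; the array is then assembled from the passthru bdevs. A minimal sketch of that sequence, assuming only the rpc.py path and socket shown in this trace (not a drop-in replacement for the test script):

    # Sketch of the member-bdev stack exercised by raid_write_error_test.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        $rpc bdev_malloc_create 32 512 -b "${b}_malloc"        # 32 MiB backing store, 512-byte blocks
        $rpc bdev_error_create "${b}_malloc"                   # wraps it as EE_${b}_malloc for fault injection
        $rpc bdev_passthru_create -b "EE_${b}_malloc" -p "$b"  # exposes the member the RAID will claim
    done
    $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s  # -s enables the superblock

Faults are later injected through the error layer only (bdev_error_inject_error EE_BaseBdev1_malloc write failure), leaving the passthru and RAID bdevs untouched.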
00:22:01.679 18:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:01.936 [2024-07-25 18:49:02.347636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:01.936 [2024-07-25 18:49:02.347978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.936 [2024-07-25 18:49:02.348119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:01.936 [2024-07-25 18:49:02.348207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.936 [2024-07-25 18:49:02.350971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.936 [2024-07-25 18:49:02.351156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:01.936 BaseBdev2 00:22:01.936 18:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:01.936 18:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:02.197 BaseBdev3_malloc 00:22:02.197 18:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:02.197 true 00:22:02.197 18:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:02.454 [2024-07-25 18:49:02.928305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:02.454 [2024-07-25 18:49:02.928591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.454 [2024-07-25 18:49:02.928682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:22:02.454 [2024-07-25 18:49:02.928777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.454 [2024-07-25 18:49:02.931512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.454 [2024-07-25 18:49:02.931696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:02.454 BaseBdev3 00:22:02.454 18:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:22:02.712 [2024-07-25 18:49:03.112430] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:02.712 [2024-07-25 18:49:03.114875] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:02.712 [2024-07-25 18:49:03.115075] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:02.712 [2024-07-25 18:49:03.115393] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:22:02.712 [2024-07-25 18:49:03.115491] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:02.712 [2024-07-25 18:49:03.115677] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:22:02.712 [2024-07-25 18:49:03.116126] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:22:02.712 [2024-07-25 18:49:03.116223] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013480 00:22:02.712 [2024-07-25 18:49:03.116507] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.712 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:02.712 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:02.712 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:02.712 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:02.712 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:02.712 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:02.712 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:02.712 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:02.712 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:02.713 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:02.713 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.713 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.976 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:02.976 "name": "raid_bdev1", 00:22:02.976 "uuid": "0e9c3848-c220-4832-90ae-1f003ba0c717", 00:22:02.976 "strip_size_kb": 0, 00:22:02.976 "state": "online", 00:22:02.976 "raid_level": "raid1", 00:22:02.976 "superblock": true, 00:22:02.976 "num_base_bdevs": 3, 00:22:02.976 "num_base_bdevs_discovered": 3, 00:22:02.976 "num_base_bdevs_operational": 3, 00:22:02.976 "base_bdevs_list": [ 00:22:02.976 { 00:22:02.976 "name": "BaseBdev1", 00:22:02.976 "uuid": "37cb95ed-1b44-5c81-9c60-4fadff5de05c", 00:22:02.976 "is_configured": true, 00:22:02.977 "data_offset": 2048, 00:22:02.977 "data_size": 63488 00:22:02.977 }, 00:22:02.977 { 00:22:02.977 "name": "BaseBdev2", 00:22:02.977 "uuid": "506aadfa-a9a6-589c-a427-c589a38cb504", 00:22:02.977 "is_configured": true, 00:22:02.977 "data_offset": 2048, 00:22:02.977 "data_size": 63488 00:22:02.977 }, 00:22:02.977 { 00:22:02.977 "name": "BaseBdev3", 00:22:02.977 "uuid": "aab5b493-4fa4-55af-9248-40621a7f10a6", 00:22:02.977 "is_configured": true, 00:22:02.977 "data_offset": 2048, 00:22:02.977 "data_size": 63488 00:22:02.977 } 00:22:02.977 ] 00:22:02.977 }' 00:22:02.977 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:02.977 18:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.546 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:22:03.546 18:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:03.546 [2024-07-25 18:49:03.898266] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:22:04.479 18:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:04.737 [2024-07-25 18:49:05.072630] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:22:04.737 [2024-07-25 18:49:05.073042] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:04.737 [2024-07-25 18:49:05.073340] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:22:04.737 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:22:04.737 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:22:04.737 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:22:04.737 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=2 00:22:04.737 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:04.737 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:04.737 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:04.737 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:04.737 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:04.737 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:04.737 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:04.738 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:04.738 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:04.738 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:04.738 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.738 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.995 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:04.995 "name": "raid_bdev1", 00:22:04.995 "uuid": "0e9c3848-c220-4832-90ae-1f003ba0c717", 00:22:04.995 "strip_size_kb": 0, 00:22:04.995 "state": "online", 00:22:04.995 "raid_level": "raid1", 00:22:04.995 "superblock": true, 00:22:04.995 "num_base_bdevs": 3, 00:22:04.995 "num_base_bdevs_discovered": 2, 00:22:04.995 "num_base_bdevs_operational": 2, 00:22:04.995 "base_bdevs_list": [ 00:22:04.995 { 00:22:04.995 "name": null, 00:22:04.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.995 "is_configured": false, 00:22:04.995 "data_offset": 2048, 00:22:04.995 "data_size": 63488 00:22:04.995 }, 00:22:04.995 { 00:22:04.995 "name": "BaseBdev2", 00:22:04.995 "uuid": "506aadfa-a9a6-589c-a427-c589a38cb504", 00:22:04.995 "is_configured": true, 00:22:04.995 "data_offset": 2048, 00:22:04.995 "data_size": 63488 00:22:04.995 }, 00:22:04.995 { 00:22:04.995 "name": "BaseBdev3", 00:22:04.995 "uuid": 
"aab5b493-4fa4-55af-9248-40621a7f10a6", 00:22:04.995 "is_configured": true, 00:22:04.995 "data_offset": 2048, 00:22:04.995 "data_size": 63488 00:22:04.995 } 00:22:04.995 ] 00:22:04.995 }' 00:22:04.995 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:04.995 18:49:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.562 18:49:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:05.562 [2024-07-25 18:49:06.117575] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:05.562 [2024-07-25 18:49:06.117918] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:05.562 [2024-07-25 18:49:06.120566] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:05.562 [2024-07-25 18:49:06.120736] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:05.562 [2024-07-25 18:49:06.120846] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:05.562 [2024-07-25 18:49:06.120921] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name raid_bdev1, state offline 00:22:05.562 0 00:22:05.821 18:49:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 133493 00:22:05.821 18:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 133493 ']' 00:22:05.821 18:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 133493 00:22:05.821 18:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:22:05.821 18:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:05.821 18:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 133493 00:22:05.821 18:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:05.821 18:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:05.821 18:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 133493' 00:22:05.821 killing process with pid 133493 00:22:05.821 18:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 133493 00:22:05.821 18:49:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 133493 00:22:05.821 [2024-07-25 18:49:06.170623] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:06.080 [2024-07-25 18:49:06.423239] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:07.458 18:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.yvDS4FxoTE 00:22:07.458 18:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:22:07.458 18:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:22:07.458 18:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:22:07.458 18:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:22:07.458 18:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:07.458 18:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- 
# return 0 00:22:07.458 18:49:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:22:07.458 00:22:07.458 real 0m7.989s 00:22:07.458 user 0m11.267s 00:22:07.458 sys 0m1.238s 00:22:07.458 18:49:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:07.458 18:49:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:07.458 ************************************ 00:22:07.458 END TEST raid_write_error_test 00:22:07.458 ************************************ 00:22:07.717 18:49:08 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:22:07.717 18:49:08 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:22:07.717 18:49:08 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:22:07.717 18:49:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:07.717 18:49:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:07.717 18:49:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:07.717 ************************************ 00:22:07.717 START TEST raid_state_function_test 00:22:07.717 ************************************ 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 
00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=133688 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 133688' 00:22:07.717 Process raid pid: 133688 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 133688 /var/tmp/spdk-raid.sock 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 133688 ']' 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:07.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:07.717 18:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:07.717 [2024-07-25 18:49:08.139949] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
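The verify_raid_bdev_state checks used throughout these tests (and again below for Existed_Raid) reduce to a single RPC plus a jq filter over its output, compared field by field against the expected values. A sketch of that check under the same socket-path assumption; the real helper also validates raid_level, strip_size and the base_bdevs_list:

    # Sketch of the state check: fetch all raid bdevs, pick one, compare fields.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r '.state' <<< "$info") == configuring ]]             # expected_state
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq 0 ]]  # members found so far, as in the first check below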
00:22:07.717 [2024-07-25 18:49:08.140361] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.975 [2024-07-25 18:49:08.306438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.975 [2024-07-25 18:49:08.524239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.233 [2024-07-25 18:49:08.718027] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:08.801 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:08.801 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:22:08.801 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:08.801 [2024-07-25 18:49:09.234982] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:08.801 [2024-07-25 18:49:09.235320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:08.801 [2024-07-25 18:49:09.235418] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:08.801 [2024-07-25 18:49:09.235527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:08.801 [2024-07-25 18:49:09.235603] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:08.801 [2024-07-25 18:49:09.235651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:08.801 [2024-07-25 18:49:09.235724] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:08.801 [2024-07-25 18:49:09.235826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:08.801 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:08.801 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:08.801 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:08.801 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:08.801 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:08.801 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:08.801 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:08.801 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:08.801 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:08.801 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:08.801 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.801 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.059 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:09.059 "name": "Existed_Raid", 00:22:09.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.059 "strip_size_kb": 64, 00:22:09.059 "state": "configuring", 00:22:09.059 "raid_level": "raid0", 00:22:09.059 "superblock": false, 00:22:09.059 "num_base_bdevs": 4, 00:22:09.059 "num_base_bdevs_discovered": 0, 00:22:09.059 "num_base_bdevs_operational": 4, 00:22:09.059 "base_bdevs_list": [ 00:22:09.059 { 00:22:09.059 "name": "BaseBdev1", 00:22:09.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.059 "is_configured": false, 00:22:09.059 "data_offset": 0, 00:22:09.059 "data_size": 0 00:22:09.059 }, 00:22:09.059 { 00:22:09.059 "name": "BaseBdev2", 00:22:09.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.059 "is_configured": false, 00:22:09.059 "data_offset": 0, 00:22:09.059 "data_size": 0 00:22:09.059 }, 00:22:09.059 { 00:22:09.059 "name": "BaseBdev3", 00:22:09.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.059 "is_configured": false, 00:22:09.059 "data_offset": 0, 00:22:09.059 "data_size": 0 00:22:09.059 }, 00:22:09.059 { 00:22:09.059 "name": "BaseBdev4", 00:22:09.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.059 "is_configured": false, 00:22:09.059 "data_offset": 0, 00:22:09.059 "data_size": 0 00:22:09.059 } 00:22:09.059 ] 00:22:09.059 }' 00:22:09.059 18:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:09.059 18:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:09.625 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:09.883 [2024-07-25 18:49:10.303070] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:09.883 [2024-07-25 18:49:10.303346] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:22:09.883 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:10.141 [2024-07-25 18:49:10.487122] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:10.141 [2024-07-25 18:49:10.487444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:10.141 [2024-07-25 18:49:10.487534] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:10.141 [2024-07-25 18:49:10.487617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:10.141 [2024-07-25 18:49:10.487717] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:10.141 [2024-07-25 18:49:10.487848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:10.141 [2024-07-25 18:49:10.487919] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:10.141 [2024-07-25 18:49:10.487972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:10.141 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:10.399 [2024-07-25 18:49:10.759657] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:10.399 BaseBdev1 00:22:10.399 18:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:10.399 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:22:10.399 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:10.399 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:10.399 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:10.399 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:10.399 18:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:10.656 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:10.914 [ 00:22:10.914 { 00:22:10.914 "name": "BaseBdev1", 00:22:10.914 "aliases": [ 00:22:10.914 "14f267ff-0821-4a4b-910c-d2704939ed30" 00:22:10.914 ], 00:22:10.914 "product_name": "Malloc disk", 00:22:10.914 "block_size": 512, 00:22:10.914 "num_blocks": 65536, 00:22:10.914 "uuid": "14f267ff-0821-4a4b-910c-d2704939ed30", 00:22:10.914 "assigned_rate_limits": { 00:22:10.914 "rw_ios_per_sec": 0, 00:22:10.914 "rw_mbytes_per_sec": 0, 00:22:10.914 "r_mbytes_per_sec": 0, 00:22:10.914 "w_mbytes_per_sec": 0 00:22:10.914 }, 00:22:10.914 "claimed": true, 00:22:10.914 "claim_type": "exclusive_write", 00:22:10.914 "zoned": false, 00:22:10.914 "supported_io_types": { 00:22:10.914 "read": true, 00:22:10.914 "write": true, 00:22:10.914 "unmap": true, 00:22:10.914 "flush": true, 00:22:10.914 "reset": true, 00:22:10.914 "nvme_admin": false, 00:22:10.914 "nvme_io": false, 00:22:10.914 "nvme_io_md": false, 00:22:10.914 "write_zeroes": true, 00:22:10.914 "zcopy": true, 00:22:10.914 "get_zone_info": false, 00:22:10.914 "zone_management": false, 00:22:10.914 "zone_append": false, 00:22:10.914 "compare": false, 00:22:10.914 "compare_and_write": false, 00:22:10.914 "abort": true, 00:22:10.914 "seek_hole": false, 00:22:10.914 "seek_data": false, 00:22:10.914 "copy": true, 00:22:10.914 "nvme_iov_md": false 00:22:10.914 }, 00:22:10.914 "memory_domains": [ 00:22:10.914 { 00:22:10.914 "dma_device_id": "system", 00:22:10.914 "dma_device_type": 1 00:22:10.914 }, 00:22:10.914 { 00:22:10.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.914 "dma_device_type": 2 00:22:10.914 } 00:22:10.914 ], 00:22:10.914 "driver_specific": {} 00:22:10.914 } 00:22:10.914 ] 00:22:10.914 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:10.914 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:10.914 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:10.914 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:10.914 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid0 00:22:10.914 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:10.914 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:10.914 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:10.914 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:10.914 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:10.914 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:10.914 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.914 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.172 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:11.172 "name": "Existed_Raid", 00:22:11.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.172 "strip_size_kb": 64, 00:22:11.172 "state": "configuring", 00:22:11.172 "raid_level": "raid0", 00:22:11.172 "superblock": false, 00:22:11.172 "num_base_bdevs": 4, 00:22:11.172 "num_base_bdevs_discovered": 1, 00:22:11.172 "num_base_bdevs_operational": 4, 00:22:11.172 "base_bdevs_list": [ 00:22:11.172 { 00:22:11.172 "name": "BaseBdev1", 00:22:11.172 "uuid": "14f267ff-0821-4a4b-910c-d2704939ed30", 00:22:11.172 "is_configured": true, 00:22:11.172 "data_offset": 0, 00:22:11.172 "data_size": 65536 00:22:11.172 }, 00:22:11.172 { 00:22:11.172 "name": "BaseBdev2", 00:22:11.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.172 "is_configured": false, 00:22:11.172 "data_offset": 0, 00:22:11.172 "data_size": 0 00:22:11.172 }, 00:22:11.172 { 00:22:11.172 "name": "BaseBdev3", 00:22:11.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.172 "is_configured": false, 00:22:11.172 "data_offset": 0, 00:22:11.172 "data_size": 0 00:22:11.172 }, 00:22:11.172 { 00:22:11.172 "name": "BaseBdev4", 00:22:11.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.172 "is_configured": false, 00:22:11.172 "data_offset": 0, 00:22:11.172 "data_size": 0 00:22:11.172 } 00:22:11.172 ] 00:22:11.172 }' 00:22:11.172 18:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:11.172 18:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.763 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:11.763 [2024-07-25 18:49:12.231983] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:11.763 [2024-07-25 18:49:12.232312] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:22:11.763 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:12.021 [2024-07-25 18:49:12.476067] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:12.021 [2024-07-25 18:49:12.478589] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev2 00:22:12.021 [2024-07-25 18:49:12.478767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:12.021 [2024-07-25 18:49:12.478847] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:12.021 [2024-07-25 18:49:12.478948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:12.021 [2024-07-25 18:49:12.479037] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:12.021 [2024-07-25 18:49:12.479085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:12.021 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:12.021 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:12.021 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:12.021 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:12.021 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:12.021 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:12.021 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:12.021 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:12.021 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:12.021 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:12.021 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:12.021 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:12.021 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.021 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.279 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:12.279 "name": "Existed_Raid", 00:22:12.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.279 "strip_size_kb": 64, 00:22:12.279 "state": "configuring", 00:22:12.279 "raid_level": "raid0", 00:22:12.279 "superblock": false, 00:22:12.279 "num_base_bdevs": 4, 00:22:12.279 "num_base_bdevs_discovered": 1, 00:22:12.279 "num_base_bdevs_operational": 4, 00:22:12.279 "base_bdevs_list": [ 00:22:12.279 { 00:22:12.279 "name": "BaseBdev1", 00:22:12.279 "uuid": "14f267ff-0821-4a4b-910c-d2704939ed30", 00:22:12.279 "is_configured": true, 00:22:12.279 "data_offset": 0, 00:22:12.279 "data_size": 65536 00:22:12.279 }, 00:22:12.279 { 00:22:12.279 "name": "BaseBdev2", 00:22:12.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.279 "is_configured": false, 00:22:12.279 "data_offset": 0, 00:22:12.279 "data_size": 0 00:22:12.279 }, 00:22:12.279 { 00:22:12.279 "name": "BaseBdev3", 00:22:12.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.279 "is_configured": false, 00:22:12.279 "data_offset": 0, 00:22:12.279 "data_size": 0 00:22:12.279 }, 
00:22:12.279 { 00:22:12.279 "name": "BaseBdev4", 00:22:12.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.279 "is_configured": false, 00:22:12.279 "data_offset": 0, 00:22:12.279 "data_size": 0 00:22:12.279 } 00:22:12.279 ] 00:22:12.279 }' 00:22:12.279 18:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:12.279 18:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.845 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:12.845 [2024-07-25 18:49:13.392777] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:12.845 BaseBdev2 00:22:12.845 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:12.845 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:22:12.845 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:12.845 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:12.845 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:12.845 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:12.845 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:13.102 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:13.360 [ 00:22:13.360 { 00:22:13.360 "name": "BaseBdev2", 00:22:13.360 "aliases": [ 00:22:13.360 "9d7de131-ffb7-4f18-be38-edd49fe2dd09" 00:22:13.360 ], 00:22:13.360 "product_name": "Malloc disk", 00:22:13.360 "block_size": 512, 00:22:13.360 "num_blocks": 65536, 00:22:13.360 "uuid": "9d7de131-ffb7-4f18-be38-edd49fe2dd09", 00:22:13.360 "assigned_rate_limits": { 00:22:13.360 "rw_ios_per_sec": 0, 00:22:13.360 "rw_mbytes_per_sec": 0, 00:22:13.360 "r_mbytes_per_sec": 0, 00:22:13.360 "w_mbytes_per_sec": 0 00:22:13.360 }, 00:22:13.360 "claimed": true, 00:22:13.360 "claim_type": "exclusive_write", 00:22:13.360 "zoned": false, 00:22:13.360 "supported_io_types": { 00:22:13.360 "read": true, 00:22:13.360 "write": true, 00:22:13.360 "unmap": true, 00:22:13.360 "flush": true, 00:22:13.360 "reset": true, 00:22:13.360 "nvme_admin": false, 00:22:13.360 "nvme_io": false, 00:22:13.360 "nvme_io_md": false, 00:22:13.360 "write_zeroes": true, 00:22:13.360 "zcopy": true, 00:22:13.360 "get_zone_info": false, 00:22:13.360 "zone_management": false, 00:22:13.360 "zone_append": false, 00:22:13.360 "compare": false, 00:22:13.360 "compare_and_write": false, 00:22:13.360 "abort": true, 00:22:13.360 "seek_hole": false, 00:22:13.360 "seek_data": false, 00:22:13.360 "copy": true, 00:22:13.360 "nvme_iov_md": false 00:22:13.360 }, 00:22:13.360 "memory_domains": [ 00:22:13.360 { 00:22:13.360 "dma_device_id": "system", 00:22:13.360 "dma_device_type": 1 00:22:13.360 }, 00:22:13.360 { 00:22:13.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.360 "dma_device_type": 2 00:22:13.360 } 00:22:13.360 ], 00:22:13.360 "driver_specific": {} 00:22:13.360 } 00:22:13.360 ] 00:22:13.360 18:49:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:13.360 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:13.360 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:13.360 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:13.360 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:13.360 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:13.361 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:13.361 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:13.361 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:13.361 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:13.361 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:13.361 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:13.361 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:13.361 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.361 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.618 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:13.618 "name": "Existed_Raid", 00:22:13.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.618 "strip_size_kb": 64, 00:22:13.618 "state": "configuring", 00:22:13.618 "raid_level": "raid0", 00:22:13.618 "superblock": false, 00:22:13.618 "num_base_bdevs": 4, 00:22:13.618 "num_base_bdevs_discovered": 2, 00:22:13.618 "num_base_bdevs_operational": 4, 00:22:13.618 "base_bdevs_list": [ 00:22:13.618 { 00:22:13.618 "name": "BaseBdev1", 00:22:13.618 "uuid": "14f267ff-0821-4a4b-910c-d2704939ed30", 00:22:13.618 "is_configured": true, 00:22:13.618 "data_offset": 0, 00:22:13.618 "data_size": 65536 00:22:13.618 }, 00:22:13.618 { 00:22:13.618 "name": "BaseBdev2", 00:22:13.618 "uuid": "9d7de131-ffb7-4f18-be38-edd49fe2dd09", 00:22:13.618 "is_configured": true, 00:22:13.618 "data_offset": 0, 00:22:13.618 "data_size": 65536 00:22:13.618 }, 00:22:13.618 { 00:22:13.618 "name": "BaseBdev3", 00:22:13.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.618 "is_configured": false, 00:22:13.618 "data_offset": 0, 00:22:13.618 "data_size": 0 00:22:13.618 }, 00:22:13.618 { 00:22:13.618 "name": "BaseBdev4", 00:22:13.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.618 "is_configured": false, 00:22:13.618 "data_offset": 0, 00:22:13.618 "data_size": 0 00:22:13.618 } 00:22:13.618 ] 00:22:13.618 }' 00:22:13.618 18:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:13.618 18:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.184 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev3 00:22:14.442 [2024-07-25 18:49:14.817791] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:14.442 BaseBdev3 00:22:14.442 18:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:14.442 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:22:14.442 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:14.442 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:14.442 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:14.442 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:14.442 18:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:14.701 18:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:14.701 [ 00:22:14.701 { 00:22:14.701 "name": "BaseBdev3", 00:22:14.701 "aliases": [ 00:22:14.701 "25d578c2-aa02-4541-b3a8-9ef4ed965a3b" 00:22:14.701 ], 00:22:14.701 "product_name": "Malloc disk", 00:22:14.701 "block_size": 512, 00:22:14.701 "num_blocks": 65536, 00:22:14.701 "uuid": "25d578c2-aa02-4541-b3a8-9ef4ed965a3b", 00:22:14.701 "assigned_rate_limits": { 00:22:14.701 "rw_ios_per_sec": 0, 00:22:14.701 "rw_mbytes_per_sec": 0, 00:22:14.701 "r_mbytes_per_sec": 0, 00:22:14.701 "w_mbytes_per_sec": 0 00:22:14.701 }, 00:22:14.701 "claimed": true, 00:22:14.701 "claim_type": "exclusive_write", 00:22:14.701 "zoned": false, 00:22:14.701 "supported_io_types": { 00:22:14.701 "read": true, 00:22:14.701 "write": true, 00:22:14.701 "unmap": true, 00:22:14.701 "flush": true, 00:22:14.701 "reset": true, 00:22:14.701 "nvme_admin": false, 00:22:14.701 "nvme_io": false, 00:22:14.701 "nvme_io_md": false, 00:22:14.701 "write_zeroes": true, 00:22:14.701 "zcopy": true, 00:22:14.701 "get_zone_info": false, 00:22:14.701 "zone_management": false, 00:22:14.701 "zone_append": false, 00:22:14.701 "compare": false, 00:22:14.701 "compare_and_write": false, 00:22:14.701 "abort": true, 00:22:14.701 "seek_hole": false, 00:22:14.701 "seek_data": false, 00:22:14.701 "copy": true, 00:22:14.701 "nvme_iov_md": false 00:22:14.701 }, 00:22:14.701 "memory_domains": [ 00:22:14.701 { 00:22:14.701 "dma_device_id": "system", 00:22:14.701 "dma_device_type": 1 00:22:14.701 }, 00:22:14.701 { 00:22:14.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.701 "dma_device_type": 2 00:22:14.701 } 00:22:14.701 ], 00:22:14.701 "driver_specific": {} 00:22:14.701 } 00:22:14.701 ] 00:22:14.701 18:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:14.701 18:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:14.701 18:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:14.701 18:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:14.701 18:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:14.701 18:49:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:14.701 18:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:14.701 18:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:14.701 18:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:14.701 18:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:14.701 18:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:14.701 18:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:14.701 18:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:14.701 18:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.701 18:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:14.959 18:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:14.959 "name": "Existed_Raid", 00:22:14.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.959 "strip_size_kb": 64, 00:22:14.959 "state": "configuring", 00:22:14.959 "raid_level": "raid0", 00:22:14.959 "superblock": false, 00:22:14.959 "num_base_bdevs": 4, 00:22:14.959 "num_base_bdevs_discovered": 3, 00:22:14.959 "num_base_bdevs_operational": 4, 00:22:14.959 "base_bdevs_list": [ 00:22:14.959 { 00:22:14.959 "name": "BaseBdev1", 00:22:14.959 "uuid": "14f267ff-0821-4a4b-910c-d2704939ed30", 00:22:14.959 "is_configured": true, 00:22:14.959 "data_offset": 0, 00:22:14.959 "data_size": 65536 00:22:14.959 }, 00:22:14.959 { 00:22:14.959 "name": "BaseBdev2", 00:22:14.959 "uuid": "9d7de131-ffb7-4f18-be38-edd49fe2dd09", 00:22:14.959 "is_configured": true, 00:22:14.959 "data_offset": 0, 00:22:14.959 "data_size": 65536 00:22:14.959 }, 00:22:14.959 { 00:22:14.959 "name": "BaseBdev3", 00:22:14.959 "uuid": "25d578c2-aa02-4541-b3a8-9ef4ed965a3b", 00:22:14.959 "is_configured": true, 00:22:14.959 "data_offset": 0, 00:22:14.959 "data_size": 65536 00:22:14.959 }, 00:22:14.959 { 00:22:14.959 "name": "BaseBdev4", 00:22:14.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.959 "is_configured": false, 00:22:14.959 "data_offset": 0, 00:22:14.959 "data_size": 0 00:22:14.959 } 00:22:14.959 ] 00:22:14.959 }' 00:22:14.959 18:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:14.959 18:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.525 18:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:15.784 [2024-07-25 18:49:16.226984] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:15.784 [2024-07-25 18:49:16.227304] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:22:15.784 [2024-07-25 18:49:16.227345] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:22:15.784 [2024-07-25 18:49:16.227548] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:22:15.784 [2024-07-25 18:49:16.228044] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:22:15.784 [2024-07-25 18:49:16.228153] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:22:15.784 [2024-07-25 18:49:16.228498] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:15.784 BaseBdev4 00:22:15.784 18:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:22:15.784 18:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:22:15.784 18:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:15.784 18:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:15.784 18:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:15.784 18:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:15.784 18:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:16.042 18:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:16.301 [ 00:22:16.301 { 00:22:16.301 "name": "BaseBdev4", 00:22:16.301 "aliases": [ 00:22:16.301 "c02a667e-e72e-4850-b945-d708edec9877" 00:22:16.301 ], 00:22:16.301 "product_name": "Malloc disk", 00:22:16.301 "block_size": 512, 00:22:16.301 "num_blocks": 65536, 00:22:16.301 "uuid": "c02a667e-e72e-4850-b945-d708edec9877", 00:22:16.301 "assigned_rate_limits": { 00:22:16.301 "rw_ios_per_sec": 0, 00:22:16.301 "rw_mbytes_per_sec": 0, 00:22:16.301 "r_mbytes_per_sec": 0, 00:22:16.301 "w_mbytes_per_sec": 0 00:22:16.301 }, 00:22:16.301 "claimed": true, 00:22:16.301 "claim_type": "exclusive_write", 00:22:16.301 "zoned": false, 00:22:16.301 "supported_io_types": { 00:22:16.301 "read": true, 00:22:16.301 "write": true, 00:22:16.301 "unmap": true, 00:22:16.301 "flush": true, 00:22:16.301 "reset": true, 00:22:16.301 "nvme_admin": false, 00:22:16.301 "nvme_io": false, 00:22:16.301 "nvme_io_md": false, 00:22:16.301 "write_zeroes": true, 00:22:16.301 "zcopy": true, 00:22:16.301 "get_zone_info": false, 00:22:16.301 "zone_management": false, 00:22:16.301 "zone_append": false, 00:22:16.301 "compare": false, 00:22:16.301 "compare_and_write": false, 00:22:16.301 "abort": true, 00:22:16.301 "seek_hole": false, 00:22:16.301 "seek_data": false, 00:22:16.301 "copy": true, 00:22:16.301 "nvme_iov_md": false 00:22:16.301 }, 00:22:16.301 "memory_domains": [ 00:22:16.301 { 00:22:16.301 "dma_device_id": "system", 00:22:16.301 "dma_device_type": 1 00:22:16.301 }, 00:22:16.301 { 00:22:16.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:16.301 "dma_device_type": 2 00:22:16.301 } 00:22:16.301 ], 00:22:16.301 "driver_specific": {} 00:22:16.301 } 00:22:16.301 ] 00:22:16.301 18:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:16.301 18:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:16.301 18:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:16.301 18:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 
4 00:22:16.301 18:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:16.301 18:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:16.301 18:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:16.301 18:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:16.301 18:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:16.301 18:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:16.301 18:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:16.301 18:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:16.301 18:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:16.301 18:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.301 18:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:16.560 18:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:16.560 "name": "Existed_Raid", 00:22:16.560 "uuid": "bcd3fcb8-2665-440c-8f1e-f87169b57fdc", 00:22:16.560 "strip_size_kb": 64, 00:22:16.560 "state": "online", 00:22:16.560 "raid_level": "raid0", 00:22:16.560 "superblock": false, 00:22:16.560 "num_base_bdevs": 4, 00:22:16.560 "num_base_bdevs_discovered": 4, 00:22:16.560 "num_base_bdevs_operational": 4, 00:22:16.560 "base_bdevs_list": [ 00:22:16.560 { 00:22:16.560 "name": "BaseBdev1", 00:22:16.560 "uuid": "14f267ff-0821-4a4b-910c-d2704939ed30", 00:22:16.560 "is_configured": true, 00:22:16.560 "data_offset": 0, 00:22:16.560 "data_size": 65536 00:22:16.560 }, 00:22:16.560 { 00:22:16.560 "name": "BaseBdev2", 00:22:16.560 "uuid": "9d7de131-ffb7-4f18-be38-edd49fe2dd09", 00:22:16.560 "is_configured": true, 00:22:16.560 "data_offset": 0, 00:22:16.560 "data_size": 65536 00:22:16.560 }, 00:22:16.560 { 00:22:16.560 "name": "BaseBdev3", 00:22:16.560 "uuid": "25d578c2-aa02-4541-b3a8-9ef4ed965a3b", 00:22:16.560 "is_configured": true, 00:22:16.560 "data_offset": 0, 00:22:16.560 "data_size": 65536 00:22:16.560 }, 00:22:16.560 { 00:22:16.560 "name": "BaseBdev4", 00:22:16.560 "uuid": "c02a667e-e72e-4850-b945-d708edec9877", 00:22:16.560 "is_configured": true, 00:22:16.560 "data_offset": 0, 00:22:16.560 "data_size": 65536 00:22:16.560 } 00:22:16.560 ] 00:22:16.560 }' 00:22:16.560 18:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:16.560 18:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.818 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:16.818 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:16.818 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:16.818 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:16.818 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:16.818 18:49:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:16.818 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:16.818 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:17.077 [2024-07-25 18:49:17.519504] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:17.077 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:17.077 "name": "Existed_Raid", 00:22:17.077 "aliases": [ 00:22:17.077 "bcd3fcb8-2665-440c-8f1e-f87169b57fdc" 00:22:17.077 ], 00:22:17.077 "product_name": "Raid Volume", 00:22:17.077 "block_size": 512, 00:22:17.077 "num_blocks": 262144, 00:22:17.077 "uuid": "bcd3fcb8-2665-440c-8f1e-f87169b57fdc", 00:22:17.077 "assigned_rate_limits": { 00:22:17.077 "rw_ios_per_sec": 0, 00:22:17.077 "rw_mbytes_per_sec": 0, 00:22:17.077 "r_mbytes_per_sec": 0, 00:22:17.077 "w_mbytes_per_sec": 0 00:22:17.077 }, 00:22:17.077 "claimed": false, 00:22:17.077 "zoned": false, 00:22:17.077 "supported_io_types": { 00:22:17.077 "read": true, 00:22:17.077 "write": true, 00:22:17.077 "unmap": true, 00:22:17.077 "flush": true, 00:22:17.077 "reset": true, 00:22:17.077 "nvme_admin": false, 00:22:17.077 "nvme_io": false, 00:22:17.077 "nvme_io_md": false, 00:22:17.077 "write_zeroes": true, 00:22:17.077 "zcopy": false, 00:22:17.077 "get_zone_info": false, 00:22:17.077 "zone_management": false, 00:22:17.077 "zone_append": false, 00:22:17.077 "compare": false, 00:22:17.077 "compare_and_write": false, 00:22:17.077 "abort": false, 00:22:17.077 "seek_hole": false, 00:22:17.077 "seek_data": false, 00:22:17.077 "copy": false, 00:22:17.077 "nvme_iov_md": false 00:22:17.077 }, 00:22:17.077 "memory_domains": [ 00:22:17.077 { 00:22:17.077 "dma_device_id": "system", 00:22:17.077 "dma_device_type": 1 00:22:17.077 }, 00:22:17.077 { 00:22:17.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.077 "dma_device_type": 2 00:22:17.077 }, 00:22:17.077 { 00:22:17.077 "dma_device_id": "system", 00:22:17.077 "dma_device_type": 1 00:22:17.077 }, 00:22:17.077 { 00:22:17.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.077 "dma_device_type": 2 00:22:17.077 }, 00:22:17.077 { 00:22:17.077 "dma_device_id": "system", 00:22:17.077 "dma_device_type": 1 00:22:17.077 }, 00:22:17.077 { 00:22:17.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.078 "dma_device_type": 2 00:22:17.078 }, 00:22:17.078 { 00:22:17.078 "dma_device_id": "system", 00:22:17.078 "dma_device_type": 1 00:22:17.078 }, 00:22:17.078 { 00:22:17.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.078 "dma_device_type": 2 00:22:17.078 } 00:22:17.078 ], 00:22:17.078 "driver_specific": { 00:22:17.078 "raid": { 00:22:17.078 "uuid": "bcd3fcb8-2665-440c-8f1e-f87169b57fdc", 00:22:17.078 "strip_size_kb": 64, 00:22:17.078 "state": "online", 00:22:17.078 "raid_level": "raid0", 00:22:17.078 "superblock": false, 00:22:17.078 "num_base_bdevs": 4, 00:22:17.078 "num_base_bdevs_discovered": 4, 00:22:17.078 "num_base_bdevs_operational": 4, 00:22:17.078 "base_bdevs_list": [ 00:22:17.078 { 00:22:17.078 "name": "BaseBdev1", 00:22:17.078 "uuid": "14f267ff-0821-4a4b-910c-d2704939ed30", 00:22:17.078 "is_configured": true, 00:22:17.078 "data_offset": 0, 00:22:17.078 "data_size": 65536 00:22:17.078 }, 00:22:17.078 { 00:22:17.078 "name": "BaseBdev2", 00:22:17.078 "uuid": "9d7de131-ffb7-4f18-be38-edd49fe2dd09", 00:22:17.078 
"is_configured": true, 00:22:17.078 "data_offset": 0, 00:22:17.078 "data_size": 65536 00:22:17.078 }, 00:22:17.078 { 00:22:17.078 "name": "BaseBdev3", 00:22:17.078 "uuid": "25d578c2-aa02-4541-b3a8-9ef4ed965a3b", 00:22:17.078 "is_configured": true, 00:22:17.078 "data_offset": 0, 00:22:17.078 "data_size": 65536 00:22:17.078 }, 00:22:17.078 { 00:22:17.078 "name": "BaseBdev4", 00:22:17.078 "uuid": "c02a667e-e72e-4850-b945-d708edec9877", 00:22:17.078 "is_configured": true, 00:22:17.078 "data_offset": 0, 00:22:17.078 "data_size": 65536 00:22:17.078 } 00:22:17.078 ] 00:22:17.078 } 00:22:17.078 } 00:22:17.078 }' 00:22:17.078 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:17.078 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:17.078 BaseBdev2 00:22:17.078 BaseBdev3 00:22:17.078 BaseBdev4' 00:22:17.078 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:17.078 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:17.078 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:17.336 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:17.336 "name": "BaseBdev1", 00:22:17.336 "aliases": [ 00:22:17.336 "14f267ff-0821-4a4b-910c-d2704939ed30" 00:22:17.336 ], 00:22:17.337 "product_name": "Malloc disk", 00:22:17.337 "block_size": 512, 00:22:17.337 "num_blocks": 65536, 00:22:17.337 "uuid": "14f267ff-0821-4a4b-910c-d2704939ed30", 00:22:17.337 "assigned_rate_limits": { 00:22:17.337 "rw_ios_per_sec": 0, 00:22:17.337 "rw_mbytes_per_sec": 0, 00:22:17.337 "r_mbytes_per_sec": 0, 00:22:17.337 "w_mbytes_per_sec": 0 00:22:17.337 }, 00:22:17.337 "claimed": true, 00:22:17.337 "claim_type": "exclusive_write", 00:22:17.337 "zoned": false, 00:22:17.337 "supported_io_types": { 00:22:17.337 "read": true, 00:22:17.337 "write": true, 00:22:17.337 "unmap": true, 00:22:17.337 "flush": true, 00:22:17.337 "reset": true, 00:22:17.337 "nvme_admin": false, 00:22:17.337 "nvme_io": false, 00:22:17.337 "nvme_io_md": false, 00:22:17.337 "write_zeroes": true, 00:22:17.337 "zcopy": true, 00:22:17.337 "get_zone_info": false, 00:22:17.337 "zone_management": false, 00:22:17.337 "zone_append": false, 00:22:17.337 "compare": false, 00:22:17.337 "compare_and_write": false, 00:22:17.337 "abort": true, 00:22:17.337 "seek_hole": false, 00:22:17.337 "seek_data": false, 00:22:17.337 "copy": true, 00:22:17.337 "nvme_iov_md": false 00:22:17.337 }, 00:22:17.337 "memory_domains": [ 00:22:17.337 { 00:22:17.337 "dma_device_id": "system", 00:22:17.337 "dma_device_type": 1 00:22:17.337 }, 00:22:17.337 { 00:22:17.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.337 "dma_device_type": 2 00:22:17.337 } 00:22:17.337 ], 00:22:17.337 "driver_specific": {} 00:22:17.337 }' 00:22:17.337 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:17.337 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:17.337 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:17.337 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:17.337 18:49:17 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:17.337 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:17.337 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:17.595 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:17.595 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:17.595 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:17.595 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:17.595 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:17.595 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:17.595 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:17.595 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:17.854 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:17.854 "name": "BaseBdev2", 00:22:17.854 "aliases": [ 00:22:17.854 "9d7de131-ffb7-4f18-be38-edd49fe2dd09" 00:22:17.854 ], 00:22:17.854 "product_name": "Malloc disk", 00:22:17.854 "block_size": 512, 00:22:17.854 "num_blocks": 65536, 00:22:17.854 "uuid": "9d7de131-ffb7-4f18-be38-edd49fe2dd09", 00:22:17.854 "assigned_rate_limits": { 00:22:17.854 "rw_ios_per_sec": 0, 00:22:17.854 "rw_mbytes_per_sec": 0, 00:22:17.854 "r_mbytes_per_sec": 0, 00:22:17.854 "w_mbytes_per_sec": 0 00:22:17.854 }, 00:22:17.854 "claimed": true, 00:22:17.854 "claim_type": "exclusive_write", 00:22:17.854 "zoned": false, 00:22:17.854 "supported_io_types": { 00:22:17.854 "read": true, 00:22:17.854 "write": true, 00:22:17.854 "unmap": true, 00:22:17.854 "flush": true, 00:22:17.854 "reset": true, 00:22:17.854 "nvme_admin": false, 00:22:17.854 "nvme_io": false, 00:22:17.854 "nvme_io_md": false, 00:22:17.854 "write_zeroes": true, 00:22:17.854 "zcopy": true, 00:22:17.854 "get_zone_info": false, 00:22:17.854 "zone_management": false, 00:22:17.854 "zone_append": false, 00:22:17.854 "compare": false, 00:22:17.854 "compare_and_write": false, 00:22:17.854 "abort": true, 00:22:17.854 "seek_hole": false, 00:22:17.854 "seek_data": false, 00:22:17.854 "copy": true, 00:22:17.854 "nvme_iov_md": false 00:22:17.854 }, 00:22:17.854 "memory_domains": [ 00:22:17.854 { 00:22:17.854 "dma_device_id": "system", 00:22:17.854 "dma_device_type": 1 00:22:17.854 }, 00:22:17.854 { 00:22:17.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.854 "dma_device_type": 2 00:22:17.854 } 00:22:17.854 ], 00:22:17.854 "driver_specific": {} 00:22:17.854 }' 00:22:17.854 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:17.854 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:17.854 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:18.113 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:18.113 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:18.113 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:18.113 18:49:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:18.113 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:18.113 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:18.113 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:18.113 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:18.113 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:18.113 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:18.113 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:18.113 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:18.372 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:18.372 "name": "BaseBdev3", 00:22:18.372 "aliases": [ 00:22:18.372 "25d578c2-aa02-4541-b3a8-9ef4ed965a3b" 00:22:18.372 ], 00:22:18.372 "product_name": "Malloc disk", 00:22:18.372 "block_size": 512, 00:22:18.372 "num_blocks": 65536, 00:22:18.372 "uuid": "25d578c2-aa02-4541-b3a8-9ef4ed965a3b", 00:22:18.372 "assigned_rate_limits": { 00:22:18.372 "rw_ios_per_sec": 0, 00:22:18.372 "rw_mbytes_per_sec": 0, 00:22:18.372 "r_mbytes_per_sec": 0, 00:22:18.372 "w_mbytes_per_sec": 0 00:22:18.372 }, 00:22:18.372 "claimed": true, 00:22:18.372 "claim_type": "exclusive_write", 00:22:18.372 "zoned": false, 00:22:18.372 "supported_io_types": { 00:22:18.372 "read": true, 00:22:18.372 "write": true, 00:22:18.372 "unmap": true, 00:22:18.372 "flush": true, 00:22:18.372 "reset": true, 00:22:18.372 "nvme_admin": false, 00:22:18.372 "nvme_io": false, 00:22:18.372 "nvme_io_md": false, 00:22:18.372 "write_zeroes": true, 00:22:18.372 "zcopy": true, 00:22:18.372 "get_zone_info": false, 00:22:18.372 "zone_management": false, 00:22:18.372 "zone_append": false, 00:22:18.372 "compare": false, 00:22:18.372 "compare_and_write": false, 00:22:18.372 "abort": true, 00:22:18.372 "seek_hole": false, 00:22:18.372 "seek_data": false, 00:22:18.372 "copy": true, 00:22:18.372 "nvme_iov_md": false 00:22:18.372 }, 00:22:18.372 "memory_domains": [ 00:22:18.372 { 00:22:18.372 "dma_device_id": "system", 00:22:18.372 "dma_device_type": 1 00:22:18.372 }, 00:22:18.372 { 00:22:18.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.372 "dma_device_type": 2 00:22:18.372 } 00:22:18.372 ], 00:22:18.372 "driver_specific": {} 00:22:18.372 }' 00:22:18.372 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:18.631 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:18.631 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:18.631 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:18.631 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:18.631 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:18.631 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:18.631 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:18.631 
18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:18.631 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:18.889 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:18.889 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:18.889 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:18.889 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:18.889 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:19.148 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:19.148 "name": "BaseBdev4", 00:22:19.148 "aliases": [ 00:22:19.148 "c02a667e-e72e-4850-b945-d708edec9877" 00:22:19.148 ], 00:22:19.148 "product_name": "Malloc disk", 00:22:19.148 "block_size": 512, 00:22:19.148 "num_blocks": 65536, 00:22:19.148 "uuid": "c02a667e-e72e-4850-b945-d708edec9877", 00:22:19.148 "assigned_rate_limits": { 00:22:19.148 "rw_ios_per_sec": 0, 00:22:19.148 "rw_mbytes_per_sec": 0, 00:22:19.148 "r_mbytes_per_sec": 0, 00:22:19.148 "w_mbytes_per_sec": 0 00:22:19.148 }, 00:22:19.148 "claimed": true, 00:22:19.148 "claim_type": "exclusive_write", 00:22:19.148 "zoned": false, 00:22:19.148 "supported_io_types": { 00:22:19.148 "read": true, 00:22:19.148 "write": true, 00:22:19.148 "unmap": true, 00:22:19.148 "flush": true, 00:22:19.148 "reset": true, 00:22:19.148 "nvme_admin": false, 00:22:19.148 "nvme_io": false, 00:22:19.148 "nvme_io_md": false, 00:22:19.148 "write_zeroes": true, 00:22:19.148 "zcopy": true, 00:22:19.148 "get_zone_info": false, 00:22:19.148 "zone_management": false, 00:22:19.148 "zone_append": false, 00:22:19.148 "compare": false, 00:22:19.148 "compare_and_write": false, 00:22:19.148 "abort": true, 00:22:19.148 "seek_hole": false, 00:22:19.148 "seek_data": false, 00:22:19.148 "copy": true, 00:22:19.148 "nvme_iov_md": false 00:22:19.148 }, 00:22:19.148 "memory_domains": [ 00:22:19.148 { 00:22:19.148 "dma_device_id": "system", 00:22:19.148 "dma_device_type": 1 00:22:19.148 }, 00:22:19.148 { 00:22:19.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.148 "dma_device_type": 2 00:22:19.148 } 00:22:19.148 ], 00:22:19.148 "driver_specific": {} 00:22:19.148 }' 00:22:19.148 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:19.148 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:19.148 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:19.148 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:19.148 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:19.148 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:19.148 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:19.407 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:19.407 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:19.407 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:22:19.407 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:19.407 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:19.407 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:19.666 [2024-07-25 18:49:20.030461] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:19.666 [2024-07-25 18:49:20.030698] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:19.666 [2024-07-25 18:49:20.030895] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:19.666 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:19.666 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:22:19.666 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:19.666 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:19.666 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:22:19.666 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:22:19.666 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:19.666 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:22:19.666 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:19.666 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:19.666 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:19.666 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:19.666 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:19.666 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:19.666 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:19.666 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.666 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:19.925 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:19.925 "name": "Existed_Raid", 00:22:19.925 "uuid": "bcd3fcb8-2665-440c-8f1e-f87169b57fdc", 00:22:19.925 "strip_size_kb": 64, 00:22:19.925 "state": "offline", 00:22:19.925 "raid_level": "raid0", 00:22:19.925 "superblock": false, 00:22:19.925 "num_base_bdevs": 4, 00:22:19.925 "num_base_bdevs_discovered": 3, 00:22:19.925 "num_base_bdevs_operational": 3, 00:22:19.925 "base_bdevs_list": [ 00:22:19.925 { 00:22:19.925 "name": null, 00:22:19.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.925 "is_configured": false, 00:22:19.925 "data_offset": 0, 00:22:19.925 "data_size": 65536 00:22:19.925 }, 00:22:19.925 { 00:22:19.925 "name": "BaseBdev2", 00:22:19.925 "uuid": 
"9d7de131-ffb7-4f18-be38-edd49fe2dd09", 00:22:19.925 "is_configured": true, 00:22:19.925 "data_offset": 0, 00:22:19.925 "data_size": 65536 00:22:19.925 }, 00:22:19.925 { 00:22:19.925 "name": "BaseBdev3", 00:22:19.925 "uuid": "25d578c2-aa02-4541-b3a8-9ef4ed965a3b", 00:22:19.925 "is_configured": true, 00:22:19.925 "data_offset": 0, 00:22:19.925 "data_size": 65536 00:22:19.925 }, 00:22:19.925 { 00:22:19.925 "name": "BaseBdev4", 00:22:19.925 "uuid": "c02a667e-e72e-4850-b945-d708edec9877", 00:22:19.925 "is_configured": true, 00:22:19.925 "data_offset": 0, 00:22:19.925 "data_size": 65536 00:22:19.925 } 00:22:19.925 ] 00:22:19.925 }' 00:22:19.925 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:19.925 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.492 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:20.492 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:20.492 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.492 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:20.751 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:20.751 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:20.751 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:20.751 [2024-07-25 18:49:21.325718] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:21.009 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:21.009 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:21.009 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.009 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:21.268 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:21.268 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:21.268 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:21.526 [2024-07-25 18:49:21.865392] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:21.526 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:21.526 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:21.526 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.526 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:21.785 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:21.785 18:49:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:21.785 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:21.785 [2024-07-25 18:49:22.308648] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:21.785 [2024-07-25 18:49:22.308897] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:22:22.044 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:22.044 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:22.044 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.044 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:22.304 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:22.304 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:22.304 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:22:22.304 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:22.304 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:22.304 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:22.563 BaseBdev2 00:22:22.563 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:22.563 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:22:22.563 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:22.563 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:22.563 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:22.563 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:22.563 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:22.822 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:22.822 [ 00:22:22.822 { 00:22:22.822 "name": "BaseBdev2", 00:22:22.822 "aliases": [ 00:22:22.822 "2cd5de17-290f-47e3-bf91-d3cb2260eff0" 00:22:22.822 ], 00:22:22.822 "product_name": "Malloc disk", 00:22:22.822 "block_size": 512, 00:22:22.822 "num_blocks": 65536, 00:22:22.822 "uuid": "2cd5de17-290f-47e3-bf91-d3cb2260eff0", 00:22:22.822 "assigned_rate_limits": { 00:22:22.822 "rw_ios_per_sec": 0, 00:22:22.822 "rw_mbytes_per_sec": 0, 00:22:22.822 "r_mbytes_per_sec": 0, 00:22:22.822 "w_mbytes_per_sec": 0 00:22:22.822 }, 00:22:22.822 "claimed": false, 00:22:22.822 "zoned": false, 00:22:22.822 "supported_io_types": { 00:22:22.822 "read": true, 00:22:22.822 "write": true, 00:22:22.822 "unmap": 
true, 00:22:22.822 "flush": true, 00:22:22.822 "reset": true, 00:22:22.822 "nvme_admin": false, 00:22:22.822 "nvme_io": false, 00:22:22.822 "nvme_io_md": false, 00:22:22.822 "write_zeroes": true, 00:22:22.822 "zcopy": true, 00:22:22.822 "get_zone_info": false, 00:22:22.822 "zone_management": false, 00:22:22.822 "zone_append": false, 00:22:22.822 "compare": false, 00:22:22.822 "compare_and_write": false, 00:22:22.822 "abort": true, 00:22:22.822 "seek_hole": false, 00:22:22.822 "seek_data": false, 00:22:22.822 "copy": true, 00:22:22.822 "nvme_iov_md": false 00:22:22.822 }, 00:22:22.822 "memory_domains": [ 00:22:22.822 { 00:22:22.822 "dma_device_id": "system", 00:22:22.822 "dma_device_type": 1 00:22:22.822 }, 00:22:22.822 { 00:22:22.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:22.822 "dma_device_type": 2 00:22:22.822 } 00:22:22.822 ], 00:22:22.822 "driver_specific": {} 00:22:22.822 } 00:22:22.822 ] 00:22:22.822 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:22.822 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:22.822 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:22.822 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:23.080 BaseBdev3 00:22:23.080 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:23.080 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:22:23.080 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:23.080 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:23.080 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:23.080 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:23.080 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:23.338 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:23.596 [ 00:22:23.596 { 00:22:23.596 "name": "BaseBdev3", 00:22:23.596 "aliases": [ 00:22:23.596 "2cefa589-9a07-4602-be17-c93986f25ffa" 00:22:23.596 ], 00:22:23.596 "product_name": "Malloc disk", 00:22:23.596 "block_size": 512, 00:22:23.597 "num_blocks": 65536, 00:22:23.597 "uuid": "2cefa589-9a07-4602-be17-c93986f25ffa", 00:22:23.597 "assigned_rate_limits": { 00:22:23.597 "rw_ios_per_sec": 0, 00:22:23.597 "rw_mbytes_per_sec": 0, 00:22:23.597 "r_mbytes_per_sec": 0, 00:22:23.597 "w_mbytes_per_sec": 0 00:22:23.597 }, 00:22:23.597 "claimed": false, 00:22:23.597 "zoned": false, 00:22:23.597 "supported_io_types": { 00:22:23.597 "read": true, 00:22:23.597 "write": true, 00:22:23.597 "unmap": true, 00:22:23.597 "flush": true, 00:22:23.597 "reset": true, 00:22:23.597 "nvme_admin": false, 00:22:23.597 "nvme_io": false, 00:22:23.597 "nvme_io_md": false, 00:22:23.597 "write_zeroes": true, 00:22:23.597 "zcopy": true, 00:22:23.597 "get_zone_info": false, 00:22:23.597 "zone_management": false, 00:22:23.597 "zone_append": false, 00:22:23.597 
"compare": false, 00:22:23.597 "compare_and_write": false, 00:22:23.597 "abort": true, 00:22:23.597 "seek_hole": false, 00:22:23.597 "seek_data": false, 00:22:23.597 "copy": true, 00:22:23.597 "nvme_iov_md": false 00:22:23.597 }, 00:22:23.597 "memory_domains": [ 00:22:23.597 { 00:22:23.597 "dma_device_id": "system", 00:22:23.597 "dma_device_type": 1 00:22:23.597 }, 00:22:23.597 { 00:22:23.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:23.597 "dma_device_type": 2 00:22:23.597 } 00:22:23.597 ], 00:22:23.597 "driver_specific": {} 00:22:23.597 } 00:22:23.597 ] 00:22:23.597 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:23.597 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:23.597 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:23.597 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:23.855 BaseBdev4 00:22:23.855 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:22:23.855 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:22:23.855 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:23.855 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:23.856 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:23.856 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:23.856 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:23.856 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:24.115 [ 00:22:24.115 { 00:22:24.115 "name": "BaseBdev4", 00:22:24.115 "aliases": [ 00:22:24.115 "11e44046-41a5-408d-ad77-867ef2bedbb7" 00:22:24.115 ], 00:22:24.115 "product_name": "Malloc disk", 00:22:24.115 "block_size": 512, 00:22:24.115 "num_blocks": 65536, 00:22:24.115 "uuid": "11e44046-41a5-408d-ad77-867ef2bedbb7", 00:22:24.115 "assigned_rate_limits": { 00:22:24.115 "rw_ios_per_sec": 0, 00:22:24.115 "rw_mbytes_per_sec": 0, 00:22:24.115 "r_mbytes_per_sec": 0, 00:22:24.115 "w_mbytes_per_sec": 0 00:22:24.115 }, 00:22:24.115 "claimed": false, 00:22:24.115 "zoned": false, 00:22:24.115 "supported_io_types": { 00:22:24.115 "read": true, 00:22:24.115 "write": true, 00:22:24.115 "unmap": true, 00:22:24.115 "flush": true, 00:22:24.115 "reset": true, 00:22:24.115 "nvme_admin": false, 00:22:24.115 "nvme_io": false, 00:22:24.115 "nvme_io_md": false, 00:22:24.115 "write_zeroes": true, 00:22:24.115 "zcopy": true, 00:22:24.115 "get_zone_info": false, 00:22:24.115 "zone_management": false, 00:22:24.115 "zone_append": false, 00:22:24.115 "compare": false, 00:22:24.115 "compare_and_write": false, 00:22:24.115 "abort": true, 00:22:24.115 "seek_hole": false, 00:22:24.115 "seek_data": false, 00:22:24.115 "copy": true, 00:22:24.115 "nvme_iov_md": false 00:22:24.115 }, 00:22:24.115 "memory_domains": [ 00:22:24.115 { 00:22:24.115 "dma_device_id": "system", 00:22:24.115 
"dma_device_type": 1 00:22:24.115 }, 00:22:24.115 { 00:22:24.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:24.115 "dma_device_type": 2 00:22:24.115 } 00:22:24.115 ], 00:22:24.115 "driver_specific": {} 00:22:24.115 } 00:22:24.115 ] 00:22:24.115 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:24.115 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:24.115 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:24.115 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:24.375 [2024-07-25 18:49:24.720508] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:24.375 [2024-07-25 18:49:24.720747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:24.375 [2024-07-25 18:49:24.721017] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:24.375 [2024-07-25 18:49:24.723499] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:24.375 [2024-07-25 18:49:24.723798] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:24.375 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:24.375 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:24.375 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:24.375 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:24.375 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:24.375 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:24.375 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:24.375 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:24.375 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:24.375 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:24.375 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.375 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:24.634 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:24.634 "name": "Existed_Raid", 00:22:24.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.634 "strip_size_kb": 64, 00:22:24.634 "state": "configuring", 00:22:24.634 "raid_level": "raid0", 00:22:24.634 "superblock": false, 00:22:24.634 "num_base_bdevs": 4, 00:22:24.634 "num_base_bdevs_discovered": 3, 00:22:24.634 "num_base_bdevs_operational": 4, 00:22:24.634 "base_bdevs_list": [ 00:22:24.634 { 00:22:24.634 "name": "BaseBdev1", 00:22:24.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.634 "is_configured": 
false, 00:22:24.634 "data_offset": 0, 00:22:24.634 "data_size": 0 00:22:24.634 }, 00:22:24.634 { 00:22:24.634 "name": "BaseBdev2", 00:22:24.634 "uuid": "2cd5de17-290f-47e3-bf91-d3cb2260eff0", 00:22:24.634 "is_configured": true, 00:22:24.634 "data_offset": 0, 00:22:24.634 "data_size": 65536 00:22:24.634 }, 00:22:24.634 { 00:22:24.634 "name": "BaseBdev3", 00:22:24.634 "uuid": "2cefa589-9a07-4602-be17-c93986f25ffa", 00:22:24.634 "is_configured": true, 00:22:24.634 "data_offset": 0, 00:22:24.634 "data_size": 65536 00:22:24.634 }, 00:22:24.634 { 00:22:24.634 "name": "BaseBdev4", 00:22:24.634 "uuid": "11e44046-41a5-408d-ad77-867ef2bedbb7", 00:22:24.634 "is_configured": true, 00:22:24.634 "data_offset": 0, 00:22:24.634 "data_size": 65536 00:22:24.634 } 00:22:24.634 ] 00:22:24.634 }' 00:22:24.634 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:24.634 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.202 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:25.202 [2024-07-25 18:49:25.684751] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:25.202 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:25.202 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:25.202 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:25.202 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:25.203 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:25.203 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:25.203 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:25.203 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:25.203 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:25.203 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:25.203 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.203 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:25.462 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:25.462 "name": "Existed_Raid", 00:22:25.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.462 "strip_size_kb": 64, 00:22:25.462 "state": "configuring", 00:22:25.462 "raid_level": "raid0", 00:22:25.462 "superblock": false, 00:22:25.462 "num_base_bdevs": 4, 00:22:25.462 "num_base_bdevs_discovered": 2, 00:22:25.462 "num_base_bdevs_operational": 4, 00:22:25.462 "base_bdevs_list": [ 00:22:25.462 { 00:22:25.462 "name": "BaseBdev1", 00:22:25.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.462 "is_configured": false, 00:22:25.462 "data_offset": 0, 00:22:25.462 "data_size": 0 00:22:25.462 }, 00:22:25.462 { 00:22:25.462 "name": null, 00:22:25.462 "uuid": 
"2cd5de17-290f-47e3-bf91-d3cb2260eff0", 00:22:25.462 "is_configured": false, 00:22:25.462 "data_offset": 0, 00:22:25.462 "data_size": 65536 00:22:25.462 }, 00:22:25.462 { 00:22:25.462 "name": "BaseBdev3", 00:22:25.462 "uuid": "2cefa589-9a07-4602-be17-c93986f25ffa", 00:22:25.462 "is_configured": true, 00:22:25.462 "data_offset": 0, 00:22:25.462 "data_size": 65536 00:22:25.462 }, 00:22:25.462 { 00:22:25.462 "name": "BaseBdev4", 00:22:25.462 "uuid": "11e44046-41a5-408d-ad77-867ef2bedbb7", 00:22:25.462 "is_configured": true, 00:22:25.462 "data_offset": 0, 00:22:25.462 "data_size": 65536 00:22:25.462 } 00:22:25.462 ] 00:22:25.462 }' 00:22:25.462 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:25.462 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.030 18:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:26.030 18:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.289 18:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:26.289 18:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:26.289 [2024-07-25 18:49:26.834182] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:26.289 BaseBdev1 00:22:26.289 18:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:26.289 18:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:22:26.289 18:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:26.289 18:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:26.289 18:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:26.289 18:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:26.289 18:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:26.549 18:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:26.808 [ 00:22:26.808 { 00:22:26.808 "name": "BaseBdev1", 00:22:26.808 "aliases": [ 00:22:26.808 "dd36333e-a485-4117-8d01-9c2a6b600daa" 00:22:26.808 ], 00:22:26.808 "product_name": "Malloc disk", 00:22:26.808 "block_size": 512, 00:22:26.808 "num_blocks": 65536, 00:22:26.808 "uuid": "dd36333e-a485-4117-8d01-9c2a6b600daa", 00:22:26.808 "assigned_rate_limits": { 00:22:26.808 "rw_ios_per_sec": 0, 00:22:26.808 "rw_mbytes_per_sec": 0, 00:22:26.808 "r_mbytes_per_sec": 0, 00:22:26.808 "w_mbytes_per_sec": 0 00:22:26.808 }, 00:22:26.808 "claimed": true, 00:22:26.808 "claim_type": "exclusive_write", 00:22:26.808 "zoned": false, 00:22:26.808 "supported_io_types": { 00:22:26.808 "read": true, 00:22:26.808 "write": true, 00:22:26.808 "unmap": true, 00:22:26.808 "flush": true, 00:22:26.808 "reset": true, 00:22:26.808 "nvme_admin": false, 00:22:26.808 "nvme_io": false, 00:22:26.808 
"nvme_io_md": false, 00:22:26.808 "write_zeroes": true, 00:22:26.808 "zcopy": true, 00:22:26.808 "get_zone_info": false, 00:22:26.808 "zone_management": false, 00:22:26.808 "zone_append": false, 00:22:26.808 "compare": false, 00:22:26.808 "compare_and_write": false, 00:22:26.808 "abort": true, 00:22:26.808 "seek_hole": false, 00:22:26.808 "seek_data": false, 00:22:26.808 "copy": true, 00:22:26.808 "nvme_iov_md": false 00:22:26.808 }, 00:22:26.808 "memory_domains": [ 00:22:26.808 { 00:22:26.808 "dma_device_id": "system", 00:22:26.808 "dma_device_type": 1 00:22:26.808 }, 00:22:26.808 { 00:22:26.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.808 "dma_device_type": 2 00:22:26.808 } 00:22:26.808 ], 00:22:26.808 "driver_specific": {} 00:22:26.808 } 00:22:26.808 ] 00:22:26.808 18:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:26.808 18:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:26.808 18:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:26.808 18:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:26.808 18:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:26.808 18:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:26.808 18:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:26.808 18:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:26.808 18:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:26.808 18:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:26.808 18:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:26.808 18:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.808 18:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:27.067 18:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:27.067 "name": "Existed_Raid", 00:22:27.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.067 "strip_size_kb": 64, 00:22:27.067 "state": "configuring", 00:22:27.067 "raid_level": "raid0", 00:22:27.067 "superblock": false, 00:22:27.067 "num_base_bdevs": 4, 00:22:27.067 "num_base_bdevs_discovered": 3, 00:22:27.067 "num_base_bdevs_operational": 4, 00:22:27.067 "base_bdevs_list": [ 00:22:27.067 { 00:22:27.067 "name": "BaseBdev1", 00:22:27.067 "uuid": "dd36333e-a485-4117-8d01-9c2a6b600daa", 00:22:27.067 "is_configured": true, 00:22:27.067 "data_offset": 0, 00:22:27.067 "data_size": 65536 00:22:27.067 }, 00:22:27.067 { 00:22:27.067 "name": null, 00:22:27.067 "uuid": "2cd5de17-290f-47e3-bf91-d3cb2260eff0", 00:22:27.067 "is_configured": false, 00:22:27.067 "data_offset": 0, 00:22:27.067 "data_size": 65536 00:22:27.067 }, 00:22:27.067 { 00:22:27.067 "name": "BaseBdev3", 00:22:27.067 "uuid": "2cefa589-9a07-4602-be17-c93986f25ffa", 00:22:27.067 "is_configured": true, 00:22:27.067 "data_offset": 0, 00:22:27.067 "data_size": 65536 00:22:27.067 }, 00:22:27.067 { 00:22:27.067 
"name": "BaseBdev4", 00:22:27.067 "uuid": "11e44046-41a5-408d-ad77-867ef2bedbb7", 00:22:27.067 "is_configured": true, 00:22:27.067 "data_offset": 0, 00:22:27.067 "data_size": 65536 00:22:27.067 } 00:22:27.067 ] 00:22:27.067 }' 00:22:27.067 18:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:27.067 18:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.635 18:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.635 18:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:27.893 18:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:27.894 18:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:28.156 [2024-07-25 18:49:28.530594] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:28.156 18:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:28.156 18:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:28.156 18:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:28.156 18:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:28.156 18:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:28.156 18:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:28.156 18:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:28.156 18:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:28.156 18:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:28.156 18:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:28.156 18:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:28.156 18:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.423 18:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:28.423 "name": "Existed_Raid", 00:22:28.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.423 "strip_size_kb": 64, 00:22:28.423 "state": "configuring", 00:22:28.423 "raid_level": "raid0", 00:22:28.423 "superblock": false, 00:22:28.423 "num_base_bdevs": 4, 00:22:28.423 "num_base_bdevs_discovered": 2, 00:22:28.423 "num_base_bdevs_operational": 4, 00:22:28.423 "base_bdevs_list": [ 00:22:28.423 { 00:22:28.423 "name": "BaseBdev1", 00:22:28.423 "uuid": "dd36333e-a485-4117-8d01-9c2a6b600daa", 00:22:28.423 "is_configured": true, 00:22:28.423 "data_offset": 0, 00:22:28.423 "data_size": 65536 00:22:28.423 }, 00:22:28.423 { 00:22:28.423 "name": null, 00:22:28.423 "uuid": "2cd5de17-290f-47e3-bf91-d3cb2260eff0", 00:22:28.423 "is_configured": false, 00:22:28.423 "data_offset": 0, 00:22:28.423 "data_size": 
65536 00:22:28.423 }, 00:22:28.423 { 00:22:28.423 "name": null, 00:22:28.423 "uuid": "2cefa589-9a07-4602-be17-c93986f25ffa", 00:22:28.423 "is_configured": false, 00:22:28.423 "data_offset": 0, 00:22:28.423 "data_size": 65536 00:22:28.423 }, 00:22:28.423 { 00:22:28.423 "name": "BaseBdev4", 00:22:28.423 "uuid": "11e44046-41a5-408d-ad77-867ef2bedbb7", 00:22:28.423 "is_configured": true, 00:22:28.423 "data_offset": 0, 00:22:28.423 "data_size": 65536 00:22:28.423 } 00:22:28.423 ] 00:22:28.423 }' 00:22:28.423 18:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:28.423 18:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.690 18:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.690 18:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:28.949 18:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:28.949 18:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:29.208 [2024-07-25 18:49:29.598630] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:29.209 18:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:29.209 18:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:29.209 18:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:29.209 18:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:29.209 18:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:29.209 18:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:29.209 18:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:29.209 18:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:29.209 18:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:29.209 18:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:29.209 18:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.209 18:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:29.467 18:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:29.467 "name": "Existed_Raid", 00:22:29.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.467 "strip_size_kb": 64, 00:22:29.467 "state": "configuring", 00:22:29.467 "raid_level": "raid0", 00:22:29.467 "superblock": false, 00:22:29.467 "num_base_bdevs": 4, 00:22:29.467 "num_base_bdevs_discovered": 3, 00:22:29.467 "num_base_bdevs_operational": 4, 00:22:29.467 "base_bdevs_list": [ 00:22:29.467 { 00:22:29.467 "name": "BaseBdev1", 00:22:29.467 "uuid": "dd36333e-a485-4117-8d01-9c2a6b600daa", 00:22:29.467 
"is_configured": true, 00:22:29.467 "data_offset": 0, 00:22:29.467 "data_size": 65536 00:22:29.467 }, 00:22:29.467 { 00:22:29.467 "name": null, 00:22:29.467 "uuid": "2cd5de17-290f-47e3-bf91-d3cb2260eff0", 00:22:29.467 "is_configured": false, 00:22:29.467 "data_offset": 0, 00:22:29.467 "data_size": 65536 00:22:29.467 }, 00:22:29.467 { 00:22:29.467 "name": "BaseBdev3", 00:22:29.467 "uuid": "2cefa589-9a07-4602-be17-c93986f25ffa", 00:22:29.467 "is_configured": true, 00:22:29.467 "data_offset": 0, 00:22:29.467 "data_size": 65536 00:22:29.467 }, 00:22:29.467 { 00:22:29.467 "name": "BaseBdev4", 00:22:29.467 "uuid": "11e44046-41a5-408d-ad77-867ef2bedbb7", 00:22:29.467 "is_configured": true, 00:22:29.467 "data_offset": 0, 00:22:29.467 "data_size": 65536 00:22:29.467 } 00:22:29.467 ] 00:22:29.467 }' 00:22:29.467 18:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:29.467 18:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.035 18:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.035 18:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:30.294 18:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:30.294 18:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:30.553 [2024-07-25 18:49:30.886238] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:30.553 18:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:30.553 18:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:30.553 18:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:30.553 18:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:30.553 18:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:30.553 18:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:30.553 18:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:30.553 18:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:30.553 18:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:30.553 18:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:30.553 18:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.553 18:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:30.812 18:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:30.812 "name": "Existed_Raid", 00:22:30.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.812 "strip_size_kb": 64, 00:22:30.812 "state": "configuring", 00:22:30.812 "raid_level": "raid0", 00:22:30.812 "superblock": false, 00:22:30.812 
"num_base_bdevs": 4, 00:22:30.812 "num_base_bdevs_discovered": 2, 00:22:30.812 "num_base_bdevs_operational": 4, 00:22:30.812 "base_bdevs_list": [ 00:22:30.812 { 00:22:30.812 "name": null, 00:22:30.812 "uuid": "dd36333e-a485-4117-8d01-9c2a6b600daa", 00:22:30.812 "is_configured": false, 00:22:30.812 "data_offset": 0, 00:22:30.812 "data_size": 65536 00:22:30.812 }, 00:22:30.812 { 00:22:30.812 "name": null, 00:22:30.812 "uuid": "2cd5de17-290f-47e3-bf91-d3cb2260eff0", 00:22:30.812 "is_configured": false, 00:22:30.812 "data_offset": 0, 00:22:30.812 "data_size": 65536 00:22:30.812 }, 00:22:30.812 { 00:22:30.812 "name": "BaseBdev3", 00:22:30.812 "uuid": "2cefa589-9a07-4602-be17-c93986f25ffa", 00:22:30.812 "is_configured": true, 00:22:30.812 "data_offset": 0, 00:22:30.812 "data_size": 65536 00:22:30.812 }, 00:22:30.812 { 00:22:30.812 "name": "BaseBdev4", 00:22:30.812 "uuid": "11e44046-41a5-408d-ad77-867ef2bedbb7", 00:22:30.812 "is_configured": true, 00:22:30.812 "data_offset": 0, 00:22:30.812 "data_size": 65536 00:22:30.812 } 00:22:30.812 ] 00:22:30.812 }' 00:22:30.812 18:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:30.812 18:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.380 18:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:31.380 18:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.380 18:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:31.380 18:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:31.639 [2024-07-25 18:49:32.042489] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:31.639 18:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:31.639 18:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:31.639 18:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:31.639 18:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:31.639 18:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:31.639 18:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:31.639 18:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:31.639 18:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:31.639 18:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:31.639 18:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:31.639 18:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.639 18:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:31.898 18:49:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:31.898 "name": "Existed_Raid", 00:22:31.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.898 "strip_size_kb": 64, 00:22:31.898 "state": "configuring", 00:22:31.898 "raid_level": "raid0", 00:22:31.898 "superblock": false, 00:22:31.898 "num_base_bdevs": 4, 00:22:31.898 "num_base_bdevs_discovered": 3, 00:22:31.898 "num_base_bdevs_operational": 4, 00:22:31.898 "base_bdevs_list": [ 00:22:31.898 { 00:22:31.898 "name": null, 00:22:31.898 "uuid": "dd36333e-a485-4117-8d01-9c2a6b600daa", 00:22:31.898 "is_configured": false, 00:22:31.898 "data_offset": 0, 00:22:31.898 "data_size": 65536 00:22:31.898 }, 00:22:31.898 { 00:22:31.898 "name": "BaseBdev2", 00:22:31.898 "uuid": "2cd5de17-290f-47e3-bf91-d3cb2260eff0", 00:22:31.898 "is_configured": true, 00:22:31.898 "data_offset": 0, 00:22:31.898 "data_size": 65536 00:22:31.898 }, 00:22:31.898 { 00:22:31.898 "name": "BaseBdev3", 00:22:31.898 "uuid": "2cefa589-9a07-4602-be17-c93986f25ffa", 00:22:31.898 "is_configured": true, 00:22:31.898 "data_offset": 0, 00:22:31.898 "data_size": 65536 00:22:31.898 }, 00:22:31.898 { 00:22:31.898 "name": "BaseBdev4", 00:22:31.898 "uuid": "11e44046-41a5-408d-ad77-867ef2bedbb7", 00:22:31.898 "is_configured": true, 00:22:31.898 "data_offset": 0, 00:22:31.898 "data_size": 65536 00:22:31.898 } 00:22:31.898 ] 00:22:31.898 }' 00:22:31.898 18:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:31.898 18:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.465 18:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.465 18:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:32.723 18:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:32.723 18:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.723 18:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:32.980 18:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u dd36333e-a485-4117-8d01-9c2a6b600daa 00:22:33.238 [2024-07-25 18:49:33.622625] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:33.238 [2024-07-25 18:49:33.622838] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:22:33.238 [2024-07-25 18:49:33.622879] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:22:33.238 [2024-07-25 18:49:33.623076] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:33.238 [2024-07-25 18:49:33.623448] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:22:33.238 [2024-07-25 18:49:33.623489] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:22:33.238 [2024-07-25 18:49:33.623840] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:33.238 NewBaseBdev 00:22:33.238 18:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # 
waitforbdev NewBaseBdev 00:22:33.238 18:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:22:33.238 18:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:33.238 18:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:33.238 18:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:33.238 18:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:33.238 18:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:33.496 18:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:33.496 [ 00:22:33.496 { 00:22:33.496 "name": "NewBaseBdev", 00:22:33.496 "aliases": [ 00:22:33.496 "dd36333e-a485-4117-8d01-9c2a6b600daa" 00:22:33.496 ], 00:22:33.496 "product_name": "Malloc disk", 00:22:33.496 "block_size": 512, 00:22:33.496 "num_blocks": 65536, 00:22:33.496 "uuid": "dd36333e-a485-4117-8d01-9c2a6b600daa", 00:22:33.496 "assigned_rate_limits": { 00:22:33.496 "rw_ios_per_sec": 0, 00:22:33.496 "rw_mbytes_per_sec": 0, 00:22:33.496 "r_mbytes_per_sec": 0, 00:22:33.496 "w_mbytes_per_sec": 0 00:22:33.496 }, 00:22:33.496 "claimed": true, 00:22:33.496 "claim_type": "exclusive_write", 00:22:33.496 "zoned": false, 00:22:33.496 "supported_io_types": { 00:22:33.496 "read": true, 00:22:33.496 "write": true, 00:22:33.496 "unmap": true, 00:22:33.496 "flush": true, 00:22:33.496 "reset": true, 00:22:33.496 "nvme_admin": false, 00:22:33.496 "nvme_io": false, 00:22:33.496 "nvme_io_md": false, 00:22:33.496 "write_zeroes": true, 00:22:33.496 "zcopy": true, 00:22:33.496 "get_zone_info": false, 00:22:33.496 "zone_management": false, 00:22:33.496 "zone_append": false, 00:22:33.496 "compare": false, 00:22:33.496 "compare_and_write": false, 00:22:33.496 "abort": true, 00:22:33.496 "seek_hole": false, 00:22:33.496 "seek_data": false, 00:22:33.496 "copy": true, 00:22:33.496 "nvme_iov_md": false 00:22:33.496 }, 00:22:33.496 "memory_domains": [ 00:22:33.496 { 00:22:33.496 "dma_device_id": "system", 00:22:33.496 "dma_device_type": 1 00:22:33.496 }, 00:22:33.496 { 00:22:33.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.496 "dma_device_type": 2 00:22:33.496 } 00:22:33.496 ], 00:22:33.496 "driver_specific": {} 00:22:33.496 } 00:22:33.496 ] 00:22:33.496 18:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:33.496 18:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:33.496 18:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:33.496 18:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:33.496 18:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:33.496 18:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:33.496 18:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:33.496 18:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:22:33.496 18:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:33.496 18:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:33.496 18:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:33.496 18:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.496 18:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.754 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:33.754 "name": "Existed_Raid", 00:22:33.754 "uuid": "20bbe05a-0df3-4032-bc00-70071f6a8fe3", 00:22:33.754 "strip_size_kb": 64, 00:22:33.754 "state": "online", 00:22:33.754 "raid_level": "raid0", 00:22:33.754 "superblock": false, 00:22:33.754 "num_base_bdevs": 4, 00:22:33.754 "num_base_bdevs_discovered": 4, 00:22:33.754 "num_base_bdevs_operational": 4, 00:22:33.754 "base_bdevs_list": [ 00:22:33.754 { 00:22:33.754 "name": "NewBaseBdev", 00:22:33.754 "uuid": "dd36333e-a485-4117-8d01-9c2a6b600daa", 00:22:33.754 "is_configured": true, 00:22:33.754 "data_offset": 0, 00:22:33.754 "data_size": 65536 00:22:33.754 }, 00:22:33.754 { 00:22:33.754 "name": "BaseBdev2", 00:22:33.754 "uuid": "2cd5de17-290f-47e3-bf91-d3cb2260eff0", 00:22:33.754 "is_configured": true, 00:22:33.754 "data_offset": 0, 00:22:33.754 "data_size": 65536 00:22:33.754 }, 00:22:33.754 { 00:22:33.754 "name": "BaseBdev3", 00:22:33.754 "uuid": "2cefa589-9a07-4602-be17-c93986f25ffa", 00:22:33.754 "is_configured": true, 00:22:33.754 "data_offset": 0, 00:22:33.754 "data_size": 65536 00:22:33.754 }, 00:22:33.754 { 00:22:33.754 "name": "BaseBdev4", 00:22:33.754 "uuid": "11e44046-41a5-408d-ad77-867ef2bedbb7", 00:22:33.754 "is_configured": true, 00:22:33.754 "data_offset": 0, 00:22:33.754 "data_size": 65536 00:22:33.754 } 00:22:33.754 ] 00:22:33.754 }' 00:22:33.754 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:33.754 18:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.321 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:34.321 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:34.321 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:34.321 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:34.321 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:34.321 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:34.321 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:34.321 18:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:34.580 [2024-07-25 18:49:35.030639] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:34.580 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:34.580 "name": "Existed_Raid", 00:22:34.580 "aliases": [ 00:22:34.580 
"20bbe05a-0df3-4032-bc00-70071f6a8fe3" 00:22:34.580 ], 00:22:34.580 "product_name": "Raid Volume", 00:22:34.580 "block_size": 512, 00:22:34.580 "num_blocks": 262144, 00:22:34.580 "uuid": "20bbe05a-0df3-4032-bc00-70071f6a8fe3", 00:22:34.580 "assigned_rate_limits": { 00:22:34.580 "rw_ios_per_sec": 0, 00:22:34.580 "rw_mbytes_per_sec": 0, 00:22:34.580 "r_mbytes_per_sec": 0, 00:22:34.580 "w_mbytes_per_sec": 0 00:22:34.580 }, 00:22:34.580 "claimed": false, 00:22:34.580 "zoned": false, 00:22:34.580 "supported_io_types": { 00:22:34.580 "read": true, 00:22:34.580 "write": true, 00:22:34.580 "unmap": true, 00:22:34.580 "flush": true, 00:22:34.580 "reset": true, 00:22:34.580 "nvme_admin": false, 00:22:34.580 "nvme_io": false, 00:22:34.580 "nvme_io_md": false, 00:22:34.580 "write_zeroes": true, 00:22:34.580 "zcopy": false, 00:22:34.580 "get_zone_info": false, 00:22:34.580 "zone_management": false, 00:22:34.580 "zone_append": false, 00:22:34.580 "compare": false, 00:22:34.580 "compare_and_write": false, 00:22:34.580 "abort": false, 00:22:34.580 "seek_hole": false, 00:22:34.580 "seek_data": false, 00:22:34.580 "copy": false, 00:22:34.580 "nvme_iov_md": false 00:22:34.580 }, 00:22:34.580 "memory_domains": [ 00:22:34.580 { 00:22:34.580 "dma_device_id": "system", 00:22:34.580 "dma_device_type": 1 00:22:34.580 }, 00:22:34.580 { 00:22:34.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.580 "dma_device_type": 2 00:22:34.580 }, 00:22:34.580 { 00:22:34.580 "dma_device_id": "system", 00:22:34.580 "dma_device_type": 1 00:22:34.580 }, 00:22:34.580 { 00:22:34.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.580 "dma_device_type": 2 00:22:34.580 }, 00:22:34.580 { 00:22:34.580 "dma_device_id": "system", 00:22:34.580 "dma_device_type": 1 00:22:34.580 }, 00:22:34.580 { 00:22:34.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.580 "dma_device_type": 2 00:22:34.580 }, 00:22:34.580 { 00:22:34.580 "dma_device_id": "system", 00:22:34.580 "dma_device_type": 1 00:22:34.580 }, 00:22:34.580 { 00:22:34.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.580 "dma_device_type": 2 00:22:34.580 } 00:22:34.580 ], 00:22:34.580 "driver_specific": { 00:22:34.580 "raid": { 00:22:34.580 "uuid": "20bbe05a-0df3-4032-bc00-70071f6a8fe3", 00:22:34.580 "strip_size_kb": 64, 00:22:34.580 "state": "online", 00:22:34.580 "raid_level": "raid0", 00:22:34.580 "superblock": false, 00:22:34.580 "num_base_bdevs": 4, 00:22:34.580 "num_base_bdevs_discovered": 4, 00:22:34.580 "num_base_bdevs_operational": 4, 00:22:34.580 "base_bdevs_list": [ 00:22:34.580 { 00:22:34.580 "name": "NewBaseBdev", 00:22:34.580 "uuid": "dd36333e-a485-4117-8d01-9c2a6b600daa", 00:22:34.580 "is_configured": true, 00:22:34.580 "data_offset": 0, 00:22:34.580 "data_size": 65536 00:22:34.580 }, 00:22:34.580 { 00:22:34.580 "name": "BaseBdev2", 00:22:34.580 "uuid": "2cd5de17-290f-47e3-bf91-d3cb2260eff0", 00:22:34.580 "is_configured": true, 00:22:34.580 "data_offset": 0, 00:22:34.580 "data_size": 65536 00:22:34.580 }, 00:22:34.580 { 00:22:34.580 "name": "BaseBdev3", 00:22:34.580 "uuid": "2cefa589-9a07-4602-be17-c93986f25ffa", 00:22:34.580 "is_configured": true, 00:22:34.580 "data_offset": 0, 00:22:34.580 "data_size": 65536 00:22:34.580 }, 00:22:34.580 { 00:22:34.580 "name": "BaseBdev4", 00:22:34.580 "uuid": "11e44046-41a5-408d-ad77-867ef2bedbb7", 00:22:34.580 "is_configured": true, 00:22:34.580 "data_offset": 0, 00:22:34.580 "data_size": 65536 00:22:34.580 } 00:22:34.580 ] 00:22:34.580 } 00:22:34.580 } 00:22:34.580 }' 00:22:34.580 18:49:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:34.580 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:34.580 BaseBdev2 00:22:34.580 BaseBdev3 00:22:34.580 BaseBdev4' 00:22:34.580 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:34.580 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:34.580 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:34.839 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:34.839 "name": "NewBaseBdev", 00:22:34.839 "aliases": [ 00:22:34.839 "dd36333e-a485-4117-8d01-9c2a6b600daa" 00:22:34.839 ], 00:22:34.839 "product_name": "Malloc disk", 00:22:34.839 "block_size": 512, 00:22:34.839 "num_blocks": 65536, 00:22:34.839 "uuid": "dd36333e-a485-4117-8d01-9c2a6b600daa", 00:22:34.839 "assigned_rate_limits": { 00:22:34.839 "rw_ios_per_sec": 0, 00:22:34.839 "rw_mbytes_per_sec": 0, 00:22:34.839 "r_mbytes_per_sec": 0, 00:22:34.839 "w_mbytes_per_sec": 0 00:22:34.839 }, 00:22:34.839 "claimed": true, 00:22:34.839 "claim_type": "exclusive_write", 00:22:34.839 "zoned": false, 00:22:34.839 "supported_io_types": { 00:22:34.839 "read": true, 00:22:34.839 "write": true, 00:22:34.839 "unmap": true, 00:22:34.839 "flush": true, 00:22:34.839 "reset": true, 00:22:34.839 "nvme_admin": false, 00:22:34.839 "nvme_io": false, 00:22:34.839 "nvme_io_md": false, 00:22:34.839 "write_zeroes": true, 00:22:34.839 "zcopy": true, 00:22:34.839 "get_zone_info": false, 00:22:34.839 "zone_management": false, 00:22:34.839 "zone_append": false, 00:22:34.839 "compare": false, 00:22:34.839 "compare_and_write": false, 00:22:34.839 "abort": true, 00:22:34.839 "seek_hole": false, 00:22:34.839 "seek_data": false, 00:22:34.839 "copy": true, 00:22:34.839 "nvme_iov_md": false 00:22:34.839 }, 00:22:34.839 "memory_domains": [ 00:22:34.839 { 00:22:34.839 "dma_device_id": "system", 00:22:34.839 "dma_device_type": 1 00:22:34.839 }, 00:22:34.839 { 00:22:34.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.839 "dma_device_type": 2 00:22:34.839 } 00:22:34.839 ], 00:22:34.839 "driver_specific": {} 00:22:34.839 }' 00:22:34.839 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.097 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.097 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:35.097 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.097 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.097 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:35.097 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.097 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.097 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:35.097 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.356 18:49:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.356 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:35.356 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:35.356 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:35.356 18:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:35.614 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:35.614 "name": "BaseBdev2", 00:22:35.614 "aliases": [ 00:22:35.614 "2cd5de17-290f-47e3-bf91-d3cb2260eff0" 00:22:35.614 ], 00:22:35.614 "product_name": "Malloc disk", 00:22:35.614 "block_size": 512, 00:22:35.614 "num_blocks": 65536, 00:22:35.614 "uuid": "2cd5de17-290f-47e3-bf91-d3cb2260eff0", 00:22:35.614 "assigned_rate_limits": { 00:22:35.614 "rw_ios_per_sec": 0, 00:22:35.614 "rw_mbytes_per_sec": 0, 00:22:35.614 "r_mbytes_per_sec": 0, 00:22:35.614 "w_mbytes_per_sec": 0 00:22:35.614 }, 00:22:35.614 "claimed": true, 00:22:35.614 "claim_type": "exclusive_write", 00:22:35.614 "zoned": false, 00:22:35.614 "supported_io_types": { 00:22:35.614 "read": true, 00:22:35.614 "write": true, 00:22:35.614 "unmap": true, 00:22:35.614 "flush": true, 00:22:35.614 "reset": true, 00:22:35.614 "nvme_admin": false, 00:22:35.614 "nvme_io": false, 00:22:35.614 "nvme_io_md": false, 00:22:35.614 "write_zeroes": true, 00:22:35.614 "zcopy": true, 00:22:35.614 "get_zone_info": false, 00:22:35.614 "zone_management": false, 00:22:35.614 "zone_append": false, 00:22:35.614 "compare": false, 00:22:35.614 "compare_and_write": false, 00:22:35.614 "abort": true, 00:22:35.614 "seek_hole": false, 00:22:35.614 "seek_data": false, 00:22:35.614 "copy": true, 00:22:35.614 "nvme_iov_md": false 00:22:35.614 }, 00:22:35.614 "memory_domains": [ 00:22:35.614 { 00:22:35.614 "dma_device_id": "system", 00:22:35.614 "dma_device_type": 1 00:22:35.614 }, 00:22:35.614 { 00:22:35.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.614 "dma_device_type": 2 00:22:35.614 } 00:22:35.614 ], 00:22:35.614 "driver_specific": {} 00:22:35.614 }' 00:22:35.614 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.614 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.614 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:35.614 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.614 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.872 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:35.873 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.873 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.873 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:35.873 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.873 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.873 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:35.873 18:49:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:35.873 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:35.873 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:36.131 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:36.131 "name": "BaseBdev3", 00:22:36.131 "aliases": [ 00:22:36.131 "2cefa589-9a07-4602-be17-c93986f25ffa" 00:22:36.131 ], 00:22:36.131 "product_name": "Malloc disk", 00:22:36.131 "block_size": 512, 00:22:36.131 "num_blocks": 65536, 00:22:36.131 "uuid": "2cefa589-9a07-4602-be17-c93986f25ffa", 00:22:36.131 "assigned_rate_limits": { 00:22:36.131 "rw_ios_per_sec": 0, 00:22:36.131 "rw_mbytes_per_sec": 0, 00:22:36.131 "r_mbytes_per_sec": 0, 00:22:36.131 "w_mbytes_per_sec": 0 00:22:36.131 }, 00:22:36.131 "claimed": true, 00:22:36.131 "claim_type": "exclusive_write", 00:22:36.131 "zoned": false, 00:22:36.131 "supported_io_types": { 00:22:36.131 "read": true, 00:22:36.131 "write": true, 00:22:36.131 "unmap": true, 00:22:36.131 "flush": true, 00:22:36.131 "reset": true, 00:22:36.131 "nvme_admin": false, 00:22:36.131 "nvme_io": false, 00:22:36.131 "nvme_io_md": false, 00:22:36.131 "write_zeroes": true, 00:22:36.131 "zcopy": true, 00:22:36.131 "get_zone_info": false, 00:22:36.131 "zone_management": false, 00:22:36.131 "zone_append": false, 00:22:36.131 "compare": false, 00:22:36.131 "compare_and_write": false, 00:22:36.131 "abort": true, 00:22:36.131 "seek_hole": false, 00:22:36.131 "seek_data": false, 00:22:36.131 "copy": true, 00:22:36.131 "nvme_iov_md": false 00:22:36.131 }, 00:22:36.131 "memory_domains": [ 00:22:36.131 { 00:22:36.131 "dma_device_id": "system", 00:22:36.131 "dma_device_type": 1 00:22:36.131 }, 00:22:36.131 { 00:22:36.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.131 "dma_device_type": 2 00:22:36.131 } 00:22:36.131 ], 00:22:36.131 "driver_specific": {} 00:22:36.131 }' 00:22:36.131 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:36.389 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:36.389 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:36.389 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:36.389 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:36.389 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:36.389 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:36.389 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:36.650 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:36.650 18:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:36.650 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:36.650 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:36.650 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:36.650 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:36.650 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:36.909 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:36.909 "name": "BaseBdev4", 00:22:36.909 "aliases": [ 00:22:36.909 "11e44046-41a5-408d-ad77-867ef2bedbb7" 00:22:36.909 ], 00:22:36.909 "product_name": "Malloc disk", 00:22:36.909 "block_size": 512, 00:22:36.909 "num_blocks": 65536, 00:22:36.909 "uuid": "11e44046-41a5-408d-ad77-867ef2bedbb7", 00:22:36.909 "assigned_rate_limits": { 00:22:36.909 "rw_ios_per_sec": 0, 00:22:36.909 "rw_mbytes_per_sec": 0, 00:22:36.909 "r_mbytes_per_sec": 0, 00:22:36.909 "w_mbytes_per_sec": 0 00:22:36.909 }, 00:22:36.909 "claimed": true, 00:22:36.909 "claim_type": "exclusive_write", 00:22:36.909 "zoned": false, 00:22:36.909 "supported_io_types": { 00:22:36.909 "read": true, 00:22:36.909 "write": true, 00:22:36.909 "unmap": true, 00:22:36.909 "flush": true, 00:22:36.909 "reset": true, 00:22:36.909 "nvme_admin": false, 00:22:36.909 "nvme_io": false, 00:22:36.909 "nvme_io_md": false, 00:22:36.909 "write_zeroes": true, 00:22:36.909 "zcopy": true, 00:22:36.909 "get_zone_info": false, 00:22:36.909 "zone_management": false, 00:22:36.909 "zone_append": false, 00:22:36.909 "compare": false, 00:22:36.909 "compare_and_write": false, 00:22:36.909 "abort": true, 00:22:36.909 "seek_hole": false, 00:22:36.909 "seek_data": false, 00:22:36.909 "copy": true, 00:22:36.909 "nvme_iov_md": false 00:22:36.909 }, 00:22:36.909 "memory_domains": [ 00:22:36.909 { 00:22:36.909 "dma_device_id": "system", 00:22:36.909 "dma_device_type": 1 00:22:36.909 }, 00:22:36.909 { 00:22:36.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.909 "dma_device_type": 2 00:22:36.909 } 00:22:36.909 ], 00:22:36.909 "driver_specific": {} 00:22:36.909 }' 00:22:36.909 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:36.909 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:36.909 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:36.909 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:36.909 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:37.166 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:37.166 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:37.166 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:37.167 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:37.167 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:37.167 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:37.167 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:37.167 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:37.424 [2024-07-25 18:49:37.930826] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:37.424 [2024-07-25 18:49:37.931013] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:22:37.424 [2024-07-25 18:49:37.931226] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:37.424 [2024-07-25 18:49:37.931421] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:37.424 [2024-07-25 18:49:37.931515] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:22:37.424 18:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 133688 00:22:37.424 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 133688 ']' 00:22:37.425 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 133688 00:22:37.425 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:22:37.425 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:37.425 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 133688 00:22:37.425 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:37.425 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:37.425 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 133688' 00:22:37.425 killing process with pid 133688 00:22:37.425 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 133688 00:22:37.425 [2024-07-25 18:49:37.979995] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:37.425 18:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 133688 00:22:37.991 [2024-07-25 18:49:38.322698] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:22:39.375 00:22:39.375 real 0m31.459s 00:22:39.375 user 0m56.212s 00:22:39.375 sys 0m5.208s 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.375 ************************************ 00:22:39.375 END TEST raid_state_function_test 00:22:39.375 ************************************ 00:22:39.375 18:49:39 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:22:39.375 18:49:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:39.375 18:49:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:39.375 18:49:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:39.375 ************************************ 00:22:39.375 START TEST raid_state_function_test_sb 00:22:39.375 ************************************ 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 
00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=134771 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 134771' 00:22:39.375 Process raid pid: 134771 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 134771 /var/tmp/spdk-raid.sock 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@831 -- # '[' -z 134771 ']' 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:39.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:39.375 18:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.375 [2024-07-25 18:49:39.685311] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:39.375 [2024-07-25 18:49:39.685715] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.375 [2024-07-25 18:49:39.876043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.633 [2024-07-25 18:49:40.142763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.891 [2024-07-25 18:49:40.334325] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:40.149 18:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:40.149 18:49:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:22:40.149 18:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:40.408 [2024-07-25 18:49:40.802909] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:40.408 [2024-07-25 18:49:40.803174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:40.408 [2024-07-25 18:49:40.803281] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:40.408 [2024-07-25 18:49:40.803337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:40.408 [2024-07-25 18:49:40.803406] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:40.408 [2024-07-25 18:49:40.803454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:40.408 [2024-07-25 18:49:40.803525] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:40.408 [2024-07-25 18:49:40.803580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:40.408 18:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:40.408 18:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:40.408 18:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:40.408 18:49:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:40.408 18:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:40.408 18:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:40.408 18:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:40.408 18:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:40.408 18:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:40.408 18:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:40.408 18:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.408 18:49:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:40.666 18:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:40.666 "name": "Existed_Raid", 00:22:40.666 "uuid": "fa2a5f3f-91a8-46e9-8d19-a516e2f1fbd4", 00:22:40.666 "strip_size_kb": 64, 00:22:40.666 "state": "configuring", 00:22:40.666 "raid_level": "raid0", 00:22:40.666 "superblock": true, 00:22:40.666 "num_base_bdevs": 4, 00:22:40.666 "num_base_bdevs_discovered": 0, 00:22:40.666 "num_base_bdevs_operational": 4, 00:22:40.666 "base_bdevs_list": [ 00:22:40.666 { 00:22:40.666 "name": "BaseBdev1", 00:22:40.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.666 "is_configured": false, 00:22:40.666 "data_offset": 0, 00:22:40.666 "data_size": 0 00:22:40.666 }, 00:22:40.666 { 00:22:40.666 "name": "BaseBdev2", 00:22:40.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.666 "is_configured": false, 00:22:40.666 "data_offset": 0, 00:22:40.666 "data_size": 0 00:22:40.666 }, 00:22:40.666 { 00:22:40.666 "name": "BaseBdev3", 00:22:40.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.666 "is_configured": false, 00:22:40.666 "data_offset": 0, 00:22:40.666 "data_size": 0 00:22:40.666 }, 00:22:40.666 { 00:22:40.666 "name": "BaseBdev4", 00:22:40.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.667 "is_configured": false, 00:22:40.667 "data_offset": 0, 00:22:40.667 "data_size": 0 00:22:40.667 } 00:22:40.667 ] 00:22:40.667 }' 00:22:40.667 18:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:40.667 18:49:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.231 18:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:41.232 [2024-07-25 18:49:41.730962] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:41.232 [2024-07-25 18:49:41.731190] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:22:41.232 18:49:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:41.489 [2024-07-25 18:49:41.991071] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:41.489 
[2024-07-25 18:49:41.991324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:41.489 [2024-07-25 18:49:41.991433] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:41.489 [2024-07-25 18:49:41.991517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:41.489 [2024-07-25 18:49:41.991636] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:41.489 [2024-07-25 18:49:41.991706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:41.489 [2024-07-25 18:49:41.991788] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:41.489 [2024-07-25 18:49:41.991840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:41.489 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:41.747 [2024-07-25 18:49:42.209824] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:41.747 BaseBdev1 00:22:41.747 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:41.747 18:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:22:41.747 18:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:41.747 18:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:41.747 18:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:41.747 18:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:41.747 18:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:42.005 18:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:42.005 [ 00:22:42.005 { 00:22:42.005 "name": "BaseBdev1", 00:22:42.005 "aliases": [ 00:22:42.005 "92b70088-8841-4404-a4b8-69bdfe6ac47e" 00:22:42.005 ], 00:22:42.005 "product_name": "Malloc disk", 00:22:42.005 "block_size": 512, 00:22:42.005 "num_blocks": 65536, 00:22:42.005 "uuid": "92b70088-8841-4404-a4b8-69bdfe6ac47e", 00:22:42.005 "assigned_rate_limits": { 00:22:42.005 "rw_ios_per_sec": 0, 00:22:42.005 "rw_mbytes_per_sec": 0, 00:22:42.005 "r_mbytes_per_sec": 0, 00:22:42.005 "w_mbytes_per_sec": 0 00:22:42.005 }, 00:22:42.005 "claimed": true, 00:22:42.005 "claim_type": "exclusive_write", 00:22:42.005 "zoned": false, 00:22:42.005 "supported_io_types": { 00:22:42.005 "read": true, 00:22:42.005 "write": true, 00:22:42.005 "unmap": true, 00:22:42.005 "flush": true, 00:22:42.005 "reset": true, 00:22:42.005 "nvme_admin": false, 00:22:42.005 "nvme_io": false, 00:22:42.005 "nvme_io_md": false, 00:22:42.005 "write_zeroes": true, 00:22:42.005 "zcopy": true, 00:22:42.005 "get_zone_info": false, 00:22:42.005 "zone_management": false, 00:22:42.005 "zone_append": false, 00:22:42.005 "compare": false, 00:22:42.005 "compare_and_write": false, 00:22:42.005 "abort": true, 00:22:42.005 "seek_hole": false, 
00:22:42.005 "seek_data": false, 00:22:42.005 "copy": true, 00:22:42.005 "nvme_iov_md": false 00:22:42.005 }, 00:22:42.005 "memory_domains": [ 00:22:42.005 { 00:22:42.005 "dma_device_id": "system", 00:22:42.005 "dma_device_type": 1 00:22:42.005 }, 00:22:42.005 { 00:22:42.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.005 "dma_device_type": 2 00:22:42.005 } 00:22:42.005 ], 00:22:42.005 "driver_specific": {} 00:22:42.005 } 00:22:42.005 ] 00:22:42.263 18:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:42.263 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:42.263 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:42.263 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:42.263 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:42.263 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:42.263 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:42.263 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:42.263 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:42.263 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:42.263 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:42.263 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.263 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:42.263 18:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:42.263 "name": "Existed_Raid", 00:22:42.263 "uuid": "7c66d94a-01fe-4d1c-8401-feddbbb2a918", 00:22:42.263 "strip_size_kb": 64, 00:22:42.263 "state": "configuring", 00:22:42.263 "raid_level": "raid0", 00:22:42.263 "superblock": true, 00:22:42.263 "num_base_bdevs": 4, 00:22:42.263 "num_base_bdevs_discovered": 1, 00:22:42.263 "num_base_bdevs_operational": 4, 00:22:42.263 "base_bdevs_list": [ 00:22:42.263 { 00:22:42.263 "name": "BaseBdev1", 00:22:42.263 "uuid": "92b70088-8841-4404-a4b8-69bdfe6ac47e", 00:22:42.264 "is_configured": true, 00:22:42.264 "data_offset": 2048, 00:22:42.264 "data_size": 63488 00:22:42.264 }, 00:22:42.264 { 00:22:42.264 "name": "BaseBdev2", 00:22:42.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.264 "is_configured": false, 00:22:42.264 "data_offset": 0, 00:22:42.264 "data_size": 0 00:22:42.264 }, 00:22:42.264 { 00:22:42.264 "name": "BaseBdev3", 00:22:42.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.264 "is_configured": false, 00:22:42.264 "data_offset": 0, 00:22:42.264 "data_size": 0 00:22:42.264 }, 00:22:42.264 { 00:22:42.264 "name": "BaseBdev4", 00:22:42.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.264 "is_configured": false, 00:22:42.264 "data_offset": 0, 00:22:42.264 "data_size": 0 00:22:42.264 } 00:22:42.264 ] 00:22:42.264 }' 00:22:42.264 18:49:42 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:42.264 18:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.847 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:43.165 [2024-07-25 18:49:43.470146] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:43.165 [2024-07-25 18:49:43.470363] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:22:43.165 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:43.165 [2024-07-25 18:49:43.730450] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:43.165 [2024-07-25 18:49:43.732954] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:43.165 [2024-07-25 18:49:43.733140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:43.165 [2024-07-25 18:49:43.733241] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:43.165 [2024-07-25 18:49:43.733304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:43.165 [2024-07-25 18:49:43.733376] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:43.165 [2024-07-25 18:49:43.733422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:43.435 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:43.435 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:43.435 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:43.435 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:43.435 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:43.435 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:43.435 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:43.435 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:43.435 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:43.435 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:43.435 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:43.435 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:43.435 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.435 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:43.435 18:49:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:43.435 "name": "Existed_Raid", 00:22:43.435 "uuid": "eec6024d-dce3-4ffe-be11-a8273771f00a", 00:22:43.435 "strip_size_kb": 64, 00:22:43.435 "state": "configuring", 00:22:43.435 "raid_level": "raid0", 00:22:43.435 "superblock": true, 00:22:43.435 "num_base_bdevs": 4, 00:22:43.435 "num_base_bdevs_discovered": 1, 00:22:43.435 "num_base_bdevs_operational": 4, 00:22:43.435 "base_bdevs_list": [ 00:22:43.435 { 00:22:43.435 "name": "BaseBdev1", 00:22:43.435 "uuid": "92b70088-8841-4404-a4b8-69bdfe6ac47e", 00:22:43.435 "is_configured": true, 00:22:43.435 "data_offset": 2048, 00:22:43.435 "data_size": 63488 00:22:43.435 }, 00:22:43.435 { 00:22:43.435 "name": "BaseBdev2", 00:22:43.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.435 "is_configured": false, 00:22:43.435 "data_offset": 0, 00:22:43.435 "data_size": 0 00:22:43.435 }, 00:22:43.435 { 00:22:43.435 "name": "BaseBdev3", 00:22:43.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.435 "is_configured": false, 00:22:43.435 "data_offset": 0, 00:22:43.435 "data_size": 0 00:22:43.435 }, 00:22:43.435 { 00:22:43.435 "name": "BaseBdev4", 00:22:43.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.435 "is_configured": false, 00:22:43.435 "data_offset": 0, 00:22:43.435 "data_size": 0 00:22:43.435 } 00:22:43.435 ] 00:22:43.435 }' 00:22:43.435 18:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:43.435 18:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.002 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:44.261 [2024-07-25 18:49:44.696127] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:44.261 BaseBdev2 00:22:44.261 18:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:44.261 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:22:44.261 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:44.261 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:44.261 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:44.261 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:44.261 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:44.519 18:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:44.777 [ 00:22:44.777 { 00:22:44.777 "name": "BaseBdev2", 00:22:44.777 "aliases": [ 00:22:44.777 "1b45c58a-6383-49c5-87d5-f03ade13e1a4" 00:22:44.777 ], 00:22:44.777 "product_name": "Malloc disk", 00:22:44.777 "block_size": 512, 00:22:44.777 "num_blocks": 65536, 00:22:44.777 "uuid": "1b45c58a-6383-49c5-87d5-f03ade13e1a4", 00:22:44.777 "assigned_rate_limits": { 00:22:44.777 "rw_ios_per_sec": 0, 00:22:44.777 "rw_mbytes_per_sec": 0, 00:22:44.777 "r_mbytes_per_sec": 0, 00:22:44.777 "w_mbytes_per_sec": 
0 00:22:44.777 }, 00:22:44.777 "claimed": true, 00:22:44.777 "claim_type": "exclusive_write", 00:22:44.777 "zoned": false, 00:22:44.777 "supported_io_types": { 00:22:44.777 "read": true, 00:22:44.777 "write": true, 00:22:44.777 "unmap": true, 00:22:44.777 "flush": true, 00:22:44.777 "reset": true, 00:22:44.777 "nvme_admin": false, 00:22:44.777 "nvme_io": false, 00:22:44.777 "nvme_io_md": false, 00:22:44.777 "write_zeroes": true, 00:22:44.777 "zcopy": true, 00:22:44.777 "get_zone_info": false, 00:22:44.777 "zone_management": false, 00:22:44.777 "zone_append": false, 00:22:44.777 "compare": false, 00:22:44.777 "compare_and_write": false, 00:22:44.777 "abort": true, 00:22:44.777 "seek_hole": false, 00:22:44.777 "seek_data": false, 00:22:44.777 "copy": true, 00:22:44.777 "nvme_iov_md": false 00:22:44.777 }, 00:22:44.777 "memory_domains": [ 00:22:44.777 { 00:22:44.777 "dma_device_id": "system", 00:22:44.777 "dma_device_type": 1 00:22:44.777 }, 00:22:44.777 { 00:22:44.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.777 "dma_device_type": 2 00:22:44.777 } 00:22:44.777 ], 00:22:44.777 "driver_specific": {} 00:22:44.777 } 00:22:44.777 ] 00:22:44.777 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:44.777 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:44.777 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:44.777 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:44.777 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:44.777 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:44.777 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:44.777 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:44.777 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:44.777 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:44.777 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:44.777 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:44.777 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:44.777 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.777 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:45.035 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:45.035 "name": "Existed_Raid", 00:22:45.035 "uuid": "eec6024d-dce3-4ffe-be11-a8273771f00a", 00:22:45.035 "strip_size_kb": 64, 00:22:45.035 "state": "configuring", 00:22:45.035 "raid_level": "raid0", 00:22:45.035 "superblock": true, 00:22:45.035 "num_base_bdevs": 4, 00:22:45.035 "num_base_bdevs_discovered": 2, 00:22:45.035 "num_base_bdevs_operational": 4, 00:22:45.035 "base_bdevs_list": [ 00:22:45.035 { 00:22:45.035 "name": "BaseBdev1", 00:22:45.035 
"uuid": "92b70088-8841-4404-a4b8-69bdfe6ac47e", 00:22:45.035 "is_configured": true, 00:22:45.035 "data_offset": 2048, 00:22:45.035 "data_size": 63488 00:22:45.035 }, 00:22:45.035 { 00:22:45.035 "name": "BaseBdev2", 00:22:45.035 "uuid": "1b45c58a-6383-49c5-87d5-f03ade13e1a4", 00:22:45.035 "is_configured": true, 00:22:45.035 "data_offset": 2048, 00:22:45.035 "data_size": 63488 00:22:45.035 }, 00:22:45.035 { 00:22:45.035 "name": "BaseBdev3", 00:22:45.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.035 "is_configured": false, 00:22:45.035 "data_offset": 0, 00:22:45.035 "data_size": 0 00:22:45.035 }, 00:22:45.035 { 00:22:45.035 "name": "BaseBdev4", 00:22:45.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.035 "is_configured": false, 00:22:45.035 "data_offset": 0, 00:22:45.035 "data_size": 0 00:22:45.035 } 00:22:45.035 ] 00:22:45.035 }' 00:22:45.035 18:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:45.036 18:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.602 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:45.860 [2024-07-25 18:49:46.416797] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:45.860 BaseBdev3 00:22:45.860 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:45.860 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:22:45.860 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:45.860 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:45.860 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:45.860 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:46.118 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:46.118 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:46.376 [ 00:22:46.376 { 00:22:46.376 "name": "BaseBdev3", 00:22:46.376 "aliases": [ 00:22:46.376 "2aa3b080-db3f-48d4-9590-da4bf553a90c" 00:22:46.376 ], 00:22:46.376 "product_name": "Malloc disk", 00:22:46.376 "block_size": 512, 00:22:46.376 "num_blocks": 65536, 00:22:46.376 "uuid": "2aa3b080-db3f-48d4-9590-da4bf553a90c", 00:22:46.376 "assigned_rate_limits": { 00:22:46.376 "rw_ios_per_sec": 0, 00:22:46.376 "rw_mbytes_per_sec": 0, 00:22:46.376 "r_mbytes_per_sec": 0, 00:22:46.376 "w_mbytes_per_sec": 0 00:22:46.376 }, 00:22:46.376 "claimed": true, 00:22:46.376 "claim_type": "exclusive_write", 00:22:46.376 "zoned": false, 00:22:46.376 "supported_io_types": { 00:22:46.376 "read": true, 00:22:46.376 "write": true, 00:22:46.376 "unmap": true, 00:22:46.376 "flush": true, 00:22:46.376 "reset": true, 00:22:46.376 "nvme_admin": false, 00:22:46.376 "nvme_io": false, 00:22:46.376 "nvme_io_md": false, 00:22:46.376 "write_zeroes": true, 00:22:46.376 "zcopy": true, 00:22:46.376 "get_zone_info": false, 00:22:46.376 "zone_management": false, 
00:22:46.376 "zone_append": false, 00:22:46.376 "compare": false, 00:22:46.376 "compare_and_write": false, 00:22:46.376 "abort": true, 00:22:46.376 "seek_hole": false, 00:22:46.377 "seek_data": false, 00:22:46.377 "copy": true, 00:22:46.377 "nvme_iov_md": false 00:22:46.377 }, 00:22:46.377 "memory_domains": [ 00:22:46.377 { 00:22:46.377 "dma_device_id": "system", 00:22:46.377 "dma_device_type": 1 00:22:46.377 }, 00:22:46.377 { 00:22:46.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:46.377 "dma_device_type": 2 00:22:46.377 } 00:22:46.377 ], 00:22:46.377 "driver_specific": {} 00:22:46.377 } 00:22:46.377 ] 00:22:46.377 18:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:46.377 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:46.377 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:46.377 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:46.377 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:46.377 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:46.377 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:46.377 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:46.377 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:46.377 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:46.377 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:46.377 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:46.377 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:46.377 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:46.377 18:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.635 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:46.635 "name": "Existed_Raid", 00:22:46.635 "uuid": "eec6024d-dce3-4ffe-be11-a8273771f00a", 00:22:46.635 "strip_size_kb": 64, 00:22:46.635 "state": "configuring", 00:22:46.635 "raid_level": "raid0", 00:22:46.635 "superblock": true, 00:22:46.635 "num_base_bdevs": 4, 00:22:46.635 "num_base_bdevs_discovered": 3, 00:22:46.635 "num_base_bdevs_operational": 4, 00:22:46.635 "base_bdevs_list": [ 00:22:46.635 { 00:22:46.635 "name": "BaseBdev1", 00:22:46.635 "uuid": "92b70088-8841-4404-a4b8-69bdfe6ac47e", 00:22:46.635 "is_configured": true, 00:22:46.635 "data_offset": 2048, 00:22:46.635 "data_size": 63488 00:22:46.635 }, 00:22:46.635 { 00:22:46.635 "name": "BaseBdev2", 00:22:46.635 "uuid": "1b45c58a-6383-49c5-87d5-f03ade13e1a4", 00:22:46.635 "is_configured": true, 00:22:46.635 "data_offset": 2048, 00:22:46.635 "data_size": 63488 00:22:46.635 }, 00:22:46.635 { 00:22:46.635 "name": "BaseBdev3", 00:22:46.635 "uuid": "2aa3b080-db3f-48d4-9590-da4bf553a90c", 00:22:46.635 "is_configured": true, 
00:22:46.635 "data_offset": 2048, 00:22:46.635 "data_size": 63488 00:22:46.635 }, 00:22:46.635 { 00:22:46.635 "name": "BaseBdev4", 00:22:46.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.635 "is_configured": false, 00:22:46.635 "data_offset": 0, 00:22:46.635 "data_size": 0 00:22:46.635 } 00:22:46.635 ] 00:22:46.635 }' 00:22:46.635 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:46.635 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.202 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:47.461 [2024-07-25 18:49:47.901655] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:47.461 [2024-07-25 18:49:47.902240] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:22:47.461 [2024-07-25 18:49:47.902358] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:47.461 [2024-07-25 18:49:47.902532] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:22:47.461 [2024-07-25 18:49:47.902932] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:22:47.461 [2024-07-25 18:49:47.902973] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:22:47.461 [2024-07-25 18:49:47.903299] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:47.461 BaseBdev4 00:22:47.461 18:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:22:47.461 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:22:47.461 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:47.461 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:47.461 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:47.461 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:47.461 18:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:47.720 18:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:47.978 [ 00:22:47.978 { 00:22:47.978 "name": "BaseBdev4", 00:22:47.978 "aliases": [ 00:22:47.978 "b051ad73-bbd9-4b37-80dd-146d0b02ec1a" 00:22:47.978 ], 00:22:47.978 "product_name": "Malloc disk", 00:22:47.978 "block_size": 512, 00:22:47.978 "num_blocks": 65536, 00:22:47.978 "uuid": "b051ad73-bbd9-4b37-80dd-146d0b02ec1a", 00:22:47.978 "assigned_rate_limits": { 00:22:47.978 "rw_ios_per_sec": 0, 00:22:47.978 "rw_mbytes_per_sec": 0, 00:22:47.978 "r_mbytes_per_sec": 0, 00:22:47.978 "w_mbytes_per_sec": 0 00:22:47.978 }, 00:22:47.978 "claimed": true, 00:22:47.978 "claim_type": "exclusive_write", 00:22:47.978 "zoned": false, 00:22:47.978 "supported_io_types": { 00:22:47.978 "read": true, 00:22:47.978 "write": true, 00:22:47.978 "unmap": true, 00:22:47.978 "flush": true, 00:22:47.978 "reset": 
true, 00:22:47.978 "nvme_admin": false, 00:22:47.978 "nvme_io": false, 00:22:47.978 "nvme_io_md": false, 00:22:47.978 "write_zeroes": true, 00:22:47.978 "zcopy": true, 00:22:47.978 "get_zone_info": false, 00:22:47.978 "zone_management": false, 00:22:47.978 "zone_append": false, 00:22:47.978 "compare": false, 00:22:47.978 "compare_and_write": false, 00:22:47.978 "abort": true, 00:22:47.978 "seek_hole": false, 00:22:47.978 "seek_data": false, 00:22:47.978 "copy": true, 00:22:47.978 "nvme_iov_md": false 00:22:47.978 }, 00:22:47.978 "memory_domains": [ 00:22:47.978 { 00:22:47.978 "dma_device_id": "system", 00:22:47.978 "dma_device_type": 1 00:22:47.978 }, 00:22:47.978 { 00:22:47.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.978 "dma_device_type": 2 00:22:47.978 } 00:22:47.978 ], 00:22:47.978 "driver_specific": {} 00:22:47.978 } 00:22:47.978 ] 00:22:47.978 18:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:47.978 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:47.978 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:47.978 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:47.978 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:47.978 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:47.978 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:47.978 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:47.978 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:47.978 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:47.978 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:47.978 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:47.979 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:47.979 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.979 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:48.237 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:48.237 "name": "Existed_Raid", 00:22:48.237 "uuid": "eec6024d-dce3-4ffe-be11-a8273771f00a", 00:22:48.237 "strip_size_kb": 64, 00:22:48.237 "state": "online", 00:22:48.237 "raid_level": "raid0", 00:22:48.237 "superblock": true, 00:22:48.237 "num_base_bdevs": 4, 00:22:48.237 "num_base_bdevs_discovered": 4, 00:22:48.237 "num_base_bdevs_operational": 4, 00:22:48.237 "base_bdevs_list": [ 00:22:48.237 { 00:22:48.237 "name": "BaseBdev1", 00:22:48.237 "uuid": "92b70088-8841-4404-a4b8-69bdfe6ac47e", 00:22:48.237 "is_configured": true, 00:22:48.237 "data_offset": 2048, 00:22:48.237 "data_size": 63488 00:22:48.237 }, 00:22:48.237 { 00:22:48.237 "name": "BaseBdev2", 00:22:48.237 "uuid": "1b45c58a-6383-49c5-87d5-f03ade13e1a4", 00:22:48.237 "is_configured": true, 
00:22:48.237 "data_offset": 2048, 00:22:48.237 "data_size": 63488 00:22:48.237 }, 00:22:48.237 { 00:22:48.237 "name": "BaseBdev3", 00:22:48.237 "uuid": "2aa3b080-db3f-48d4-9590-da4bf553a90c", 00:22:48.237 "is_configured": true, 00:22:48.237 "data_offset": 2048, 00:22:48.237 "data_size": 63488 00:22:48.237 }, 00:22:48.237 { 00:22:48.237 "name": "BaseBdev4", 00:22:48.237 "uuid": "b051ad73-bbd9-4b37-80dd-146d0b02ec1a", 00:22:48.237 "is_configured": true, 00:22:48.237 "data_offset": 2048, 00:22:48.237 "data_size": 63488 00:22:48.237 } 00:22:48.237 ] 00:22:48.237 }' 00:22:48.237 18:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:48.237 18:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.805 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:48.805 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:48.805 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:48.805 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:48.805 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:48.805 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:48.805 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:48.805 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:49.064 [2024-07-25 18:49:49.454278] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:49.064 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:49.064 "name": "Existed_Raid", 00:22:49.064 "aliases": [ 00:22:49.064 "eec6024d-dce3-4ffe-be11-a8273771f00a" 00:22:49.064 ], 00:22:49.064 "product_name": "Raid Volume", 00:22:49.064 "block_size": 512, 00:22:49.064 "num_blocks": 253952, 00:22:49.064 "uuid": "eec6024d-dce3-4ffe-be11-a8273771f00a", 00:22:49.064 "assigned_rate_limits": { 00:22:49.064 "rw_ios_per_sec": 0, 00:22:49.064 "rw_mbytes_per_sec": 0, 00:22:49.064 "r_mbytes_per_sec": 0, 00:22:49.064 "w_mbytes_per_sec": 0 00:22:49.064 }, 00:22:49.064 "claimed": false, 00:22:49.064 "zoned": false, 00:22:49.064 "supported_io_types": { 00:22:49.064 "read": true, 00:22:49.064 "write": true, 00:22:49.064 "unmap": true, 00:22:49.064 "flush": true, 00:22:49.064 "reset": true, 00:22:49.064 "nvme_admin": false, 00:22:49.064 "nvme_io": false, 00:22:49.064 "nvme_io_md": false, 00:22:49.064 "write_zeroes": true, 00:22:49.064 "zcopy": false, 00:22:49.064 "get_zone_info": false, 00:22:49.064 "zone_management": false, 00:22:49.064 "zone_append": false, 00:22:49.064 "compare": false, 00:22:49.064 "compare_and_write": false, 00:22:49.064 "abort": false, 00:22:49.064 "seek_hole": false, 00:22:49.064 "seek_data": false, 00:22:49.064 "copy": false, 00:22:49.064 "nvme_iov_md": false 00:22:49.064 }, 00:22:49.064 "memory_domains": [ 00:22:49.064 { 00:22:49.064 "dma_device_id": "system", 00:22:49.064 "dma_device_type": 1 00:22:49.064 }, 00:22:49.064 { 00:22:49.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:49.064 "dma_device_type": 2 00:22:49.064 }, 00:22:49.064 { 00:22:49.064 "dma_device_id": "system", 
00:22:49.064 "dma_device_type": 1 00:22:49.064 }, 00:22:49.064 { 00:22:49.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:49.064 "dma_device_type": 2 00:22:49.064 }, 00:22:49.064 { 00:22:49.064 "dma_device_id": "system", 00:22:49.064 "dma_device_type": 1 00:22:49.064 }, 00:22:49.064 { 00:22:49.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:49.064 "dma_device_type": 2 00:22:49.064 }, 00:22:49.064 { 00:22:49.064 "dma_device_id": "system", 00:22:49.064 "dma_device_type": 1 00:22:49.064 }, 00:22:49.064 { 00:22:49.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:49.064 "dma_device_type": 2 00:22:49.064 } 00:22:49.064 ], 00:22:49.064 "driver_specific": { 00:22:49.064 "raid": { 00:22:49.064 "uuid": "eec6024d-dce3-4ffe-be11-a8273771f00a", 00:22:49.064 "strip_size_kb": 64, 00:22:49.064 "state": "online", 00:22:49.064 "raid_level": "raid0", 00:22:49.064 "superblock": true, 00:22:49.064 "num_base_bdevs": 4, 00:22:49.064 "num_base_bdevs_discovered": 4, 00:22:49.064 "num_base_bdevs_operational": 4, 00:22:49.064 "base_bdevs_list": [ 00:22:49.064 { 00:22:49.064 "name": "BaseBdev1", 00:22:49.064 "uuid": "92b70088-8841-4404-a4b8-69bdfe6ac47e", 00:22:49.064 "is_configured": true, 00:22:49.064 "data_offset": 2048, 00:22:49.064 "data_size": 63488 00:22:49.064 }, 00:22:49.064 { 00:22:49.064 "name": "BaseBdev2", 00:22:49.064 "uuid": "1b45c58a-6383-49c5-87d5-f03ade13e1a4", 00:22:49.064 "is_configured": true, 00:22:49.064 "data_offset": 2048, 00:22:49.064 "data_size": 63488 00:22:49.064 }, 00:22:49.064 { 00:22:49.064 "name": "BaseBdev3", 00:22:49.064 "uuid": "2aa3b080-db3f-48d4-9590-da4bf553a90c", 00:22:49.064 "is_configured": true, 00:22:49.064 "data_offset": 2048, 00:22:49.064 "data_size": 63488 00:22:49.064 }, 00:22:49.064 { 00:22:49.064 "name": "BaseBdev4", 00:22:49.064 "uuid": "b051ad73-bbd9-4b37-80dd-146d0b02ec1a", 00:22:49.064 "is_configured": true, 00:22:49.064 "data_offset": 2048, 00:22:49.064 "data_size": 63488 00:22:49.064 } 00:22:49.064 ] 00:22:49.064 } 00:22:49.064 } 00:22:49.064 }' 00:22:49.064 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:49.064 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:49.064 BaseBdev2 00:22:49.064 BaseBdev3 00:22:49.064 BaseBdev4' 00:22:49.064 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:49.064 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:49.064 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:49.324 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:49.324 "name": "BaseBdev1", 00:22:49.324 "aliases": [ 00:22:49.324 "92b70088-8841-4404-a4b8-69bdfe6ac47e" 00:22:49.324 ], 00:22:49.324 "product_name": "Malloc disk", 00:22:49.324 "block_size": 512, 00:22:49.324 "num_blocks": 65536, 00:22:49.324 "uuid": "92b70088-8841-4404-a4b8-69bdfe6ac47e", 00:22:49.324 "assigned_rate_limits": { 00:22:49.324 "rw_ios_per_sec": 0, 00:22:49.324 "rw_mbytes_per_sec": 0, 00:22:49.324 "r_mbytes_per_sec": 0, 00:22:49.324 "w_mbytes_per_sec": 0 00:22:49.324 }, 00:22:49.324 "claimed": true, 00:22:49.324 "claim_type": "exclusive_write", 00:22:49.324 "zoned": false, 00:22:49.324 "supported_io_types": { 00:22:49.324 
"read": true, 00:22:49.324 "write": true, 00:22:49.324 "unmap": true, 00:22:49.324 "flush": true, 00:22:49.324 "reset": true, 00:22:49.324 "nvme_admin": false, 00:22:49.324 "nvme_io": false, 00:22:49.324 "nvme_io_md": false, 00:22:49.324 "write_zeroes": true, 00:22:49.324 "zcopy": true, 00:22:49.324 "get_zone_info": false, 00:22:49.324 "zone_management": false, 00:22:49.324 "zone_append": false, 00:22:49.324 "compare": false, 00:22:49.324 "compare_and_write": false, 00:22:49.324 "abort": true, 00:22:49.324 "seek_hole": false, 00:22:49.324 "seek_data": false, 00:22:49.324 "copy": true, 00:22:49.324 "nvme_iov_md": false 00:22:49.324 }, 00:22:49.324 "memory_domains": [ 00:22:49.324 { 00:22:49.324 "dma_device_id": "system", 00:22:49.324 "dma_device_type": 1 00:22:49.324 }, 00:22:49.324 { 00:22:49.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:49.324 "dma_device_type": 2 00:22:49.324 } 00:22:49.324 ], 00:22:49.324 "driver_specific": {} 00:22:49.324 }' 00:22:49.324 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:49.324 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:49.324 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:49.324 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:49.582 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:49.582 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:49.582 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:49.582 18:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:49.582 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:49.582 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:49.582 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:49.582 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:49.582 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:49.582 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:49.582 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:49.841 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:49.841 "name": "BaseBdev2", 00:22:49.841 "aliases": [ 00:22:49.841 "1b45c58a-6383-49c5-87d5-f03ade13e1a4" 00:22:49.841 ], 00:22:49.841 "product_name": "Malloc disk", 00:22:49.841 "block_size": 512, 00:22:49.841 "num_blocks": 65536, 00:22:49.841 "uuid": "1b45c58a-6383-49c5-87d5-f03ade13e1a4", 00:22:49.841 "assigned_rate_limits": { 00:22:49.841 "rw_ios_per_sec": 0, 00:22:49.841 "rw_mbytes_per_sec": 0, 00:22:49.841 "r_mbytes_per_sec": 0, 00:22:49.841 "w_mbytes_per_sec": 0 00:22:49.841 }, 00:22:49.841 "claimed": true, 00:22:49.841 "claim_type": "exclusive_write", 00:22:49.841 "zoned": false, 00:22:49.841 "supported_io_types": { 00:22:49.841 "read": true, 00:22:49.841 "write": true, 00:22:49.841 "unmap": true, 00:22:49.841 "flush": true, 00:22:49.841 "reset": true, 00:22:49.841 "nvme_admin": 
false, 00:22:49.841 "nvme_io": false, 00:22:49.841 "nvme_io_md": false, 00:22:49.841 "write_zeroes": true, 00:22:49.841 "zcopy": true, 00:22:49.841 "get_zone_info": false, 00:22:49.841 "zone_management": false, 00:22:49.841 "zone_append": false, 00:22:49.841 "compare": false, 00:22:49.841 "compare_and_write": false, 00:22:49.841 "abort": true, 00:22:49.841 "seek_hole": false, 00:22:49.841 "seek_data": false, 00:22:49.841 "copy": true, 00:22:49.841 "nvme_iov_md": false 00:22:49.841 }, 00:22:49.841 "memory_domains": [ 00:22:49.841 { 00:22:49.841 "dma_device_id": "system", 00:22:49.841 "dma_device_type": 1 00:22:49.841 }, 00:22:49.841 { 00:22:49.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:49.841 "dma_device_type": 2 00:22:49.841 } 00:22:49.841 ], 00:22:49.841 "driver_specific": {} 00:22:49.841 }' 00:22:49.841 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:49.841 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:49.841 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:49.841 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:50.102 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:50.102 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:50.102 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:50.102 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:50.102 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:50.102 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:50.102 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:50.102 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:50.102 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:50.102 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:50.102 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:50.361 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:50.361 "name": "BaseBdev3", 00:22:50.361 "aliases": [ 00:22:50.361 "2aa3b080-db3f-48d4-9590-da4bf553a90c" 00:22:50.361 ], 00:22:50.361 "product_name": "Malloc disk", 00:22:50.361 "block_size": 512, 00:22:50.361 "num_blocks": 65536, 00:22:50.361 "uuid": "2aa3b080-db3f-48d4-9590-da4bf553a90c", 00:22:50.361 "assigned_rate_limits": { 00:22:50.361 "rw_ios_per_sec": 0, 00:22:50.361 "rw_mbytes_per_sec": 0, 00:22:50.361 "r_mbytes_per_sec": 0, 00:22:50.361 "w_mbytes_per_sec": 0 00:22:50.361 }, 00:22:50.361 "claimed": true, 00:22:50.361 "claim_type": "exclusive_write", 00:22:50.361 "zoned": false, 00:22:50.361 "supported_io_types": { 00:22:50.361 "read": true, 00:22:50.361 "write": true, 00:22:50.361 "unmap": true, 00:22:50.361 "flush": true, 00:22:50.361 "reset": true, 00:22:50.361 "nvme_admin": false, 00:22:50.361 "nvme_io": false, 00:22:50.361 "nvme_io_md": false, 00:22:50.361 "write_zeroes": true, 00:22:50.361 "zcopy": true, 00:22:50.361 
"get_zone_info": false, 00:22:50.361 "zone_management": false, 00:22:50.361 "zone_append": false, 00:22:50.361 "compare": false, 00:22:50.361 "compare_and_write": false, 00:22:50.361 "abort": true, 00:22:50.361 "seek_hole": false, 00:22:50.361 "seek_data": false, 00:22:50.361 "copy": true, 00:22:50.361 "nvme_iov_md": false 00:22:50.361 }, 00:22:50.361 "memory_domains": [ 00:22:50.361 { 00:22:50.361 "dma_device_id": "system", 00:22:50.361 "dma_device_type": 1 00:22:50.361 }, 00:22:50.361 { 00:22:50.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.361 "dma_device_type": 2 00:22:50.361 } 00:22:50.361 ], 00:22:50.361 "driver_specific": {} 00:22:50.361 }' 00:22:50.361 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:50.361 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:50.620 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:50.620 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:50.620 18:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:50.621 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:50.621 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:50.621 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:50.621 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:50.621 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:50.621 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:50.880 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:50.880 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:50.880 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:50.880 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:51.140 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:51.140 "name": "BaseBdev4", 00:22:51.140 "aliases": [ 00:22:51.140 "b051ad73-bbd9-4b37-80dd-146d0b02ec1a" 00:22:51.140 ], 00:22:51.140 "product_name": "Malloc disk", 00:22:51.140 "block_size": 512, 00:22:51.140 "num_blocks": 65536, 00:22:51.140 "uuid": "b051ad73-bbd9-4b37-80dd-146d0b02ec1a", 00:22:51.140 "assigned_rate_limits": { 00:22:51.140 "rw_ios_per_sec": 0, 00:22:51.140 "rw_mbytes_per_sec": 0, 00:22:51.140 "r_mbytes_per_sec": 0, 00:22:51.140 "w_mbytes_per_sec": 0 00:22:51.140 }, 00:22:51.140 "claimed": true, 00:22:51.140 "claim_type": "exclusive_write", 00:22:51.140 "zoned": false, 00:22:51.140 "supported_io_types": { 00:22:51.140 "read": true, 00:22:51.140 "write": true, 00:22:51.140 "unmap": true, 00:22:51.140 "flush": true, 00:22:51.140 "reset": true, 00:22:51.140 "nvme_admin": false, 00:22:51.140 "nvme_io": false, 00:22:51.140 "nvme_io_md": false, 00:22:51.140 "write_zeroes": true, 00:22:51.140 "zcopy": true, 00:22:51.140 "get_zone_info": false, 00:22:51.140 "zone_management": false, 00:22:51.140 "zone_append": false, 00:22:51.140 "compare": false, 00:22:51.140 
"compare_and_write": false, 00:22:51.140 "abort": true, 00:22:51.140 "seek_hole": false, 00:22:51.140 "seek_data": false, 00:22:51.140 "copy": true, 00:22:51.140 "nvme_iov_md": false 00:22:51.140 }, 00:22:51.140 "memory_domains": [ 00:22:51.140 { 00:22:51.140 "dma_device_id": "system", 00:22:51.140 "dma_device_type": 1 00:22:51.140 }, 00:22:51.140 { 00:22:51.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:51.140 "dma_device_type": 2 00:22:51.140 } 00:22:51.140 ], 00:22:51.140 "driver_specific": {} 00:22:51.140 }' 00:22:51.140 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:51.140 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:51.140 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:51.140 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:51.140 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:51.140 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:51.140 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:51.399 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:51.399 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:51.399 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:51.399 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:51.399 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:51.399 18:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:51.658 [2024-07-25 18:49:52.034456] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:51.658 [2024-07-25 18:49:52.034673] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:51.658 [2024-07-25 18:49:52.034876] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:51.658 18:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:51.658 18:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:22:51.658 18:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:51.658 18:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:22:51.658 18:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:22:51.658 18:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:22:51.658 18:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:51.658 18:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:22:51.658 18:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:51.658 18:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:51.659 18:49:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:51.659 18:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:51.659 18:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:51.659 18:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:51.659 18:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:51.659 18:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.659 18:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:51.918 18:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:51.918 "name": "Existed_Raid", 00:22:51.918 "uuid": "eec6024d-dce3-4ffe-be11-a8273771f00a", 00:22:51.918 "strip_size_kb": 64, 00:22:51.918 "state": "offline", 00:22:51.918 "raid_level": "raid0", 00:22:51.918 "superblock": true, 00:22:51.918 "num_base_bdevs": 4, 00:22:51.918 "num_base_bdevs_discovered": 3, 00:22:51.918 "num_base_bdevs_operational": 3, 00:22:51.918 "base_bdevs_list": [ 00:22:51.918 { 00:22:51.918 "name": null, 00:22:51.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.918 "is_configured": false, 00:22:51.918 "data_offset": 2048, 00:22:51.918 "data_size": 63488 00:22:51.918 }, 00:22:51.918 { 00:22:51.918 "name": "BaseBdev2", 00:22:51.918 "uuid": "1b45c58a-6383-49c5-87d5-f03ade13e1a4", 00:22:51.918 "is_configured": true, 00:22:51.918 "data_offset": 2048, 00:22:51.918 "data_size": 63488 00:22:51.918 }, 00:22:51.918 { 00:22:51.918 "name": "BaseBdev3", 00:22:51.918 "uuid": "2aa3b080-db3f-48d4-9590-da4bf553a90c", 00:22:51.918 "is_configured": true, 00:22:51.918 "data_offset": 2048, 00:22:51.918 "data_size": 63488 00:22:51.918 }, 00:22:51.918 { 00:22:51.918 "name": "BaseBdev4", 00:22:51.918 "uuid": "b051ad73-bbd9-4b37-80dd-146d0b02ec1a", 00:22:51.918 "is_configured": true, 00:22:51.918 "data_offset": 2048, 00:22:51.918 "data_size": 63488 00:22:51.918 } 00:22:51.918 ] 00:22:51.918 }' 00:22:51.918 18:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:51.918 18:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.484 18:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:52.484 18:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:52.484 18:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.484 18:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:52.743 18:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:52.743 18:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:52.743 18:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:53.001 [2024-07-25 18:49:53.418278] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:53.001 18:49:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:53.001 18:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:53.001 18:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.001 18:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:53.259 18:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:53.259 18:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:53.259 18:49:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:53.518 [2024-07-25 18:49:53.945281] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:53.518 18:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:53.518 18:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:53.518 18:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:53.518 18:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.776 18:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:53.776 18:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:53.776 18:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:54.033 [2024-07-25 18:49:54.400536] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:54.033 [2024-07-25 18:49:54.400752] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:22:54.033 18:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:54.033 18:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:54.033 18:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.033 18:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:54.291 18:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:54.291 18:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:54.291 18:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:22:54.291 18:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:54.291 18:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:54.291 18:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:54.291 BaseBdev2 00:22:54.549 18:49:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:54.549 18:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:22:54.549 18:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:54.549 18:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:54.549 18:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:54.549 18:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:54.549 18:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:54.808 18:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:54.808 [ 00:22:54.808 { 00:22:54.808 "name": "BaseBdev2", 00:22:54.808 "aliases": [ 00:22:54.808 "1b052465-2b37-4454-9cf9-85e54349543f" 00:22:54.808 ], 00:22:54.808 "product_name": "Malloc disk", 00:22:54.808 "block_size": 512, 00:22:54.808 "num_blocks": 65536, 00:22:54.808 "uuid": "1b052465-2b37-4454-9cf9-85e54349543f", 00:22:54.808 "assigned_rate_limits": { 00:22:54.808 "rw_ios_per_sec": 0, 00:22:54.808 "rw_mbytes_per_sec": 0, 00:22:54.808 "r_mbytes_per_sec": 0, 00:22:54.808 "w_mbytes_per_sec": 0 00:22:54.808 }, 00:22:54.808 "claimed": false, 00:22:54.808 "zoned": false, 00:22:54.808 "supported_io_types": { 00:22:54.808 "read": true, 00:22:54.808 "write": true, 00:22:54.808 "unmap": true, 00:22:54.808 "flush": true, 00:22:54.808 "reset": true, 00:22:54.808 "nvme_admin": false, 00:22:54.808 "nvme_io": false, 00:22:54.808 "nvme_io_md": false, 00:22:54.808 "write_zeroes": true, 00:22:54.808 "zcopy": true, 00:22:54.808 "get_zone_info": false, 00:22:54.808 "zone_management": false, 00:22:54.808 "zone_append": false, 00:22:54.808 "compare": false, 00:22:54.808 "compare_and_write": false, 00:22:54.808 "abort": true, 00:22:54.808 "seek_hole": false, 00:22:54.808 "seek_data": false, 00:22:54.808 "copy": true, 00:22:54.808 "nvme_iov_md": false 00:22:54.808 }, 00:22:54.808 "memory_domains": [ 00:22:54.808 { 00:22:54.808 "dma_device_id": "system", 00:22:54.808 "dma_device_type": 1 00:22:54.808 }, 00:22:54.808 { 00:22:54.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.808 "dma_device_type": 2 00:22:54.808 } 00:22:54.808 ], 00:22:54.808 "driver_specific": {} 00:22:54.808 } 00:22:54.808 ] 00:22:54.808 18:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:54.808 18:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:54.808 18:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:54.808 18:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:55.066 BaseBdev3 00:22:55.066 18:49:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:55.066 18:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:22:55.066 18:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
local bdev_timeout= 00:22:55.066 18:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:55.066 18:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:55.066 18:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:55.066 18:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:55.324 18:49:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:55.582 [ 00:22:55.582 { 00:22:55.582 "name": "BaseBdev3", 00:22:55.582 "aliases": [ 00:22:55.582 "298c9384-eae2-43f5-ae8b-3bf482e70728" 00:22:55.582 ], 00:22:55.582 "product_name": "Malloc disk", 00:22:55.582 "block_size": 512, 00:22:55.582 "num_blocks": 65536, 00:22:55.582 "uuid": "298c9384-eae2-43f5-ae8b-3bf482e70728", 00:22:55.582 "assigned_rate_limits": { 00:22:55.582 "rw_ios_per_sec": 0, 00:22:55.582 "rw_mbytes_per_sec": 0, 00:22:55.582 "r_mbytes_per_sec": 0, 00:22:55.582 "w_mbytes_per_sec": 0 00:22:55.582 }, 00:22:55.582 "claimed": false, 00:22:55.582 "zoned": false, 00:22:55.582 "supported_io_types": { 00:22:55.582 "read": true, 00:22:55.582 "write": true, 00:22:55.582 "unmap": true, 00:22:55.582 "flush": true, 00:22:55.582 "reset": true, 00:22:55.582 "nvme_admin": false, 00:22:55.582 "nvme_io": false, 00:22:55.582 "nvme_io_md": false, 00:22:55.582 "write_zeroes": true, 00:22:55.582 "zcopy": true, 00:22:55.582 "get_zone_info": false, 00:22:55.582 "zone_management": false, 00:22:55.582 "zone_append": false, 00:22:55.582 "compare": false, 00:22:55.582 "compare_and_write": false, 00:22:55.582 "abort": true, 00:22:55.582 "seek_hole": false, 00:22:55.582 "seek_data": false, 00:22:55.582 "copy": true, 00:22:55.582 "nvme_iov_md": false 00:22:55.582 }, 00:22:55.582 "memory_domains": [ 00:22:55.582 { 00:22:55.582 "dma_device_id": "system", 00:22:55.582 "dma_device_type": 1 00:22:55.582 }, 00:22:55.582 { 00:22:55.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:55.582 "dma_device_type": 2 00:22:55.582 } 00:22:55.582 ], 00:22:55.582 "driver_specific": {} 00:22:55.582 } 00:22:55.582 ] 00:22:55.582 18:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:55.582 18:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:55.582 18:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:55.582 18:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:55.840 BaseBdev4 00:22:55.840 18:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:22:55.840 18:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:22:55.840 18:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:55.840 18:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:55.840 18:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:55.840 18:49:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:55.840 18:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:56.097 18:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:56.097 [ 00:22:56.097 { 00:22:56.097 "name": "BaseBdev4", 00:22:56.097 "aliases": [ 00:22:56.097 "d5a336a5-14e8-4c83-b40e-5b78412d4d20" 00:22:56.097 ], 00:22:56.097 "product_name": "Malloc disk", 00:22:56.097 "block_size": 512, 00:22:56.097 "num_blocks": 65536, 00:22:56.097 "uuid": "d5a336a5-14e8-4c83-b40e-5b78412d4d20", 00:22:56.097 "assigned_rate_limits": { 00:22:56.097 "rw_ios_per_sec": 0, 00:22:56.097 "rw_mbytes_per_sec": 0, 00:22:56.097 "r_mbytes_per_sec": 0, 00:22:56.097 "w_mbytes_per_sec": 0 00:22:56.097 }, 00:22:56.097 "claimed": false, 00:22:56.097 "zoned": false, 00:22:56.097 "supported_io_types": { 00:22:56.097 "read": true, 00:22:56.097 "write": true, 00:22:56.097 "unmap": true, 00:22:56.097 "flush": true, 00:22:56.097 "reset": true, 00:22:56.097 "nvme_admin": false, 00:22:56.097 "nvme_io": false, 00:22:56.097 "nvme_io_md": false, 00:22:56.097 "write_zeroes": true, 00:22:56.097 "zcopy": true, 00:22:56.097 "get_zone_info": false, 00:22:56.097 "zone_management": false, 00:22:56.097 "zone_append": false, 00:22:56.097 "compare": false, 00:22:56.097 "compare_and_write": false, 00:22:56.097 "abort": true, 00:22:56.097 "seek_hole": false, 00:22:56.097 "seek_data": false, 00:22:56.097 "copy": true, 00:22:56.097 "nvme_iov_md": false 00:22:56.097 }, 00:22:56.097 "memory_domains": [ 00:22:56.097 { 00:22:56.097 "dma_device_id": "system", 00:22:56.097 "dma_device_type": 1 00:22:56.097 }, 00:22:56.097 { 00:22:56.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:56.097 "dma_device_type": 2 00:22:56.097 } 00:22:56.097 ], 00:22:56.097 "driver_specific": {} 00:22:56.097 } 00:22:56.097 ] 00:22:56.097 18:49:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:56.097 18:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:56.097 18:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:56.097 18:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:56.355 [2024-07-25 18:49:56.875362] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:56.355 [2024-07-25 18:49:56.875582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:56.355 [2024-07-25 18:49:56.875704] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:56.355 [2024-07-25 18:49:56.877756] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:56.355 [2024-07-25 18:49:56.877968] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:56.355 18:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:56.355 18:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:22:56.355 18:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:56.355 18:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:56.355 18:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:56.355 18:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:56.355 18:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:56.355 18:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:56.355 18:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:56.355 18:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:56.355 18:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.355 18:49:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:56.613 18:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:56.613 "name": "Existed_Raid", 00:22:56.613 "uuid": "3d434c7b-0506-4f16-835c-75aff9b30b35", 00:22:56.613 "strip_size_kb": 64, 00:22:56.613 "state": "configuring", 00:22:56.613 "raid_level": "raid0", 00:22:56.613 "superblock": true, 00:22:56.613 "num_base_bdevs": 4, 00:22:56.613 "num_base_bdevs_discovered": 3, 00:22:56.613 "num_base_bdevs_operational": 4, 00:22:56.613 "base_bdevs_list": [ 00:22:56.613 { 00:22:56.613 "name": "BaseBdev1", 00:22:56.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.613 "is_configured": false, 00:22:56.613 "data_offset": 0, 00:22:56.613 "data_size": 0 00:22:56.613 }, 00:22:56.613 { 00:22:56.613 "name": "BaseBdev2", 00:22:56.613 "uuid": "1b052465-2b37-4454-9cf9-85e54349543f", 00:22:56.613 "is_configured": true, 00:22:56.613 "data_offset": 2048, 00:22:56.613 "data_size": 63488 00:22:56.613 }, 00:22:56.613 { 00:22:56.613 "name": "BaseBdev3", 00:22:56.613 "uuid": "298c9384-eae2-43f5-ae8b-3bf482e70728", 00:22:56.614 "is_configured": true, 00:22:56.614 "data_offset": 2048, 00:22:56.614 "data_size": 63488 00:22:56.614 }, 00:22:56.614 { 00:22:56.614 "name": "BaseBdev4", 00:22:56.614 "uuid": "d5a336a5-14e8-4c83-b40e-5b78412d4d20", 00:22:56.614 "is_configured": true, 00:22:56.614 "data_offset": 2048, 00:22:56.614 "data_size": 63488 00:22:56.614 } 00:22:56.614 ] 00:22:56.614 }' 00:22:56.614 18:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:56.614 18:49:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.199 18:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:57.469 [2024-07-25 18:49:57.915514] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:57.469 18:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:57.469 18:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:57.469 18:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:22:57.469 18:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:57.469 18:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:57.469 18:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:57.469 18:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:57.469 18:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:57.469 18:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:57.469 18:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:57.469 18:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.469 18:49:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:57.727 18:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:57.727 "name": "Existed_Raid", 00:22:57.727 "uuid": "3d434c7b-0506-4f16-835c-75aff9b30b35", 00:22:57.727 "strip_size_kb": 64, 00:22:57.727 "state": "configuring", 00:22:57.727 "raid_level": "raid0", 00:22:57.727 "superblock": true, 00:22:57.727 "num_base_bdevs": 4, 00:22:57.727 "num_base_bdevs_discovered": 2, 00:22:57.727 "num_base_bdevs_operational": 4, 00:22:57.727 "base_bdevs_list": [ 00:22:57.727 { 00:22:57.727 "name": "BaseBdev1", 00:22:57.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.727 "is_configured": false, 00:22:57.727 "data_offset": 0, 00:22:57.727 "data_size": 0 00:22:57.727 }, 00:22:57.727 { 00:22:57.727 "name": null, 00:22:57.727 "uuid": "1b052465-2b37-4454-9cf9-85e54349543f", 00:22:57.727 "is_configured": false, 00:22:57.727 "data_offset": 2048, 00:22:57.727 "data_size": 63488 00:22:57.727 }, 00:22:57.727 { 00:22:57.727 "name": "BaseBdev3", 00:22:57.727 "uuid": "298c9384-eae2-43f5-ae8b-3bf482e70728", 00:22:57.727 "is_configured": true, 00:22:57.727 "data_offset": 2048, 00:22:57.727 "data_size": 63488 00:22:57.727 }, 00:22:57.727 { 00:22:57.727 "name": "BaseBdev4", 00:22:57.727 "uuid": "d5a336a5-14e8-4c83-b40e-5b78412d4d20", 00:22:57.727 "is_configured": true, 00:22:57.727 "data_offset": 2048, 00:22:57.727 "data_size": 63488 00:22:57.727 } 00:22:57.727 ] 00:22:57.727 }' 00:22:57.727 18:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:57.727 18:49:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:58.294 18:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.294 18:49:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:58.553 18:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:58.553 18:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:58.811 [2024-07-25 18:49:59.309447] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:22:58.811 BaseBdev1 00:22:58.811 18:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:58.811 18:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:22:58.811 18:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:58.811 18:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:58.811 18:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:58.811 18:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:58.811 18:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:59.070 18:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:59.327 [ 00:22:59.327 { 00:22:59.327 "name": "BaseBdev1", 00:22:59.327 "aliases": [ 00:22:59.327 "8ef4d35d-feef-48f5-96f5-1ff3ca923211" 00:22:59.327 ], 00:22:59.327 "product_name": "Malloc disk", 00:22:59.327 "block_size": 512, 00:22:59.327 "num_blocks": 65536, 00:22:59.327 "uuid": "8ef4d35d-feef-48f5-96f5-1ff3ca923211", 00:22:59.328 "assigned_rate_limits": { 00:22:59.328 "rw_ios_per_sec": 0, 00:22:59.328 "rw_mbytes_per_sec": 0, 00:22:59.328 "r_mbytes_per_sec": 0, 00:22:59.328 "w_mbytes_per_sec": 0 00:22:59.328 }, 00:22:59.328 "claimed": true, 00:22:59.328 "claim_type": "exclusive_write", 00:22:59.328 "zoned": false, 00:22:59.328 "supported_io_types": { 00:22:59.328 "read": true, 00:22:59.328 "write": true, 00:22:59.328 "unmap": true, 00:22:59.328 "flush": true, 00:22:59.328 "reset": true, 00:22:59.328 "nvme_admin": false, 00:22:59.328 "nvme_io": false, 00:22:59.328 "nvme_io_md": false, 00:22:59.328 "write_zeroes": true, 00:22:59.328 "zcopy": true, 00:22:59.328 "get_zone_info": false, 00:22:59.328 "zone_management": false, 00:22:59.328 "zone_append": false, 00:22:59.328 "compare": false, 00:22:59.328 "compare_and_write": false, 00:22:59.328 "abort": true, 00:22:59.328 "seek_hole": false, 00:22:59.328 "seek_data": false, 00:22:59.328 "copy": true, 00:22:59.328 "nvme_iov_md": false 00:22:59.328 }, 00:22:59.328 "memory_domains": [ 00:22:59.328 { 00:22:59.328 "dma_device_id": "system", 00:22:59.328 "dma_device_type": 1 00:22:59.328 }, 00:22:59.328 { 00:22:59.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:59.328 "dma_device_type": 2 00:22:59.328 } 00:22:59.328 ], 00:22:59.328 "driver_specific": {} 00:22:59.328 } 00:22:59.328 ] 00:22:59.328 18:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:59.328 18:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:59.328 18:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:59.328 18:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:59.328 18:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:59.328 18:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:59.328 18:49:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:59.328 18:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:59.328 18:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:59.328 18:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:59.328 18:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:59.328 18:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.328 18:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:59.586 18:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:59.586 "name": "Existed_Raid", 00:22:59.586 "uuid": "3d434c7b-0506-4f16-835c-75aff9b30b35", 00:22:59.586 "strip_size_kb": 64, 00:22:59.586 "state": "configuring", 00:22:59.586 "raid_level": "raid0", 00:22:59.586 "superblock": true, 00:22:59.586 "num_base_bdevs": 4, 00:22:59.586 "num_base_bdevs_discovered": 3, 00:22:59.586 "num_base_bdevs_operational": 4, 00:22:59.586 "base_bdevs_list": [ 00:22:59.586 { 00:22:59.586 "name": "BaseBdev1", 00:22:59.586 "uuid": "8ef4d35d-feef-48f5-96f5-1ff3ca923211", 00:22:59.586 "is_configured": true, 00:22:59.586 "data_offset": 2048, 00:22:59.586 "data_size": 63488 00:22:59.586 }, 00:22:59.586 { 00:22:59.586 "name": null, 00:22:59.586 "uuid": "1b052465-2b37-4454-9cf9-85e54349543f", 00:22:59.586 "is_configured": false, 00:22:59.586 "data_offset": 2048, 00:22:59.586 "data_size": 63488 00:22:59.586 }, 00:22:59.586 { 00:22:59.586 "name": "BaseBdev3", 00:22:59.586 "uuid": "298c9384-eae2-43f5-ae8b-3bf482e70728", 00:22:59.586 "is_configured": true, 00:22:59.586 "data_offset": 2048, 00:22:59.586 "data_size": 63488 00:22:59.586 }, 00:22:59.586 { 00:22:59.586 "name": "BaseBdev4", 00:22:59.586 "uuid": "d5a336a5-14e8-4c83-b40e-5b78412d4d20", 00:22:59.586 "is_configured": true, 00:22:59.586 "data_offset": 2048, 00:22:59.586 "data_size": 63488 00:22:59.586 } 00:22:59.586 ] 00:22:59.586 }' 00:22:59.586 18:49:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:59.586 18:49:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:00.153 18:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.153 18:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:00.153 18:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:23:00.153 18:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:00.412 [2024-07-25 18:50:00.886367] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:00.412 18:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:00.412 18:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:00.412 18:50:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:00.412 18:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:00.412 18:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:00.412 18:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:00.412 18:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:00.412 18:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:00.412 18:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:00.412 18:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:00.412 18:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.412 18:50:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:00.671 18:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:00.671 "name": "Existed_Raid", 00:23:00.671 "uuid": "3d434c7b-0506-4f16-835c-75aff9b30b35", 00:23:00.671 "strip_size_kb": 64, 00:23:00.671 "state": "configuring", 00:23:00.671 "raid_level": "raid0", 00:23:00.671 "superblock": true, 00:23:00.671 "num_base_bdevs": 4, 00:23:00.671 "num_base_bdevs_discovered": 2, 00:23:00.671 "num_base_bdevs_operational": 4, 00:23:00.671 "base_bdevs_list": [ 00:23:00.671 { 00:23:00.671 "name": "BaseBdev1", 00:23:00.671 "uuid": "8ef4d35d-feef-48f5-96f5-1ff3ca923211", 00:23:00.671 "is_configured": true, 00:23:00.671 "data_offset": 2048, 00:23:00.671 "data_size": 63488 00:23:00.671 }, 00:23:00.671 { 00:23:00.671 "name": null, 00:23:00.671 "uuid": "1b052465-2b37-4454-9cf9-85e54349543f", 00:23:00.671 "is_configured": false, 00:23:00.671 "data_offset": 2048, 00:23:00.671 "data_size": 63488 00:23:00.671 }, 00:23:00.671 { 00:23:00.671 "name": null, 00:23:00.671 "uuid": "298c9384-eae2-43f5-ae8b-3bf482e70728", 00:23:00.671 "is_configured": false, 00:23:00.671 "data_offset": 2048, 00:23:00.671 "data_size": 63488 00:23:00.671 }, 00:23:00.671 { 00:23:00.671 "name": "BaseBdev4", 00:23:00.671 "uuid": "d5a336a5-14e8-4c83-b40e-5b78412d4d20", 00:23:00.671 "is_configured": true, 00:23:00.671 "data_offset": 2048, 00:23:00.671 "data_size": 63488 00:23:00.671 } 00:23:00.671 ] 00:23:00.671 }' 00:23:00.671 18:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:00.671 18:50:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:01.238 18:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.238 18:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:01.497 18:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:23:01.497 18:50:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:01.755 [2024-07-25 18:50:02.210562] bdev_raid.c:3312:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:23:01.755 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:01.755 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:01.755 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:01.755 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:01.755 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:01.755 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:01.755 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:01.755 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:01.755 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:01.755 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:01.755 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.755 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:02.014 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:02.014 "name": "Existed_Raid", 00:23:02.014 "uuid": "3d434c7b-0506-4f16-835c-75aff9b30b35", 00:23:02.014 "strip_size_kb": 64, 00:23:02.014 "state": "configuring", 00:23:02.014 "raid_level": "raid0", 00:23:02.014 "superblock": true, 00:23:02.014 "num_base_bdevs": 4, 00:23:02.014 "num_base_bdevs_discovered": 3, 00:23:02.014 "num_base_bdevs_operational": 4, 00:23:02.014 "base_bdevs_list": [ 00:23:02.014 { 00:23:02.014 "name": "BaseBdev1", 00:23:02.014 "uuid": "8ef4d35d-feef-48f5-96f5-1ff3ca923211", 00:23:02.014 "is_configured": true, 00:23:02.014 "data_offset": 2048, 00:23:02.014 "data_size": 63488 00:23:02.014 }, 00:23:02.014 { 00:23:02.014 "name": null, 00:23:02.014 "uuid": "1b052465-2b37-4454-9cf9-85e54349543f", 00:23:02.014 "is_configured": false, 00:23:02.014 "data_offset": 2048, 00:23:02.014 "data_size": 63488 00:23:02.014 }, 00:23:02.014 { 00:23:02.014 "name": "BaseBdev3", 00:23:02.014 "uuid": "298c9384-eae2-43f5-ae8b-3bf482e70728", 00:23:02.014 "is_configured": true, 00:23:02.014 "data_offset": 2048, 00:23:02.014 "data_size": 63488 00:23:02.014 }, 00:23:02.014 { 00:23:02.014 "name": "BaseBdev4", 00:23:02.014 "uuid": "d5a336a5-14e8-4c83-b40e-5b78412d4d20", 00:23:02.014 "is_configured": true, 00:23:02.014 "data_offset": 2048, 00:23:02.014 "data_size": 63488 00:23:02.014 } 00:23:02.014 ] 00:23:02.014 }' 00:23:02.014 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:02.014 18:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:02.582 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.582 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:02.840 18:50:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:23:02.840 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:02.840 [2024-07-25 18:50:03.342811] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:03.099 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:03.099 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:03.099 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:03.099 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:03.099 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:03.099 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:03.099 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:03.099 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:03.099 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:03.099 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:03.099 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.099 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:03.357 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:03.357 "name": "Existed_Raid", 00:23:03.357 "uuid": "3d434c7b-0506-4f16-835c-75aff9b30b35", 00:23:03.357 "strip_size_kb": 64, 00:23:03.357 "state": "configuring", 00:23:03.357 "raid_level": "raid0", 00:23:03.357 "superblock": true, 00:23:03.357 "num_base_bdevs": 4, 00:23:03.357 "num_base_bdevs_discovered": 2, 00:23:03.357 "num_base_bdevs_operational": 4, 00:23:03.357 "base_bdevs_list": [ 00:23:03.357 { 00:23:03.357 "name": null, 00:23:03.357 "uuid": "8ef4d35d-feef-48f5-96f5-1ff3ca923211", 00:23:03.357 "is_configured": false, 00:23:03.357 "data_offset": 2048, 00:23:03.357 "data_size": 63488 00:23:03.357 }, 00:23:03.357 { 00:23:03.357 "name": null, 00:23:03.357 "uuid": "1b052465-2b37-4454-9cf9-85e54349543f", 00:23:03.357 "is_configured": false, 00:23:03.357 "data_offset": 2048, 00:23:03.357 "data_size": 63488 00:23:03.357 }, 00:23:03.357 { 00:23:03.357 "name": "BaseBdev3", 00:23:03.357 "uuid": "298c9384-eae2-43f5-ae8b-3bf482e70728", 00:23:03.357 "is_configured": true, 00:23:03.357 "data_offset": 2048, 00:23:03.357 "data_size": 63488 00:23:03.357 }, 00:23:03.357 { 00:23:03.357 "name": "BaseBdev4", 00:23:03.357 "uuid": "d5a336a5-14e8-4c83-b40e-5b78412d4d20", 00:23:03.357 "is_configured": true, 00:23:03.357 "data_offset": 2048, 00:23:03.357 "data_size": 63488 00:23:03.357 } 00:23:03.357 ] 00:23:03.357 }' 00:23:03.357 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:03.357 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:03.926 
18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.926 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:04.186 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:23:04.186 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:04.186 [2024-07-25 18:50:04.742565] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:04.186 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:04.186 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:04.186 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:04.445 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:04.445 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:04.445 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:04.445 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:04.445 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:04.445 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:04.445 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:04.445 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.445 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:04.445 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:04.445 "name": "Existed_Raid", 00:23:04.445 "uuid": "3d434c7b-0506-4f16-835c-75aff9b30b35", 00:23:04.445 "strip_size_kb": 64, 00:23:04.445 "state": "configuring", 00:23:04.445 "raid_level": "raid0", 00:23:04.445 "superblock": true, 00:23:04.445 "num_base_bdevs": 4, 00:23:04.445 "num_base_bdevs_discovered": 3, 00:23:04.445 "num_base_bdevs_operational": 4, 00:23:04.445 "base_bdevs_list": [ 00:23:04.445 { 00:23:04.445 "name": null, 00:23:04.445 "uuid": "8ef4d35d-feef-48f5-96f5-1ff3ca923211", 00:23:04.445 "is_configured": false, 00:23:04.445 "data_offset": 2048, 00:23:04.445 "data_size": 63488 00:23:04.445 }, 00:23:04.445 { 00:23:04.445 "name": "BaseBdev2", 00:23:04.445 "uuid": "1b052465-2b37-4454-9cf9-85e54349543f", 00:23:04.445 "is_configured": true, 00:23:04.445 "data_offset": 2048, 00:23:04.445 "data_size": 63488 00:23:04.445 }, 00:23:04.445 { 00:23:04.445 "name": "BaseBdev3", 00:23:04.445 "uuid": "298c9384-eae2-43f5-ae8b-3bf482e70728", 00:23:04.445 "is_configured": true, 00:23:04.445 "data_offset": 2048, 00:23:04.445 "data_size": 63488 00:23:04.445 }, 00:23:04.445 { 00:23:04.445 "name": "BaseBdev4", 00:23:04.445 "uuid": 
"d5a336a5-14e8-4c83-b40e-5b78412d4d20", 00:23:04.445 "is_configured": true, 00:23:04.445 "data_offset": 2048, 00:23:04.445 "data_size": 63488 00:23:04.445 } 00:23:04.445 ] 00:23:04.445 }' 00:23:04.445 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:04.445 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.382 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.382 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:05.382 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:23:05.382 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.382 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:05.642 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 8ef4d35d-feef-48f5-96f5-1ff3ca923211 00:23:05.900 [2024-07-25 18:50:06.334194] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:05.900 [2024-07-25 18:50:06.335618] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:23:05.900 [2024-07-25 18:50:06.335686] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:05.900 [2024-07-25 18:50:06.335903] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:05.900 [2024-07-25 18:50:06.336275] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:23:05.901 [2024-07-25 18:50:06.336318] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:23:05.901 [2024-07-25 18:50:06.336543] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:05.901 NewBaseBdev 00:23:05.901 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:23:05.901 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:23:05.901 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:05.901 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:23:05.901 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:05.901 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:05.901 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:06.160 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:06.419 [ 00:23:06.419 { 00:23:06.419 "name": "NewBaseBdev", 00:23:06.419 "aliases": [ 00:23:06.419 "8ef4d35d-feef-48f5-96f5-1ff3ca923211" 
00:23:06.419 ], 00:23:06.419 "product_name": "Malloc disk", 00:23:06.419 "block_size": 512, 00:23:06.419 "num_blocks": 65536, 00:23:06.419 "uuid": "8ef4d35d-feef-48f5-96f5-1ff3ca923211", 00:23:06.419 "assigned_rate_limits": { 00:23:06.419 "rw_ios_per_sec": 0, 00:23:06.419 "rw_mbytes_per_sec": 0, 00:23:06.419 "r_mbytes_per_sec": 0, 00:23:06.419 "w_mbytes_per_sec": 0 00:23:06.419 }, 00:23:06.419 "claimed": true, 00:23:06.419 "claim_type": "exclusive_write", 00:23:06.419 "zoned": false, 00:23:06.419 "supported_io_types": { 00:23:06.419 "read": true, 00:23:06.419 "write": true, 00:23:06.419 "unmap": true, 00:23:06.419 "flush": true, 00:23:06.419 "reset": true, 00:23:06.419 "nvme_admin": false, 00:23:06.419 "nvme_io": false, 00:23:06.419 "nvme_io_md": false, 00:23:06.419 "write_zeroes": true, 00:23:06.419 "zcopy": true, 00:23:06.419 "get_zone_info": false, 00:23:06.419 "zone_management": false, 00:23:06.419 "zone_append": false, 00:23:06.419 "compare": false, 00:23:06.419 "compare_and_write": false, 00:23:06.419 "abort": true, 00:23:06.419 "seek_hole": false, 00:23:06.419 "seek_data": false, 00:23:06.419 "copy": true, 00:23:06.419 "nvme_iov_md": false 00:23:06.419 }, 00:23:06.419 "memory_domains": [ 00:23:06.419 { 00:23:06.419 "dma_device_id": "system", 00:23:06.419 "dma_device_type": 1 00:23:06.419 }, 00:23:06.419 { 00:23:06.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:06.419 "dma_device_type": 2 00:23:06.419 } 00:23:06.419 ], 00:23:06.419 "driver_specific": {} 00:23:06.419 } 00:23:06.419 ] 00:23:06.419 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:23:06.420 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:23:06.420 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:06.420 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:06.420 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:06.420 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:06.420 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:06.420 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:06.420 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:06.420 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:06.420 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:06.420 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.420 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:06.679 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:06.679 "name": "Existed_Raid", 00:23:06.679 "uuid": "3d434c7b-0506-4f16-835c-75aff9b30b35", 00:23:06.679 "strip_size_kb": 64, 00:23:06.679 "state": "online", 00:23:06.679 "raid_level": "raid0", 00:23:06.679 "superblock": true, 00:23:06.679 "num_base_bdevs": 4, 00:23:06.679 "num_base_bdevs_discovered": 4, 
00:23:06.679 "num_base_bdevs_operational": 4, 00:23:06.679 "base_bdevs_list": [ 00:23:06.679 { 00:23:06.679 "name": "NewBaseBdev", 00:23:06.679 "uuid": "8ef4d35d-feef-48f5-96f5-1ff3ca923211", 00:23:06.679 "is_configured": true, 00:23:06.680 "data_offset": 2048, 00:23:06.680 "data_size": 63488 00:23:06.680 }, 00:23:06.680 { 00:23:06.680 "name": "BaseBdev2", 00:23:06.680 "uuid": "1b052465-2b37-4454-9cf9-85e54349543f", 00:23:06.680 "is_configured": true, 00:23:06.680 "data_offset": 2048, 00:23:06.680 "data_size": 63488 00:23:06.680 }, 00:23:06.680 { 00:23:06.680 "name": "BaseBdev3", 00:23:06.680 "uuid": "298c9384-eae2-43f5-ae8b-3bf482e70728", 00:23:06.680 "is_configured": true, 00:23:06.680 "data_offset": 2048, 00:23:06.680 "data_size": 63488 00:23:06.680 }, 00:23:06.680 { 00:23:06.680 "name": "BaseBdev4", 00:23:06.680 "uuid": "d5a336a5-14e8-4c83-b40e-5b78412d4d20", 00:23:06.680 "is_configured": true, 00:23:06.680 "data_offset": 2048, 00:23:06.680 "data_size": 63488 00:23:06.680 } 00:23:06.680 ] 00:23:06.680 }' 00:23:06.680 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:06.680 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.248 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:23:07.248 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:07.248 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:07.248 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:07.248 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:07.248 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:23:07.248 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:07.248 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:07.248 [2024-07-25 18:50:07.738370] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:07.248 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:07.248 "name": "Existed_Raid", 00:23:07.248 "aliases": [ 00:23:07.248 "3d434c7b-0506-4f16-835c-75aff9b30b35" 00:23:07.248 ], 00:23:07.248 "product_name": "Raid Volume", 00:23:07.248 "block_size": 512, 00:23:07.248 "num_blocks": 253952, 00:23:07.248 "uuid": "3d434c7b-0506-4f16-835c-75aff9b30b35", 00:23:07.248 "assigned_rate_limits": { 00:23:07.248 "rw_ios_per_sec": 0, 00:23:07.248 "rw_mbytes_per_sec": 0, 00:23:07.248 "r_mbytes_per_sec": 0, 00:23:07.248 "w_mbytes_per_sec": 0 00:23:07.248 }, 00:23:07.248 "claimed": false, 00:23:07.248 "zoned": false, 00:23:07.248 "supported_io_types": { 00:23:07.248 "read": true, 00:23:07.248 "write": true, 00:23:07.248 "unmap": true, 00:23:07.248 "flush": true, 00:23:07.248 "reset": true, 00:23:07.248 "nvme_admin": false, 00:23:07.248 "nvme_io": false, 00:23:07.248 "nvme_io_md": false, 00:23:07.248 "write_zeroes": true, 00:23:07.248 "zcopy": false, 00:23:07.248 "get_zone_info": false, 00:23:07.248 "zone_management": false, 00:23:07.248 "zone_append": false, 00:23:07.248 "compare": false, 00:23:07.248 "compare_and_write": false, 00:23:07.248 "abort": false, 
00:23:07.248 "seek_hole": false, 00:23:07.248 "seek_data": false, 00:23:07.248 "copy": false, 00:23:07.248 "nvme_iov_md": false 00:23:07.248 }, 00:23:07.248 "memory_domains": [ 00:23:07.248 { 00:23:07.248 "dma_device_id": "system", 00:23:07.248 "dma_device_type": 1 00:23:07.248 }, 00:23:07.248 { 00:23:07.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.248 "dma_device_type": 2 00:23:07.248 }, 00:23:07.248 { 00:23:07.248 "dma_device_id": "system", 00:23:07.248 "dma_device_type": 1 00:23:07.248 }, 00:23:07.248 { 00:23:07.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.248 "dma_device_type": 2 00:23:07.248 }, 00:23:07.248 { 00:23:07.248 "dma_device_id": "system", 00:23:07.248 "dma_device_type": 1 00:23:07.248 }, 00:23:07.248 { 00:23:07.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.248 "dma_device_type": 2 00:23:07.248 }, 00:23:07.248 { 00:23:07.248 "dma_device_id": "system", 00:23:07.248 "dma_device_type": 1 00:23:07.248 }, 00:23:07.248 { 00:23:07.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.248 "dma_device_type": 2 00:23:07.248 } 00:23:07.248 ], 00:23:07.248 "driver_specific": { 00:23:07.248 "raid": { 00:23:07.248 "uuid": "3d434c7b-0506-4f16-835c-75aff9b30b35", 00:23:07.248 "strip_size_kb": 64, 00:23:07.248 "state": "online", 00:23:07.249 "raid_level": "raid0", 00:23:07.249 "superblock": true, 00:23:07.249 "num_base_bdevs": 4, 00:23:07.249 "num_base_bdevs_discovered": 4, 00:23:07.249 "num_base_bdevs_operational": 4, 00:23:07.249 "base_bdevs_list": [ 00:23:07.249 { 00:23:07.249 "name": "NewBaseBdev", 00:23:07.249 "uuid": "8ef4d35d-feef-48f5-96f5-1ff3ca923211", 00:23:07.249 "is_configured": true, 00:23:07.249 "data_offset": 2048, 00:23:07.249 "data_size": 63488 00:23:07.249 }, 00:23:07.249 { 00:23:07.249 "name": "BaseBdev2", 00:23:07.249 "uuid": "1b052465-2b37-4454-9cf9-85e54349543f", 00:23:07.249 "is_configured": true, 00:23:07.249 "data_offset": 2048, 00:23:07.249 "data_size": 63488 00:23:07.249 }, 00:23:07.249 { 00:23:07.249 "name": "BaseBdev3", 00:23:07.249 "uuid": "298c9384-eae2-43f5-ae8b-3bf482e70728", 00:23:07.249 "is_configured": true, 00:23:07.249 "data_offset": 2048, 00:23:07.249 "data_size": 63488 00:23:07.249 }, 00:23:07.249 { 00:23:07.249 "name": "BaseBdev4", 00:23:07.249 "uuid": "d5a336a5-14e8-4c83-b40e-5b78412d4d20", 00:23:07.249 "is_configured": true, 00:23:07.249 "data_offset": 2048, 00:23:07.249 "data_size": 63488 00:23:07.249 } 00:23:07.249 ] 00:23:07.249 } 00:23:07.249 } 00:23:07.249 }' 00:23:07.249 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:07.249 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:23:07.249 BaseBdev2 00:23:07.249 BaseBdev3 00:23:07.249 BaseBdev4' 00:23:07.249 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:07.249 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:23:07.249 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:07.508 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:07.508 "name": "NewBaseBdev", 00:23:07.508 "aliases": [ 00:23:07.508 "8ef4d35d-feef-48f5-96f5-1ff3ca923211" 00:23:07.508 ], 00:23:07.508 "product_name": "Malloc disk", 00:23:07.508 
"block_size": 512, 00:23:07.508 "num_blocks": 65536, 00:23:07.508 "uuid": "8ef4d35d-feef-48f5-96f5-1ff3ca923211", 00:23:07.508 "assigned_rate_limits": { 00:23:07.508 "rw_ios_per_sec": 0, 00:23:07.508 "rw_mbytes_per_sec": 0, 00:23:07.508 "r_mbytes_per_sec": 0, 00:23:07.508 "w_mbytes_per_sec": 0 00:23:07.508 }, 00:23:07.508 "claimed": true, 00:23:07.508 "claim_type": "exclusive_write", 00:23:07.508 "zoned": false, 00:23:07.508 "supported_io_types": { 00:23:07.508 "read": true, 00:23:07.508 "write": true, 00:23:07.508 "unmap": true, 00:23:07.508 "flush": true, 00:23:07.508 "reset": true, 00:23:07.508 "nvme_admin": false, 00:23:07.508 "nvme_io": false, 00:23:07.508 "nvme_io_md": false, 00:23:07.508 "write_zeroes": true, 00:23:07.508 "zcopy": true, 00:23:07.508 "get_zone_info": false, 00:23:07.508 "zone_management": false, 00:23:07.508 "zone_append": false, 00:23:07.508 "compare": false, 00:23:07.508 "compare_and_write": false, 00:23:07.508 "abort": true, 00:23:07.508 "seek_hole": false, 00:23:07.508 "seek_data": false, 00:23:07.508 "copy": true, 00:23:07.508 "nvme_iov_md": false 00:23:07.508 }, 00:23:07.508 "memory_domains": [ 00:23:07.508 { 00:23:07.508 "dma_device_id": "system", 00:23:07.508 "dma_device_type": 1 00:23:07.508 }, 00:23:07.508 { 00:23:07.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.508 "dma_device_type": 2 00:23:07.508 } 00:23:07.508 ], 00:23:07.508 "driver_specific": {} 00:23:07.508 }' 00:23:07.508 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:07.765 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:07.765 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:07.765 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:07.765 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:07.765 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:07.765 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:07.765 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:07.766 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:07.766 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:08.023 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:08.023 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:08.023 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:08.023 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:08.023 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:08.023 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:08.023 "name": "BaseBdev2", 00:23:08.023 "aliases": [ 00:23:08.023 "1b052465-2b37-4454-9cf9-85e54349543f" 00:23:08.023 ], 00:23:08.023 "product_name": "Malloc disk", 00:23:08.023 "block_size": 512, 00:23:08.023 "num_blocks": 65536, 00:23:08.023 "uuid": "1b052465-2b37-4454-9cf9-85e54349543f", 00:23:08.023 "assigned_rate_limits": { 
00:23:08.023 "rw_ios_per_sec": 0, 00:23:08.023 "rw_mbytes_per_sec": 0, 00:23:08.023 "r_mbytes_per_sec": 0, 00:23:08.023 "w_mbytes_per_sec": 0 00:23:08.023 }, 00:23:08.023 "claimed": true, 00:23:08.023 "claim_type": "exclusive_write", 00:23:08.023 "zoned": false, 00:23:08.023 "supported_io_types": { 00:23:08.023 "read": true, 00:23:08.023 "write": true, 00:23:08.023 "unmap": true, 00:23:08.023 "flush": true, 00:23:08.023 "reset": true, 00:23:08.023 "nvme_admin": false, 00:23:08.023 "nvme_io": false, 00:23:08.023 "nvme_io_md": false, 00:23:08.023 "write_zeroes": true, 00:23:08.023 "zcopy": true, 00:23:08.023 "get_zone_info": false, 00:23:08.023 "zone_management": false, 00:23:08.023 "zone_append": false, 00:23:08.023 "compare": false, 00:23:08.023 "compare_and_write": false, 00:23:08.023 "abort": true, 00:23:08.023 "seek_hole": false, 00:23:08.023 "seek_data": false, 00:23:08.023 "copy": true, 00:23:08.023 "nvme_iov_md": false 00:23:08.023 }, 00:23:08.023 "memory_domains": [ 00:23:08.023 { 00:23:08.023 "dma_device_id": "system", 00:23:08.023 "dma_device_type": 1 00:23:08.023 }, 00:23:08.023 { 00:23:08.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.023 "dma_device_type": 2 00:23:08.023 } 00:23:08.023 ], 00:23:08.023 "driver_specific": {} 00:23:08.023 }' 00:23:08.023 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:08.281 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:08.281 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:08.281 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:08.281 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:08.281 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:08.281 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:08.281 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:08.281 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:08.281 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:08.281 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:08.281 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:08.281 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:08.281 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:08.281 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:08.539 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:08.539 "name": "BaseBdev3", 00:23:08.539 "aliases": [ 00:23:08.539 "298c9384-eae2-43f5-ae8b-3bf482e70728" 00:23:08.539 ], 00:23:08.539 "product_name": "Malloc disk", 00:23:08.539 "block_size": 512, 00:23:08.539 "num_blocks": 65536, 00:23:08.539 "uuid": "298c9384-eae2-43f5-ae8b-3bf482e70728", 00:23:08.539 "assigned_rate_limits": { 00:23:08.539 "rw_ios_per_sec": 0, 00:23:08.539 "rw_mbytes_per_sec": 0, 00:23:08.539 "r_mbytes_per_sec": 0, 00:23:08.539 "w_mbytes_per_sec": 0 
00:23:08.539 }, 00:23:08.539 "claimed": true, 00:23:08.539 "claim_type": "exclusive_write", 00:23:08.539 "zoned": false, 00:23:08.539 "supported_io_types": { 00:23:08.539 "read": true, 00:23:08.539 "write": true, 00:23:08.539 "unmap": true, 00:23:08.539 "flush": true, 00:23:08.539 "reset": true, 00:23:08.539 "nvme_admin": false, 00:23:08.539 "nvme_io": false, 00:23:08.539 "nvme_io_md": false, 00:23:08.539 "write_zeroes": true, 00:23:08.539 "zcopy": true, 00:23:08.539 "get_zone_info": false, 00:23:08.539 "zone_management": false, 00:23:08.539 "zone_append": false, 00:23:08.539 "compare": false, 00:23:08.539 "compare_and_write": false, 00:23:08.539 "abort": true, 00:23:08.539 "seek_hole": false, 00:23:08.539 "seek_data": false, 00:23:08.539 "copy": true, 00:23:08.539 "nvme_iov_md": false 00:23:08.539 }, 00:23:08.539 "memory_domains": [ 00:23:08.539 { 00:23:08.539 "dma_device_id": "system", 00:23:08.539 "dma_device_type": 1 00:23:08.539 }, 00:23:08.539 { 00:23:08.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.539 "dma_device_type": 2 00:23:08.539 } 00:23:08.539 ], 00:23:08.539 "driver_specific": {} 00:23:08.539 }' 00:23:08.539 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:08.539 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:08.539 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:08.539 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:08.797 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:08.797 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:08.797 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:08.797 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:08.797 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:08.797 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:08.797 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:09.056 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:09.056 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:09.056 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:09.056 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:09.315 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:09.315 "name": "BaseBdev4", 00:23:09.315 "aliases": [ 00:23:09.315 "d5a336a5-14e8-4c83-b40e-5b78412d4d20" 00:23:09.315 ], 00:23:09.315 "product_name": "Malloc disk", 00:23:09.315 "block_size": 512, 00:23:09.315 "num_blocks": 65536, 00:23:09.315 "uuid": "d5a336a5-14e8-4c83-b40e-5b78412d4d20", 00:23:09.315 "assigned_rate_limits": { 00:23:09.315 "rw_ios_per_sec": 0, 00:23:09.315 "rw_mbytes_per_sec": 0, 00:23:09.315 "r_mbytes_per_sec": 0, 00:23:09.315 "w_mbytes_per_sec": 0 00:23:09.315 }, 00:23:09.315 "claimed": true, 00:23:09.315 "claim_type": "exclusive_write", 00:23:09.315 "zoned": false, 00:23:09.315 
"supported_io_types": { 00:23:09.315 "read": true, 00:23:09.315 "write": true, 00:23:09.315 "unmap": true, 00:23:09.315 "flush": true, 00:23:09.315 "reset": true, 00:23:09.315 "nvme_admin": false, 00:23:09.315 "nvme_io": false, 00:23:09.315 "nvme_io_md": false, 00:23:09.315 "write_zeroes": true, 00:23:09.315 "zcopy": true, 00:23:09.315 "get_zone_info": false, 00:23:09.315 "zone_management": false, 00:23:09.315 "zone_append": false, 00:23:09.315 "compare": false, 00:23:09.315 "compare_and_write": false, 00:23:09.315 "abort": true, 00:23:09.315 "seek_hole": false, 00:23:09.315 "seek_data": false, 00:23:09.315 "copy": true, 00:23:09.315 "nvme_iov_md": false 00:23:09.315 }, 00:23:09.315 "memory_domains": [ 00:23:09.315 { 00:23:09.315 "dma_device_id": "system", 00:23:09.315 "dma_device_type": 1 00:23:09.315 }, 00:23:09.315 { 00:23:09.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.315 "dma_device_type": 2 00:23:09.315 } 00:23:09.315 ], 00:23:09.315 "driver_specific": {} 00:23:09.315 }' 00:23:09.315 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:09.315 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:09.315 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:09.315 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:09.315 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:09.315 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:09.315 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:09.574 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:09.574 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:09.574 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:09.574 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:09.574 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:09.574 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:09.833 [2024-07-25 18:50:10.237376] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:09.833 [2024-07-25 18:50:10.237603] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:09.833 [2024-07-25 18:50:10.237858] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:09.833 [2024-07-25 18:50:10.238021] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:09.833 [2024-07-25 18:50:10.238097] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:23:09.833 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 134771 00:23:09.833 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 134771 ']' 00:23:09.833 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 134771 00:23:09.833 18:50:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@955 -- # uname 00:23:09.833 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:09.833 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 134771 00:23:09.833 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:09.833 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:09.833 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 134771' 00:23:09.833 killing process with pid 134771 00:23:09.834 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 134771 00:23:09.834 [2024-07-25 18:50:10.290989] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:09.834 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 134771 00:23:10.091 [2024-07-25 18:50:10.624337] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:11.475 ************************************ 00:23:11.475 END TEST raid_state_function_test_sb 00:23:11.475 ************************************ 00:23:11.475 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:23:11.475 00:23:11.475 real 0m32.222s 00:23:11.475 user 0m57.540s 00:23:11.475 sys 0m5.568s 00:23:11.475 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:11.475 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.475 18:50:11 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:23:11.475 18:50:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:23:11.475 18:50:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:11.475 18:50:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:11.475 ************************************ 00:23:11.475 START TEST raid_superblock_test 00:23:11.475 ************************************ 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:23:11.475 18:50:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=135855 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 135855 /var/tmp/spdk-raid.sock 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 135855 ']' 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:11.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:11.475 18:50:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.475 [2024-07-25 18:50:11.958361] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:11.475 [2024-07-25 18:50:11.958761] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135855 ] 00:23:11.748 [2024-07-25 18:50:12.129807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.006 [2024-07-25 18:50:12.384406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.006 [2024-07-25 18:50:12.574790] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:12.573 18:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:12.573 18:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:23:12.573 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:23:12.573 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:23:12.573 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:23:12.573 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:23:12.573 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:12.573 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:12.573 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:23:12.573 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:12.573 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:12.831 malloc1 00:23:12.831 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:13.088 [2024-07-25 18:50:13.424024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:13.088 [2024-07-25 18:50:13.424258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:13.088 [2024-07-25 18:50:13.424409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:23:13.088 [2024-07-25 18:50:13.424521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:13.088 [2024-07-25 18:50:13.427245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:13.088 [2024-07-25 18:50:13.427409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:13.088 pt1 00:23:13.088 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:23:13.088 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:23:13.088 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:23:13.088 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:23:13.088 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:13.088 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:23:13.088 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:23:13.088 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:13.088 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:13.345 malloc2 00:23:13.345 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:13.345 [2024-07-25 18:50:13.879439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:13.345 [2024-07-25 18:50:13.879739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:13.345 [2024-07-25 18:50:13.879827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:23:13.345 [2024-07-25 18:50:13.880022] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:13.345 [2024-07-25 18:50:13.882767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:13.345 [2024-07-25 18:50:13.882951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:13.345 pt2 00:23:13.345 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:23:13.345 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:23:13.345 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:23:13.345 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:23:13.345 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:13.345 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:13.345 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:23:13.345 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:13.345 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:13.603 malloc3 00:23:13.603 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:13.861 [2024-07-25 18:50:14.285973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:13.861 [2024-07-25 18:50:14.286198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:13.862 [2024-07-25 18:50:14.286326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:13.862 [2024-07-25 18:50:14.286426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:13.862 [2024-07-25 18:50:14.289017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:13.862 [2024-07-25 18:50:14.289178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:13.862 pt3 00:23:13.862 
18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:23:13.862 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:23:13.862 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:23:13.862 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:23:13.862 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:23:13.862 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:13.862 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:23:13.862 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:13.862 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:23:14.119 malloc4 00:23:14.119 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:14.377 [2024-07-25 18:50:14.760376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:14.377 [2024-07-25 18:50:14.760632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:14.377 [2024-07-25 18:50:14.760784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:14.377 [2024-07-25 18:50:14.760902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:14.377 [2024-07-25 18:50:14.763703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:14.377 [2024-07-25 18:50:14.763866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:14.377 pt4 00:23:14.377 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:23:14.377 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:23:14.377 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:23:14.377 [2024-07-25 18:50:14.944556] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:14.377 [2024-07-25 18:50:14.947059] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:14.377 [2024-07-25 18:50:14.947241] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:14.377 [2024-07-25 18:50:14.947355] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:14.377 [2024-07-25 18:50:14.947569] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:23:14.377 [2024-07-25 18:50:14.947667] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:14.377 [2024-07-25 18:50:14.947862] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:14.377 [2024-07-25 18:50:14.948345] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:23:14.377 [2024-07-25 18:50:14.948448] 
bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:23:14.377 [2024-07-25 18:50:14.948747] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:14.635 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:14.635 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:14.635 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:14.635 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:14.635 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:14.635 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:14.635 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:14.635 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:14.635 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:14.635 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:14.635 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.635 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.635 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:14.636 "name": "raid_bdev1", 00:23:14.636 "uuid": "7f03ac0f-1728-4009-93bb-d9a817e0033d", 00:23:14.636 "strip_size_kb": 64, 00:23:14.636 "state": "online", 00:23:14.636 "raid_level": "raid0", 00:23:14.636 "superblock": true, 00:23:14.636 "num_base_bdevs": 4, 00:23:14.636 "num_base_bdevs_discovered": 4, 00:23:14.636 "num_base_bdevs_operational": 4, 00:23:14.636 "base_bdevs_list": [ 00:23:14.636 { 00:23:14.636 "name": "pt1", 00:23:14.636 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:14.636 "is_configured": true, 00:23:14.636 "data_offset": 2048, 00:23:14.636 "data_size": 63488 00:23:14.636 }, 00:23:14.636 { 00:23:14.636 "name": "pt2", 00:23:14.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:14.636 "is_configured": true, 00:23:14.636 "data_offset": 2048, 00:23:14.636 "data_size": 63488 00:23:14.636 }, 00:23:14.636 { 00:23:14.636 "name": "pt3", 00:23:14.636 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:14.636 "is_configured": true, 00:23:14.636 "data_offset": 2048, 00:23:14.636 "data_size": 63488 00:23:14.636 }, 00:23:14.636 { 00:23:14.636 "name": "pt4", 00:23:14.636 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:14.636 "is_configured": true, 00:23:14.636 "data_offset": 2048, 00:23:14.636 "data_size": 63488 00:23:14.636 } 00:23:14.636 ] 00:23:14.636 }' 00:23:14.636 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:14.636 18:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.201 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:23:15.201 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:15.201 18:50:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:15.201 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:15.201 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:15.201 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:15.201 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:15.201 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:15.460 [2024-07-25 18:50:15.877150] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:15.460 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:15.460 "name": "raid_bdev1", 00:23:15.460 "aliases": [ 00:23:15.460 "7f03ac0f-1728-4009-93bb-d9a817e0033d" 00:23:15.460 ], 00:23:15.460 "product_name": "Raid Volume", 00:23:15.460 "block_size": 512, 00:23:15.460 "num_blocks": 253952, 00:23:15.460 "uuid": "7f03ac0f-1728-4009-93bb-d9a817e0033d", 00:23:15.460 "assigned_rate_limits": { 00:23:15.460 "rw_ios_per_sec": 0, 00:23:15.460 "rw_mbytes_per_sec": 0, 00:23:15.460 "r_mbytes_per_sec": 0, 00:23:15.460 "w_mbytes_per_sec": 0 00:23:15.460 }, 00:23:15.460 "claimed": false, 00:23:15.460 "zoned": false, 00:23:15.460 "supported_io_types": { 00:23:15.460 "read": true, 00:23:15.460 "write": true, 00:23:15.460 "unmap": true, 00:23:15.460 "flush": true, 00:23:15.460 "reset": true, 00:23:15.460 "nvme_admin": false, 00:23:15.460 "nvme_io": false, 00:23:15.460 "nvme_io_md": false, 00:23:15.460 "write_zeroes": true, 00:23:15.460 "zcopy": false, 00:23:15.460 "get_zone_info": false, 00:23:15.460 "zone_management": false, 00:23:15.460 "zone_append": false, 00:23:15.460 "compare": false, 00:23:15.460 "compare_and_write": false, 00:23:15.460 "abort": false, 00:23:15.460 "seek_hole": false, 00:23:15.460 "seek_data": false, 00:23:15.460 "copy": false, 00:23:15.460 "nvme_iov_md": false 00:23:15.460 }, 00:23:15.460 "memory_domains": [ 00:23:15.460 { 00:23:15.460 "dma_device_id": "system", 00:23:15.460 "dma_device_type": 1 00:23:15.460 }, 00:23:15.460 { 00:23:15.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.460 "dma_device_type": 2 00:23:15.460 }, 00:23:15.460 { 00:23:15.460 "dma_device_id": "system", 00:23:15.460 "dma_device_type": 1 00:23:15.460 }, 00:23:15.460 { 00:23:15.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.460 "dma_device_type": 2 00:23:15.460 }, 00:23:15.460 { 00:23:15.460 "dma_device_id": "system", 00:23:15.460 "dma_device_type": 1 00:23:15.460 }, 00:23:15.460 { 00:23:15.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.460 "dma_device_type": 2 00:23:15.460 }, 00:23:15.460 { 00:23:15.460 "dma_device_id": "system", 00:23:15.460 "dma_device_type": 1 00:23:15.460 }, 00:23:15.460 { 00:23:15.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.460 "dma_device_type": 2 00:23:15.460 } 00:23:15.460 ], 00:23:15.460 "driver_specific": { 00:23:15.460 "raid": { 00:23:15.460 "uuid": "7f03ac0f-1728-4009-93bb-d9a817e0033d", 00:23:15.460 "strip_size_kb": 64, 00:23:15.460 "state": "online", 00:23:15.460 "raid_level": "raid0", 00:23:15.460 "superblock": true, 00:23:15.460 "num_base_bdevs": 4, 00:23:15.460 "num_base_bdevs_discovered": 4, 00:23:15.460 "num_base_bdevs_operational": 4, 00:23:15.460 "base_bdevs_list": [ 00:23:15.460 { 00:23:15.460 "name": "pt1", 00:23:15.460 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:23:15.460 "is_configured": true, 00:23:15.460 "data_offset": 2048, 00:23:15.460 "data_size": 63488 00:23:15.460 }, 00:23:15.460 { 00:23:15.460 "name": "pt2", 00:23:15.460 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:15.460 "is_configured": true, 00:23:15.460 "data_offset": 2048, 00:23:15.460 "data_size": 63488 00:23:15.460 }, 00:23:15.460 { 00:23:15.460 "name": "pt3", 00:23:15.460 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:15.460 "is_configured": true, 00:23:15.460 "data_offset": 2048, 00:23:15.460 "data_size": 63488 00:23:15.460 }, 00:23:15.460 { 00:23:15.460 "name": "pt4", 00:23:15.460 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:15.460 "is_configured": true, 00:23:15.460 "data_offset": 2048, 00:23:15.460 "data_size": 63488 00:23:15.460 } 00:23:15.460 ] 00:23:15.460 } 00:23:15.460 } 00:23:15.460 }' 00:23:15.460 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:15.460 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:15.460 pt2 00:23:15.460 pt3 00:23:15.460 pt4' 00:23:15.460 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:15.460 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:15.460 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:15.718 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:15.718 "name": "pt1", 00:23:15.718 "aliases": [ 00:23:15.718 "00000000-0000-0000-0000-000000000001" 00:23:15.718 ], 00:23:15.718 "product_name": "passthru", 00:23:15.718 "block_size": 512, 00:23:15.718 "num_blocks": 65536, 00:23:15.718 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:15.718 "assigned_rate_limits": { 00:23:15.718 "rw_ios_per_sec": 0, 00:23:15.718 "rw_mbytes_per_sec": 0, 00:23:15.718 "r_mbytes_per_sec": 0, 00:23:15.718 "w_mbytes_per_sec": 0 00:23:15.718 }, 00:23:15.718 "claimed": true, 00:23:15.718 "claim_type": "exclusive_write", 00:23:15.718 "zoned": false, 00:23:15.718 "supported_io_types": { 00:23:15.718 "read": true, 00:23:15.718 "write": true, 00:23:15.718 "unmap": true, 00:23:15.718 "flush": true, 00:23:15.718 "reset": true, 00:23:15.718 "nvme_admin": false, 00:23:15.718 "nvme_io": false, 00:23:15.718 "nvme_io_md": false, 00:23:15.718 "write_zeroes": true, 00:23:15.718 "zcopy": true, 00:23:15.718 "get_zone_info": false, 00:23:15.718 "zone_management": false, 00:23:15.718 "zone_append": false, 00:23:15.718 "compare": false, 00:23:15.718 "compare_and_write": false, 00:23:15.718 "abort": true, 00:23:15.718 "seek_hole": false, 00:23:15.718 "seek_data": false, 00:23:15.718 "copy": true, 00:23:15.718 "nvme_iov_md": false 00:23:15.718 }, 00:23:15.718 "memory_domains": [ 00:23:15.718 { 00:23:15.718 "dma_device_id": "system", 00:23:15.718 "dma_device_type": 1 00:23:15.718 }, 00:23:15.718 { 00:23:15.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.718 "dma_device_type": 2 00:23:15.718 } 00:23:15.718 ], 00:23:15.718 "driver_specific": { 00:23:15.718 "passthru": { 00:23:15.718 "name": "pt1", 00:23:15.718 "base_bdev_name": "malloc1" 00:23:15.718 } 00:23:15.718 } 00:23:15.718 }' 00:23:15.718 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:15.718 18:50:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:15.718 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:15.718 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:15.977 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:15.977 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:15.977 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:15.977 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:15.977 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:15.977 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:15.977 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:15.977 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:15.977 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:15.977 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:15.977 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:16.235 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:16.235 "name": "pt2", 00:23:16.235 "aliases": [ 00:23:16.235 "00000000-0000-0000-0000-000000000002" 00:23:16.235 ], 00:23:16.235 "product_name": "passthru", 00:23:16.235 "block_size": 512, 00:23:16.235 "num_blocks": 65536, 00:23:16.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:16.235 "assigned_rate_limits": { 00:23:16.235 "rw_ios_per_sec": 0, 00:23:16.235 "rw_mbytes_per_sec": 0, 00:23:16.235 "r_mbytes_per_sec": 0, 00:23:16.235 "w_mbytes_per_sec": 0 00:23:16.235 }, 00:23:16.235 "claimed": true, 00:23:16.235 "claim_type": "exclusive_write", 00:23:16.235 "zoned": false, 00:23:16.235 "supported_io_types": { 00:23:16.235 "read": true, 00:23:16.235 "write": true, 00:23:16.235 "unmap": true, 00:23:16.235 "flush": true, 00:23:16.235 "reset": true, 00:23:16.235 "nvme_admin": false, 00:23:16.235 "nvme_io": false, 00:23:16.235 "nvme_io_md": false, 00:23:16.235 "write_zeroes": true, 00:23:16.235 "zcopy": true, 00:23:16.235 "get_zone_info": false, 00:23:16.235 "zone_management": false, 00:23:16.235 "zone_append": false, 00:23:16.235 "compare": false, 00:23:16.235 "compare_and_write": false, 00:23:16.235 "abort": true, 00:23:16.235 "seek_hole": false, 00:23:16.235 "seek_data": false, 00:23:16.235 "copy": true, 00:23:16.235 "nvme_iov_md": false 00:23:16.235 }, 00:23:16.235 "memory_domains": [ 00:23:16.235 { 00:23:16.235 "dma_device_id": "system", 00:23:16.235 "dma_device_type": 1 00:23:16.235 }, 00:23:16.235 { 00:23:16.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.235 "dma_device_type": 2 00:23:16.235 } 00:23:16.235 ], 00:23:16.235 "driver_specific": { 00:23:16.235 "passthru": { 00:23:16.235 "name": "pt2", 00:23:16.235 "base_bdev_name": "malloc2" 00:23:16.235 } 00:23:16.235 } 00:23:16.235 }' 00:23:16.235 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.235 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.235 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:23:16.235 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:16.235 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:16.493 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:16.493 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:16.493 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:16.493 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:16.493 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:16.493 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:16.493 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:16.493 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:16.493 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:16.493 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:16.751 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:16.751 "name": "pt3", 00:23:16.751 "aliases": [ 00:23:16.751 "00000000-0000-0000-0000-000000000003" 00:23:16.751 ], 00:23:16.751 "product_name": "passthru", 00:23:16.751 "block_size": 512, 00:23:16.751 "num_blocks": 65536, 00:23:16.751 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:16.751 "assigned_rate_limits": { 00:23:16.751 "rw_ios_per_sec": 0, 00:23:16.751 "rw_mbytes_per_sec": 0, 00:23:16.751 "r_mbytes_per_sec": 0, 00:23:16.751 "w_mbytes_per_sec": 0 00:23:16.751 }, 00:23:16.751 "claimed": true, 00:23:16.751 "claim_type": "exclusive_write", 00:23:16.751 "zoned": false, 00:23:16.751 "supported_io_types": { 00:23:16.751 "read": true, 00:23:16.751 "write": true, 00:23:16.751 "unmap": true, 00:23:16.751 "flush": true, 00:23:16.751 "reset": true, 00:23:16.751 "nvme_admin": false, 00:23:16.751 "nvme_io": false, 00:23:16.751 "nvme_io_md": false, 00:23:16.751 "write_zeroes": true, 00:23:16.751 "zcopy": true, 00:23:16.751 "get_zone_info": false, 00:23:16.751 "zone_management": false, 00:23:16.751 "zone_append": false, 00:23:16.751 "compare": false, 00:23:16.751 "compare_and_write": false, 00:23:16.751 "abort": true, 00:23:16.751 "seek_hole": false, 00:23:16.751 "seek_data": false, 00:23:16.751 "copy": true, 00:23:16.752 "nvme_iov_md": false 00:23:16.752 }, 00:23:16.752 "memory_domains": [ 00:23:16.752 { 00:23:16.752 "dma_device_id": "system", 00:23:16.752 "dma_device_type": 1 00:23:16.752 }, 00:23:16.752 { 00:23:16.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.752 "dma_device_type": 2 00:23:16.752 } 00:23:16.752 ], 00:23:16.752 "driver_specific": { 00:23:16.752 "passthru": { 00:23:16.752 "name": "pt3", 00:23:16.752 "base_bdev_name": "malloc3" 00:23:16.752 } 00:23:16.752 } 00:23:16.752 }' 00:23:16.752 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.752 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.752 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:16.752 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:17.010 18:50:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:17.010 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:17.010 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:17.010 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:17.010 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:17.010 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:17.010 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:17.010 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:17.010 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:17.010 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:23:17.010 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:17.268 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:17.268 "name": "pt4", 00:23:17.268 "aliases": [ 00:23:17.268 "00000000-0000-0000-0000-000000000004" 00:23:17.268 ], 00:23:17.268 "product_name": "passthru", 00:23:17.268 "block_size": 512, 00:23:17.268 "num_blocks": 65536, 00:23:17.268 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:17.268 "assigned_rate_limits": { 00:23:17.268 "rw_ios_per_sec": 0, 00:23:17.268 "rw_mbytes_per_sec": 0, 00:23:17.268 "r_mbytes_per_sec": 0, 00:23:17.268 "w_mbytes_per_sec": 0 00:23:17.268 }, 00:23:17.268 "claimed": true, 00:23:17.268 "claim_type": "exclusive_write", 00:23:17.268 "zoned": false, 00:23:17.268 "supported_io_types": { 00:23:17.268 "read": true, 00:23:17.268 "write": true, 00:23:17.268 "unmap": true, 00:23:17.268 "flush": true, 00:23:17.268 "reset": true, 00:23:17.268 "nvme_admin": false, 00:23:17.268 "nvme_io": false, 00:23:17.268 "nvme_io_md": false, 00:23:17.268 "write_zeroes": true, 00:23:17.268 "zcopy": true, 00:23:17.268 "get_zone_info": false, 00:23:17.268 "zone_management": false, 00:23:17.268 "zone_append": false, 00:23:17.268 "compare": false, 00:23:17.268 "compare_and_write": false, 00:23:17.268 "abort": true, 00:23:17.268 "seek_hole": false, 00:23:17.268 "seek_data": false, 00:23:17.268 "copy": true, 00:23:17.268 "nvme_iov_md": false 00:23:17.268 }, 00:23:17.268 "memory_domains": [ 00:23:17.268 { 00:23:17.268 "dma_device_id": "system", 00:23:17.268 "dma_device_type": 1 00:23:17.268 }, 00:23:17.268 { 00:23:17.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.268 "dma_device_type": 2 00:23:17.268 } 00:23:17.268 ], 00:23:17.268 "driver_specific": { 00:23:17.268 "passthru": { 00:23:17.268 "name": "pt4", 00:23:17.268 "base_bdev_name": "malloc4" 00:23:17.268 } 00:23:17.268 } 00:23:17.268 }' 00:23:17.268 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:17.525 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:17.525 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:17.525 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:17.525 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:17.525 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# [[ null == null ]] 00:23:17.525 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:17.525 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:17.782 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:17.782 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:17.782 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:17.782 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:17.782 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:17.782 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:23:18.039 [2024-07-25 18:50:18.465589] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:18.039 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=7f03ac0f-1728-4009-93bb-d9a817e0033d 00:23:18.039 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 7f03ac0f-1728-4009-93bb-d9a817e0033d ']' 00:23:18.039 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:18.295 [2024-07-25 18:50:18.745408] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:18.295 [2024-07-25 18:50:18.745576] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:18.295 [2024-07-25 18:50:18.745816] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:18.295 [2024-07-25 18:50:18.745998] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:18.295 [2024-07-25 18:50:18.746076] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:23:18.295 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.295 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:23:18.552 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:23:18.552 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:23:18.552 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:23:18.552 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:18.810 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:23:18.810 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:19.066 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:23:19.066 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:19.324 18:50:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:23:19.324 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:19.324 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:19.324 18:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:19.581 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:23:19.581 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:19.581 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:23:19.581 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:19.581 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:19.581 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.582 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:19.582 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.582 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:19.582 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.582 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:19.582 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:19.582 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:19.839 [2024-07-25 18:50:20.345628] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:19.839 [2024-07-25 18:50:20.348029] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:19.839 [2024-07-25 18:50:20.348212] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:19.839 [2024-07-25 18:50:20.348273] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:23:19.839 [2024-07-25 18:50:20.348414] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:19.839 [2024-07-25 18:50:20.348557] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:19.839 [2024-07-25 18:50:20.348722] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found 
on bdev malloc3 00:23:19.839 [2024-07-25 18:50:20.348877] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:23:19.839 [2024-07-25 18:50:20.348983] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:19.839 [2024-07-25 18:50:20.349019] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state configuring 00:23:19.839 request: 00:23:19.839 { 00:23:19.839 "name": "raid_bdev1", 00:23:19.839 "raid_level": "raid0", 00:23:19.839 "base_bdevs": [ 00:23:19.839 "malloc1", 00:23:19.839 "malloc2", 00:23:19.839 "malloc3", 00:23:19.839 "malloc4" 00:23:19.839 ], 00:23:19.839 "strip_size_kb": 64, 00:23:19.839 "superblock": false, 00:23:19.839 "method": "bdev_raid_create", 00:23:19.839 "req_id": 1 00:23:19.839 } 00:23:19.839 Got JSON-RPC error response 00:23:19.839 response: 00:23:19.839 { 00:23:19.839 "code": -17, 00:23:19.839 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:19.839 } 00:23:19.839 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:23:19.839 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:19.839 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:19.839 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:19.839 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.839 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:23:20.096 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:23:20.096 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:23:20.096 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:20.354 [2024-07-25 18:50:20.793680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:20.354 [2024-07-25 18:50:20.793928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:20.354 [2024-07-25 18:50:20.793992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:23:20.354 [2024-07-25 18:50:20.794124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:20.354 [2024-07-25 18:50:20.796717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:20.354 [2024-07-25 18:50:20.796889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:20.354 [2024-07-25 18:50:20.797082] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:20.355 [2024-07-25 18:50:20.797231] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:20.355 pt1 00:23:20.355 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:23:20.355 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:20.355 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:20.355 18:50:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:20.355 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:20.355 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:20.355 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:20.355 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:20.355 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:20.355 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:20.355 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.355 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.612 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:20.612 "name": "raid_bdev1", 00:23:20.612 "uuid": "7f03ac0f-1728-4009-93bb-d9a817e0033d", 00:23:20.612 "strip_size_kb": 64, 00:23:20.612 "state": "configuring", 00:23:20.612 "raid_level": "raid0", 00:23:20.612 "superblock": true, 00:23:20.612 "num_base_bdevs": 4, 00:23:20.612 "num_base_bdevs_discovered": 1, 00:23:20.612 "num_base_bdevs_operational": 4, 00:23:20.612 "base_bdevs_list": [ 00:23:20.612 { 00:23:20.612 "name": "pt1", 00:23:20.612 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:20.612 "is_configured": true, 00:23:20.612 "data_offset": 2048, 00:23:20.612 "data_size": 63488 00:23:20.612 }, 00:23:20.612 { 00:23:20.612 "name": null, 00:23:20.612 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:20.612 "is_configured": false, 00:23:20.612 "data_offset": 2048, 00:23:20.612 "data_size": 63488 00:23:20.612 }, 00:23:20.612 { 00:23:20.612 "name": null, 00:23:20.612 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:20.612 "is_configured": false, 00:23:20.612 "data_offset": 2048, 00:23:20.612 "data_size": 63488 00:23:20.612 }, 00:23:20.612 { 00:23:20.612 "name": null, 00:23:20.612 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:20.612 "is_configured": false, 00:23:20.612 "data_offset": 2048, 00:23:20.612 "data_size": 63488 00:23:20.612 } 00:23:20.612 ] 00:23:20.612 }' 00:23:20.612 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:20.612 18:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.177 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:23:21.177 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:21.434 [2024-07-25 18:50:21.789907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:21.434 [2024-07-25 18:50:21.790188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:21.434 [2024-07-25 18:50:21.790318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:21.434 [2024-07-25 18:50:21.790448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:21.434 [2024-07-25 18:50:21.791070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:23:21.434 [2024-07-25 18:50:21.791213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:21.434 [2024-07-25 18:50:21.791439] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:21.434 [2024-07-25 18:50:21.791547] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:21.434 pt2 00:23:21.434 18:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:21.691 [2024-07-25 18:50:22.046460] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:21.691 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:23:21.691 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:21.691 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:21.691 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:21.691 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:21.691 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:21.692 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:21.692 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:21.692 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:21.692 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:21.692 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.692 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.949 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:21.949 "name": "raid_bdev1", 00:23:21.949 "uuid": "7f03ac0f-1728-4009-93bb-d9a817e0033d", 00:23:21.949 "strip_size_kb": 64, 00:23:21.949 "state": "configuring", 00:23:21.949 "raid_level": "raid0", 00:23:21.949 "superblock": true, 00:23:21.949 "num_base_bdevs": 4, 00:23:21.949 "num_base_bdevs_discovered": 1, 00:23:21.949 "num_base_bdevs_operational": 4, 00:23:21.949 "base_bdevs_list": [ 00:23:21.949 { 00:23:21.949 "name": "pt1", 00:23:21.949 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:21.949 "is_configured": true, 00:23:21.949 "data_offset": 2048, 00:23:21.949 "data_size": 63488 00:23:21.949 }, 00:23:21.949 { 00:23:21.949 "name": null, 00:23:21.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:21.949 "is_configured": false, 00:23:21.949 "data_offset": 2048, 00:23:21.949 "data_size": 63488 00:23:21.949 }, 00:23:21.949 { 00:23:21.949 "name": null, 00:23:21.949 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:21.949 "is_configured": false, 00:23:21.949 "data_offset": 2048, 00:23:21.949 "data_size": 63488 00:23:21.949 }, 00:23:21.949 { 00:23:21.949 "name": null, 00:23:21.949 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:21.949 "is_configured": false, 00:23:21.949 "data_offset": 2048, 00:23:21.949 "data_size": 63488 00:23:21.949 } 00:23:21.949 ] 00:23:21.949 }' 00:23:21.949 18:50:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:21.949 18:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.514 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:23:22.514 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:23:22.514 18:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:22.772 [2024-07-25 18:50:23.090587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:22.772 [2024-07-25 18:50:23.090820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:22.772 [2024-07-25 18:50:23.090893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:23:22.772 [2024-07-25 18:50:23.091024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:22.772 [2024-07-25 18:50:23.091570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:22.772 [2024-07-25 18:50:23.091709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:22.772 [2024-07-25 18:50:23.091892] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:22.772 [2024-07-25 18:50:23.091989] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:22.772 pt2 00:23:22.772 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:23:22.772 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:23:22.772 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:22.772 [2024-07-25 18:50:23.342695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:22.772 [2024-07-25 18:50:23.342927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:22.772 [2024-07-25 18:50:23.342992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:22.772 [2024-07-25 18:50:23.343120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:22.772 [2024-07-25 18:50:23.343707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:22.772 [2024-07-25 18:50:23.343852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:22.772 [2024-07-25 18:50:23.344044] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:22.772 [2024-07-25 18:50:23.344183] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:22.772 pt3 00:23:23.029 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:23:23.029 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:23:23.029 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:23.029 [2024-07-25 18:50:23.542659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc4 00:23:23.029 [2024-07-25 18:50:23.542834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:23.029 [2024-07-25 18:50:23.542892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:23.029 [2024-07-25 18:50:23.543009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:23.029 [2024-07-25 18:50:23.543493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:23.029 [2024-07-25 18:50:23.543639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:23.029 [2024-07-25 18:50:23.543804] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:23.029 [2024-07-25 18:50:23.543861] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:23.029 [2024-07-25 18:50:23.544077] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:23:23.029 [2024-07-25 18:50:23.544163] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:23.029 [2024-07-25 18:50:23.544277] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:23.029 [2024-07-25 18:50:23.544672] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:23:23.029 [2024-07-25 18:50:23.544769] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:23:23.029 [2024-07-25 18:50:23.544963] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:23.029 pt4 00:23:23.029 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:23:23.029 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:23:23.029 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:23.029 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:23.029 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:23.029 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:23.029 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:23.029 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:23.029 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:23.029 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:23.029 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:23.029 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:23.029 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.029 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.285 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:23.285 "name": "raid_bdev1", 00:23:23.285 "uuid": "7f03ac0f-1728-4009-93bb-d9a817e0033d", 00:23:23.285 "strip_size_kb": 64, 00:23:23.285 "state": "online", 00:23:23.285 
"raid_level": "raid0", 00:23:23.285 "superblock": true, 00:23:23.285 "num_base_bdevs": 4, 00:23:23.285 "num_base_bdevs_discovered": 4, 00:23:23.285 "num_base_bdevs_operational": 4, 00:23:23.285 "base_bdevs_list": [ 00:23:23.285 { 00:23:23.285 "name": "pt1", 00:23:23.285 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:23.285 "is_configured": true, 00:23:23.285 "data_offset": 2048, 00:23:23.285 "data_size": 63488 00:23:23.285 }, 00:23:23.285 { 00:23:23.285 "name": "pt2", 00:23:23.285 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:23.285 "is_configured": true, 00:23:23.285 "data_offset": 2048, 00:23:23.285 "data_size": 63488 00:23:23.285 }, 00:23:23.285 { 00:23:23.285 "name": "pt3", 00:23:23.285 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:23.285 "is_configured": true, 00:23:23.285 "data_offset": 2048, 00:23:23.285 "data_size": 63488 00:23:23.285 }, 00:23:23.285 { 00:23:23.285 "name": "pt4", 00:23:23.285 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:23.285 "is_configured": true, 00:23:23.285 "data_offset": 2048, 00:23:23.285 "data_size": 63488 00:23:23.285 } 00:23:23.285 ] 00:23:23.285 }' 00:23:23.285 18:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:23.285 18:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.848 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:23:23.848 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:23.848 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:23.848 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:23.848 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:23.848 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:23.848 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:23.848 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:24.105 [2024-07-25 18:50:24.637192] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:24.105 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:24.105 "name": "raid_bdev1", 00:23:24.105 "aliases": [ 00:23:24.105 "7f03ac0f-1728-4009-93bb-d9a817e0033d" 00:23:24.105 ], 00:23:24.105 "product_name": "Raid Volume", 00:23:24.105 "block_size": 512, 00:23:24.105 "num_blocks": 253952, 00:23:24.105 "uuid": "7f03ac0f-1728-4009-93bb-d9a817e0033d", 00:23:24.106 "assigned_rate_limits": { 00:23:24.106 "rw_ios_per_sec": 0, 00:23:24.106 "rw_mbytes_per_sec": 0, 00:23:24.106 "r_mbytes_per_sec": 0, 00:23:24.106 "w_mbytes_per_sec": 0 00:23:24.106 }, 00:23:24.106 "claimed": false, 00:23:24.106 "zoned": false, 00:23:24.106 "supported_io_types": { 00:23:24.106 "read": true, 00:23:24.106 "write": true, 00:23:24.106 "unmap": true, 00:23:24.106 "flush": true, 00:23:24.106 "reset": true, 00:23:24.106 "nvme_admin": false, 00:23:24.106 "nvme_io": false, 00:23:24.106 "nvme_io_md": false, 00:23:24.106 "write_zeroes": true, 00:23:24.106 "zcopy": false, 00:23:24.106 "get_zone_info": false, 00:23:24.106 "zone_management": false, 00:23:24.106 "zone_append": false, 00:23:24.106 "compare": false, 00:23:24.106 "compare_and_write": false, 
00:23:24.106 "abort": false, 00:23:24.106 "seek_hole": false, 00:23:24.106 "seek_data": false, 00:23:24.106 "copy": false, 00:23:24.106 "nvme_iov_md": false 00:23:24.106 }, 00:23:24.106 "memory_domains": [ 00:23:24.106 { 00:23:24.106 "dma_device_id": "system", 00:23:24.106 "dma_device_type": 1 00:23:24.106 }, 00:23:24.106 { 00:23:24.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.106 "dma_device_type": 2 00:23:24.106 }, 00:23:24.106 { 00:23:24.106 "dma_device_id": "system", 00:23:24.106 "dma_device_type": 1 00:23:24.106 }, 00:23:24.106 { 00:23:24.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.106 "dma_device_type": 2 00:23:24.106 }, 00:23:24.106 { 00:23:24.106 "dma_device_id": "system", 00:23:24.106 "dma_device_type": 1 00:23:24.106 }, 00:23:24.106 { 00:23:24.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.106 "dma_device_type": 2 00:23:24.106 }, 00:23:24.106 { 00:23:24.106 "dma_device_id": "system", 00:23:24.106 "dma_device_type": 1 00:23:24.106 }, 00:23:24.106 { 00:23:24.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.106 "dma_device_type": 2 00:23:24.106 } 00:23:24.106 ], 00:23:24.106 "driver_specific": { 00:23:24.106 "raid": { 00:23:24.106 "uuid": "7f03ac0f-1728-4009-93bb-d9a817e0033d", 00:23:24.106 "strip_size_kb": 64, 00:23:24.106 "state": "online", 00:23:24.106 "raid_level": "raid0", 00:23:24.106 "superblock": true, 00:23:24.106 "num_base_bdevs": 4, 00:23:24.106 "num_base_bdevs_discovered": 4, 00:23:24.106 "num_base_bdevs_operational": 4, 00:23:24.106 "base_bdevs_list": [ 00:23:24.106 { 00:23:24.106 "name": "pt1", 00:23:24.106 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:24.106 "is_configured": true, 00:23:24.106 "data_offset": 2048, 00:23:24.106 "data_size": 63488 00:23:24.106 }, 00:23:24.106 { 00:23:24.106 "name": "pt2", 00:23:24.106 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:24.106 "is_configured": true, 00:23:24.106 "data_offset": 2048, 00:23:24.106 "data_size": 63488 00:23:24.106 }, 00:23:24.106 { 00:23:24.106 "name": "pt3", 00:23:24.106 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:24.106 "is_configured": true, 00:23:24.106 "data_offset": 2048, 00:23:24.106 "data_size": 63488 00:23:24.106 }, 00:23:24.106 { 00:23:24.106 "name": "pt4", 00:23:24.106 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:24.106 "is_configured": true, 00:23:24.106 "data_offset": 2048, 00:23:24.106 "data_size": 63488 00:23:24.106 } 00:23:24.106 ] 00:23:24.106 } 00:23:24.106 } 00:23:24.106 }' 00:23:24.106 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:24.363 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:24.363 pt2 00:23:24.363 pt3 00:23:24.363 pt4' 00:23:24.363 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:24.363 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:24.363 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:24.621 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:24.621 "name": "pt1", 00:23:24.621 "aliases": [ 00:23:24.621 "00000000-0000-0000-0000-000000000001" 00:23:24.621 ], 00:23:24.621 "product_name": "passthru", 00:23:24.621 "block_size": 512, 00:23:24.621 "num_blocks": 65536, 00:23:24.621 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:23:24.621 "assigned_rate_limits": { 00:23:24.621 "rw_ios_per_sec": 0, 00:23:24.621 "rw_mbytes_per_sec": 0, 00:23:24.621 "r_mbytes_per_sec": 0, 00:23:24.621 "w_mbytes_per_sec": 0 00:23:24.621 }, 00:23:24.621 "claimed": true, 00:23:24.621 "claim_type": "exclusive_write", 00:23:24.621 "zoned": false, 00:23:24.621 "supported_io_types": { 00:23:24.621 "read": true, 00:23:24.621 "write": true, 00:23:24.621 "unmap": true, 00:23:24.621 "flush": true, 00:23:24.621 "reset": true, 00:23:24.621 "nvme_admin": false, 00:23:24.621 "nvme_io": false, 00:23:24.621 "nvme_io_md": false, 00:23:24.621 "write_zeroes": true, 00:23:24.621 "zcopy": true, 00:23:24.621 "get_zone_info": false, 00:23:24.621 "zone_management": false, 00:23:24.621 "zone_append": false, 00:23:24.621 "compare": false, 00:23:24.621 "compare_and_write": false, 00:23:24.621 "abort": true, 00:23:24.621 "seek_hole": false, 00:23:24.621 "seek_data": false, 00:23:24.621 "copy": true, 00:23:24.621 "nvme_iov_md": false 00:23:24.621 }, 00:23:24.621 "memory_domains": [ 00:23:24.621 { 00:23:24.621 "dma_device_id": "system", 00:23:24.621 "dma_device_type": 1 00:23:24.621 }, 00:23:24.621 { 00:23:24.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.621 "dma_device_type": 2 00:23:24.621 } 00:23:24.621 ], 00:23:24.621 "driver_specific": { 00:23:24.621 "passthru": { 00:23:24.621 "name": "pt1", 00:23:24.621 "base_bdev_name": "malloc1" 00:23:24.621 } 00:23:24.621 } 00:23:24.621 }' 00:23:24.621 18:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:24.621 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:24.621 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:24.621 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:24.621 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:24.621 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:24.621 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:24.878 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:24.878 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:24.878 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:24.878 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:24.878 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:24.879 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:24.879 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:24.879 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:25.136 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:25.136 "name": "pt2", 00:23:25.136 "aliases": [ 00:23:25.136 "00000000-0000-0000-0000-000000000002" 00:23:25.136 ], 00:23:25.136 "product_name": "passthru", 00:23:25.136 "block_size": 512, 00:23:25.136 "num_blocks": 65536, 00:23:25.136 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:25.136 "assigned_rate_limits": { 00:23:25.136 "rw_ios_per_sec": 0, 00:23:25.136 "rw_mbytes_per_sec": 0, 
00:23:25.136 "r_mbytes_per_sec": 0, 00:23:25.136 "w_mbytes_per_sec": 0 00:23:25.136 }, 00:23:25.136 "claimed": true, 00:23:25.136 "claim_type": "exclusive_write", 00:23:25.136 "zoned": false, 00:23:25.136 "supported_io_types": { 00:23:25.136 "read": true, 00:23:25.136 "write": true, 00:23:25.136 "unmap": true, 00:23:25.136 "flush": true, 00:23:25.136 "reset": true, 00:23:25.136 "nvme_admin": false, 00:23:25.136 "nvme_io": false, 00:23:25.136 "nvme_io_md": false, 00:23:25.136 "write_zeroes": true, 00:23:25.136 "zcopy": true, 00:23:25.136 "get_zone_info": false, 00:23:25.136 "zone_management": false, 00:23:25.136 "zone_append": false, 00:23:25.136 "compare": false, 00:23:25.136 "compare_and_write": false, 00:23:25.136 "abort": true, 00:23:25.136 "seek_hole": false, 00:23:25.136 "seek_data": false, 00:23:25.136 "copy": true, 00:23:25.136 "nvme_iov_md": false 00:23:25.136 }, 00:23:25.136 "memory_domains": [ 00:23:25.136 { 00:23:25.136 "dma_device_id": "system", 00:23:25.136 "dma_device_type": 1 00:23:25.136 }, 00:23:25.136 { 00:23:25.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:25.136 "dma_device_type": 2 00:23:25.136 } 00:23:25.136 ], 00:23:25.136 "driver_specific": { 00:23:25.136 "passthru": { 00:23:25.136 "name": "pt2", 00:23:25.136 "base_bdev_name": "malloc2" 00:23:25.136 } 00:23:25.136 } 00:23:25.136 }' 00:23:25.136 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:25.136 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:25.136 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:25.136 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:25.394 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:25.394 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:25.394 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:25.394 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:25.394 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:25.394 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:25.394 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:25.394 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:25.394 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:25.394 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:25.394 18:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:25.652 18:50:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:25.652 "name": "pt3", 00:23:25.652 "aliases": [ 00:23:25.652 "00000000-0000-0000-0000-000000000003" 00:23:25.652 ], 00:23:25.652 "product_name": "passthru", 00:23:25.652 "block_size": 512, 00:23:25.652 "num_blocks": 65536, 00:23:25.652 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:25.652 "assigned_rate_limits": { 00:23:25.652 "rw_ios_per_sec": 0, 00:23:25.652 "rw_mbytes_per_sec": 0, 00:23:25.652 "r_mbytes_per_sec": 0, 00:23:25.652 "w_mbytes_per_sec": 0 00:23:25.652 }, 00:23:25.652 "claimed": true, 00:23:25.652 "claim_type": 
"exclusive_write", 00:23:25.652 "zoned": false, 00:23:25.652 "supported_io_types": { 00:23:25.652 "read": true, 00:23:25.652 "write": true, 00:23:25.652 "unmap": true, 00:23:25.652 "flush": true, 00:23:25.652 "reset": true, 00:23:25.652 "nvme_admin": false, 00:23:25.652 "nvme_io": false, 00:23:25.652 "nvme_io_md": false, 00:23:25.652 "write_zeroes": true, 00:23:25.652 "zcopy": true, 00:23:25.652 "get_zone_info": false, 00:23:25.652 "zone_management": false, 00:23:25.652 "zone_append": false, 00:23:25.652 "compare": false, 00:23:25.652 "compare_and_write": false, 00:23:25.652 "abort": true, 00:23:25.652 "seek_hole": false, 00:23:25.652 "seek_data": false, 00:23:25.652 "copy": true, 00:23:25.652 "nvme_iov_md": false 00:23:25.652 }, 00:23:25.652 "memory_domains": [ 00:23:25.652 { 00:23:25.652 "dma_device_id": "system", 00:23:25.652 "dma_device_type": 1 00:23:25.652 }, 00:23:25.652 { 00:23:25.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:25.652 "dma_device_type": 2 00:23:25.652 } 00:23:25.652 ], 00:23:25.652 "driver_specific": { 00:23:25.652 "passthru": { 00:23:25.652 "name": "pt3", 00:23:25.652 "base_bdev_name": "malloc3" 00:23:25.652 } 00:23:25.652 } 00:23:25.652 }' 00:23:25.652 18:50:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:25.910 18:50:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:25.910 18:50:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:25.910 18:50:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:25.910 18:50:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:25.910 18:50:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:25.910 18:50:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:25.910 18:50:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:26.168 18:50:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:26.168 18:50:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:26.168 18:50:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:26.168 18:50:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:26.168 18:50:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:26.168 18:50:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:23:26.168 18:50:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:26.426 18:50:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:26.426 "name": "pt4", 00:23:26.426 "aliases": [ 00:23:26.426 "00000000-0000-0000-0000-000000000004" 00:23:26.426 ], 00:23:26.426 "product_name": "passthru", 00:23:26.426 "block_size": 512, 00:23:26.426 "num_blocks": 65536, 00:23:26.426 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:26.426 "assigned_rate_limits": { 00:23:26.426 "rw_ios_per_sec": 0, 00:23:26.426 "rw_mbytes_per_sec": 0, 00:23:26.426 "r_mbytes_per_sec": 0, 00:23:26.426 "w_mbytes_per_sec": 0 00:23:26.426 }, 00:23:26.426 "claimed": true, 00:23:26.426 "claim_type": "exclusive_write", 00:23:26.426 "zoned": false, 00:23:26.426 "supported_io_types": { 00:23:26.426 "read": true, 00:23:26.426 "write": true, 00:23:26.426 
"unmap": true, 00:23:26.426 "flush": true, 00:23:26.426 "reset": true, 00:23:26.426 "nvme_admin": false, 00:23:26.426 "nvme_io": false, 00:23:26.426 "nvme_io_md": false, 00:23:26.426 "write_zeroes": true, 00:23:26.426 "zcopy": true, 00:23:26.426 "get_zone_info": false, 00:23:26.426 "zone_management": false, 00:23:26.426 "zone_append": false, 00:23:26.426 "compare": false, 00:23:26.426 "compare_and_write": false, 00:23:26.426 "abort": true, 00:23:26.426 "seek_hole": false, 00:23:26.426 "seek_data": false, 00:23:26.426 "copy": true, 00:23:26.426 "nvme_iov_md": false 00:23:26.426 }, 00:23:26.426 "memory_domains": [ 00:23:26.426 { 00:23:26.426 "dma_device_id": "system", 00:23:26.426 "dma_device_type": 1 00:23:26.426 }, 00:23:26.426 { 00:23:26.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:26.426 "dma_device_type": 2 00:23:26.426 } 00:23:26.426 ], 00:23:26.426 "driver_specific": { 00:23:26.426 "passthru": { 00:23:26.426 "name": "pt4", 00:23:26.426 "base_bdev_name": "malloc4" 00:23:26.426 } 00:23:26.426 } 00:23:26.426 }' 00:23:26.426 18:50:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:26.426 18:50:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:26.426 18:50:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:26.426 18:50:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:26.685 18:50:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:26.685 18:50:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:26.685 18:50:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:26.685 18:50:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:26.685 18:50:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:26.685 18:50:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:26.685 18:50:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:26.685 18:50:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:26.685 18:50:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:26.685 18:50:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:23:26.943 [2024-07-25 18:50:27.454855] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:26.943 18:50:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 7f03ac0f-1728-4009-93bb-d9a817e0033d '!=' 7f03ac0f-1728-4009-93bb-d9a817e0033d ']' 00:23:26.943 18:50:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:23:26.943 18:50:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:26.943 18:50:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:26.943 18:50:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 135855 00:23:26.943 18:50:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 135855 ']' 00:23:26.943 18:50:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 135855 00:23:26.943 18:50:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:23:26.943 18:50:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:26.943 18:50:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 135855 00:23:26.943 killing process with pid 135855 00:23:26.943 18:50:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:26.943 18:50:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:26.943 18:50:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 135855' 00:23:26.943 18:50:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 135855 00:23:26.943 [2024-07-25 18:50:27.503225] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:26.943 18:50:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 135855 00:23:26.943 [2024-07-25 18:50:27.503311] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:26.943 [2024-07-25 18:50:27.503389] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:26.943 [2024-07-25 18:50:27.503399] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:23:27.509 [2024-07-25 18:50:27.837752] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:28.879 ************************************ 00:23:28.879 END TEST raid_superblock_test 00:23:28.879 ************************************ 00:23:28.879 18:50:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:23:28.879 00:23:28.879 real 0m17.131s 00:23:28.879 user 0m29.752s 00:23:28.879 sys 0m2.856s 00:23:28.879 18:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:28.879 18:50:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.879 18:50:29 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:23:28.879 18:50:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:28.879 18:50:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:28.879 18:50:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:28.879 ************************************ 00:23:28.879 START TEST raid_read_error_test 00:23:28.879 ************************************ 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev1 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev2 
00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev3 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev4 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.qtGZxZ21FK 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=136397 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 136397 /var/tmp/spdk-raid.sock 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 136397 ']' 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:28.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:28.879 18:50:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.879 [2024-07-25 18:50:29.205903] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:28.879 [2024-07-25 18:50:29.206400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136397 ] 00:23:28.879 [2024-07-25 18:50:29.394327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.136 [2024-07-25 18:50:29.704778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.701 [2024-07-25 18:50:29.977450] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:29.701 18:50:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:29.701 18:50:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:23:29.701 18:50:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:23:29.701 18:50:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:29.958 BaseBdev1_malloc 00:23:29.958 18:50:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:30.215 true 00:23:30.215 18:50:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:30.473 [2024-07-25 18:50:30.879353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:30.473 [2024-07-25 18:50:30.879629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:30.473 [2024-07-25 18:50:30.879706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:23:30.473 [2024-07-25 18:50:30.879914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:30.473 [2024-07-25 18:50:30.882618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:30.473 [2024-07-25 18:50:30.882778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:30.473 BaseBdev1 00:23:30.473 18:50:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:23:30.473 18:50:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:30.730 BaseBdev2_malloc 00:23:30.731 18:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:30.988 true 00:23:30.988 18:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:30.988 [2024-07-25 18:50:31.520060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:30.988 [2024-07-25 18:50:31.520359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:30.988 [2024-07-25 18:50:31.520441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:30.988 [2024-07-25 18:50:31.520655] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:30.988 [2024-07-25 18:50:31.523398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:30.988 [2024-07-25 18:50:31.523576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:30.988 BaseBdev2 00:23:30.988 18:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:23:30.988 18:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:31.264 BaseBdev3_malloc 00:23:31.264 18:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:31.549 true 00:23:31.549 18:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:31.816 [2024-07-25 18:50:32.237300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:31.816 [2024-07-25 18:50:32.237625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:31.816 [2024-07-25 18:50:32.237703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:23:31.816 [2024-07-25 18:50:32.237825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:31.816 [2024-07-25 18:50:32.240605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:31.816 [2024-07-25 18:50:32.240787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:31.816 BaseBdev3 00:23:31.816 18:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:23:31.816 18:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:32.072 BaseBdev4_malloc 00:23:32.072 18:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:23:32.329 true 00:23:32.329 18:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:32.329 [2024-07-25 18:50:32.849345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:32.329 [2024-07-25 18:50:32.849672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.329 [2024-07-25 18:50:32.849784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:23:32.329 [2024-07-25 18:50:32.850039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.329 [2024-07-25 18:50:32.852737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.329 [2024-07-25 18:50:32.852927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:32.329 BaseBdev4 00:23:32.329 18:50:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:23:32.586 [2024-07-25 18:50:33.101461] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:32.586 [2024-07-25 18:50:33.104004] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:32.586 [2024-07-25 18:50:33.104250] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:32.586 [2024-07-25 18:50:33.104343] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:32.586 [2024-07-25 18:50:33.104682] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013800 00:23:32.586 [2024-07-25 18:50:33.104749] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:32.586 [2024-07-25 18:50:33.105025] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:32.586 [2024-07-25 18:50:33.105535] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013800 00:23:32.586 [2024-07-25 18:50:33.105647] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013800 00:23:32.586 [2024-07-25 18:50:33.105982] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:32.586 18:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:32.586 18:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:32.586 18:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:32.586 18:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:32.586 18:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:32.586 18:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:32.586 18:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:32.586 18:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:32.586 18:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:32.586 18:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:32.586 18:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.586 18:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.844 18:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:32.844 "name": "raid_bdev1", 00:23:32.844 "uuid": "3f6aa6d8-77ec-40b9-9fd7-940d720489c1", 00:23:32.844 "strip_size_kb": 64, 00:23:32.844 "state": "online", 00:23:32.844 "raid_level": "raid0", 00:23:32.844 "superblock": true, 00:23:32.844 "num_base_bdevs": 4, 00:23:32.844 "num_base_bdevs_discovered": 4, 00:23:32.844 "num_base_bdevs_operational": 4, 00:23:32.844 "base_bdevs_list": [ 00:23:32.844 { 00:23:32.844 "name": "BaseBdev1", 00:23:32.844 "uuid": "50e65491-7ef8-523a-89e4-8041fe7c85ab", 00:23:32.844 "is_configured": true, 00:23:32.844 "data_offset": 2048, 00:23:32.844 "data_size": 63488 00:23:32.844 }, 00:23:32.844 { 00:23:32.844 "name": "BaseBdev2", 
00:23:32.844 "uuid": "fbd0e94c-d669-5b9d-a4b3-51e38d0ace76", 00:23:32.844 "is_configured": true, 00:23:32.844 "data_offset": 2048, 00:23:32.844 "data_size": 63488 00:23:32.844 }, 00:23:32.844 { 00:23:32.844 "name": "BaseBdev3", 00:23:32.844 "uuid": "f9217314-896f-5581-8433-783e23faef58", 00:23:32.844 "is_configured": true, 00:23:32.844 "data_offset": 2048, 00:23:32.844 "data_size": 63488 00:23:32.844 }, 00:23:32.844 { 00:23:32.844 "name": "BaseBdev4", 00:23:32.844 "uuid": "3d64ebb6-a9e8-5cc8-9e32-410903123e7b", 00:23:32.844 "is_configured": true, 00:23:32.844 "data_offset": 2048, 00:23:32.844 "data_size": 63488 00:23:32.844 } 00:23:32.844 ] 00:23:32.844 }' 00:23:32.844 18:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:32.844 18:50:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.406 18:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:23:33.406 18:50:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:33.406 [2024-07-25 18:50:33.895702] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:34.338 18:50:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:23:34.595 18:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:23:34.595 18:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:23:34.595 18:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:23:34.595 18:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:34.595 18:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:34.595 18:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:34.595 18:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:34.595 18:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:34.595 18:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:34.595 18:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:34.595 18:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:34.595 18:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:34.595 18:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:34.595 18:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.595 18:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.853 18:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:34.853 "name": "raid_bdev1", 00:23:34.853 "uuid": "3f6aa6d8-77ec-40b9-9fd7-940d720489c1", 00:23:34.853 "strip_size_kb": 64, 00:23:34.853 "state": "online", 00:23:34.853 "raid_level": "raid0", 00:23:34.853 "superblock": true, 
00:23:34.853 "num_base_bdevs": 4, 00:23:34.853 "num_base_bdevs_discovered": 4, 00:23:34.853 "num_base_bdevs_operational": 4, 00:23:34.853 "base_bdevs_list": [ 00:23:34.853 { 00:23:34.853 "name": "BaseBdev1", 00:23:34.853 "uuid": "50e65491-7ef8-523a-89e4-8041fe7c85ab", 00:23:34.853 "is_configured": true, 00:23:34.853 "data_offset": 2048, 00:23:34.853 "data_size": 63488 00:23:34.853 }, 00:23:34.853 { 00:23:34.853 "name": "BaseBdev2", 00:23:34.853 "uuid": "fbd0e94c-d669-5b9d-a4b3-51e38d0ace76", 00:23:34.853 "is_configured": true, 00:23:34.853 "data_offset": 2048, 00:23:34.853 "data_size": 63488 00:23:34.853 }, 00:23:34.853 { 00:23:34.853 "name": "BaseBdev3", 00:23:34.853 "uuid": "f9217314-896f-5581-8433-783e23faef58", 00:23:34.853 "is_configured": true, 00:23:34.853 "data_offset": 2048, 00:23:34.853 "data_size": 63488 00:23:34.853 }, 00:23:34.853 { 00:23:34.853 "name": "BaseBdev4", 00:23:34.853 "uuid": "3d64ebb6-a9e8-5cc8-9e32-410903123e7b", 00:23:34.853 "is_configured": true, 00:23:34.853 "data_offset": 2048, 00:23:34.853 "data_size": 63488 00:23:34.853 } 00:23:34.853 ] 00:23:34.853 }' 00:23:34.853 18:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:34.853 18:50:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.424 18:50:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:35.681 [2024-07-25 18:50:36.016094] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:35.681 [2024-07-25 18:50:36.016418] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:35.681 [2024-07-25 18:50:36.019330] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:35.681 [2024-07-25 18:50:36.019522] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:35.681 [2024-07-25 18:50:36.019604] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:35.681 [2024-07-25 18:50:36.019679] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013800 name raid_bdev1, state offline 00:23:35.681 0 00:23:35.681 18:50:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 136397 00:23:35.681 18:50:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 136397 ']' 00:23:35.681 18:50:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 136397 00:23:35.681 18:50:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:23:35.681 18:50:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:35.681 18:50:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 136397 00:23:35.681 18:50:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:35.681 18:50:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:35.681 18:50:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 136397' 00:23:35.681 killing process with pid 136397 00:23:35.681 18:50:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 136397 00:23:35.681 [2024-07-25 18:50:36.071628] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:23:35.681 18:50:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 136397 00:23:35.938 [2024-07-25 18:50:36.434306] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:37.831 18:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:23:37.831 18:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.qtGZxZ21FK 00:23:37.831 18:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:23:37.831 ************************************ 00:23:37.831 END TEST raid_read_error_test 00:23:37.831 ************************************ 00:23:37.831 18:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:23:37.831 18:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:23:37.831 18:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:37.831 18:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:37.831 18:50:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:23:37.831 00:23:37.831 real 0m8.903s 00:23:37.831 user 0m12.699s 00:23:37.831 sys 0m1.420s 00:23:37.831 18:50:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:37.831 18:50:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.831 18:50:38 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:23:37.831 18:50:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:37.831 18:50:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:37.831 18:50:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:37.831 ************************************ 00:23:37.831 START TEST raid_write_error_test 00:23:37.831 ************************************ 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev1 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev2 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev3 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 
00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev4 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.ScVrvPuH58 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=136619 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 136619 /var/tmp/spdk-raid.sock 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 136619 ']' 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:37.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:37.831 18:50:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.831 [2024-07-25 18:50:38.178723] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:37.831 [2024-07-25 18:50:38.179226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136619 ] 00:23:37.831 [2024-07-25 18:50:38.366316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.088 [2024-07-25 18:50:38.619376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.346 [2024-07-25 18:50:38.894247] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:38.601 18:50:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:38.601 18:50:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:23:38.601 18:50:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:23:38.601 18:50:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:38.858 BaseBdev1_malloc 00:23:38.858 18:50:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:39.115 true 00:23:39.115 18:50:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:39.372 [2024-07-25 18:50:39.791983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:39.372 [2024-07-25 18:50:39.792241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.372 [2024-07-25 18:50:39.792312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:23:39.372 [2024-07-25 18:50:39.792416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.372 [2024-07-25 18:50:39.795130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.372 [2024-07-25 18:50:39.795288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:39.372 BaseBdev1 00:23:39.372 18:50:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:23:39.372 18:50:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:39.629 BaseBdev2_malloc 00:23:39.629 18:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:39.886 true 00:23:39.886 18:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:40.143 [2024-07-25 18:50:40.475284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:40.143 [2024-07-25 18:50:40.475596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:40.143 [2024-07-25 18:50:40.475678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:40.143 [2024-07-25 
18:50:40.475933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:40.143 [2024-07-25 18:50:40.478683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:40.144 [2024-07-25 18:50:40.478857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:40.144 BaseBdev2 00:23:40.144 18:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:23:40.144 18:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:40.144 BaseBdev3_malloc 00:23:40.401 18:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:40.401 true 00:23:40.401 18:50:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:40.659 [2024-07-25 18:50:41.143876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:40.659 [2024-07-25 18:50:41.144215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:40.659 [2024-07-25 18:50:41.144294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:23:40.659 [2024-07-25 18:50:41.144411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:40.659 [2024-07-25 18:50:41.147143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:40.659 [2024-07-25 18:50:41.147328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:40.659 BaseBdev3 00:23:40.659 18:50:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:23:40.659 18:50:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:40.916 BaseBdev4_malloc 00:23:40.916 18:50:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:23:41.174 true 00:23:41.174 18:50:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:41.432 [2024-07-25 18:50:41.763182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:41.432 [2024-07-25 18:50:41.763451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:41.432 [2024-07-25 18:50:41.763552] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:23:41.432 [2024-07-25 18:50:41.763754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:41.432 [2024-07-25 18:50:41.766505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:41.432 [2024-07-25 18:50:41.766667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:41.432 BaseBdev4 00:23:41.432 18:50:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:23:41.432 [2024-07-25 18:50:41.991422] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:41.432 [2024-07-25 18:50:41.993447] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:41.432 [2024-07-25 18:50:41.993641] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:41.432 [2024-07-25 18:50:41.993728] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:41.432 [2024-07-25 18:50:41.994037] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013800 00:23:41.432 [2024-07-25 18:50:41.994141] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:41.432 [2024-07-25 18:50:41.994282] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:41.432 [2024-07-25 18:50:41.994683] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013800 00:23:41.432 [2024-07-25 18:50:41.994788] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013800 00:23:41.433 [2024-07-25 18:50:41.995023] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:41.691 18:50:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:41.691 18:50:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:41.691 18:50:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:41.691 18:50:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:41.691 18:50:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:41.691 18:50:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:41.691 18:50:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:41.691 18:50:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:41.691 18:50:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:41.691 18:50:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:41.691 18:50:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.691 18:50:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.691 18:50:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:41.691 "name": "raid_bdev1", 00:23:41.692 "uuid": "04e3e4f5-9742-4126-91b9-f44ac5d0ab8e", 00:23:41.692 "strip_size_kb": 64, 00:23:41.692 "state": "online", 00:23:41.692 "raid_level": "raid0", 00:23:41.692 "superblock": true, 00:23:41.692 "num_base_bdevs": 4, 00:23:41.692 "num_base_bdevs_discovered": 4, 00:23:41.692 "num_base_bdevs_operational": 4, 00:23:41.692 "base_bdevs_list": [ 00:23:41.692 { 00:23:41.692 "name": "BaseBdev1", 00:23:41.692 "uuid": "539ba6c7-debf-5194-9a3b-a7278a3aa482", 00:23:41.692 "is_configured": true, 00:23:41.692 "data_offset": 2048, 00:23:41.692 "data_size": 63488 00:23:41.692 }, 00:23:41.692 { 
00:23:41.692 "name": "BaseBdev2", 00:23:41.692 "uuid": "baaeeed3-b4d1-5ca1-8a0d-11c3af76e36f", 00:23:41.692 "is_configured": true, 00:23:41.692 "data_offset": 2048, 00:23:41.692 "data_size": 63488 00:23:41.692 }, 00:23:41.692 { 00:23:41.692 "name": "BaseBdev3", 00:23:41.692 "uuid": "d33ffaf1-6219-53ea-a641-a4e13b2df4e7", 00:23:41.692 "is_configured": true, 00:23:41.692 "data_offset": 2048, 00:23:41.692 "data_size": 63488 00:23:41.692 }, 00:23:41.692 { 00:23:41.692 "name": "BaseBdev4", 00:23:41.692 "uuid": "72dc1407-b785-5ec3-9b00-2e7a1a41855f", 00:23:41.692 "is_configured": true, 00:23:41.692 "data_offset": 2048, 00:23:41.692 "data_size": 63488 00:23:41.692 } 00:23:41.692 ] 00:23:41.692 }' 00:23:41.692 18:50:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:41.692 18:50:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.281 18:50:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:23:42.281 18:50:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:42.539 [2024-07-25 18:50:42.857529] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:43.474 18:50:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:23:43.474 18:50:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:23:43.474 18:50:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:23:43.474 18:50:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:23:43.474 18:50:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:43.474 18:50:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:43.474 18:50:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:43.474 18:50:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:43.474 18:50:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:43.474 18:50:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:43.474 18:50:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:43.474 18:50:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:43.474 18:50:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:43.474 18:50:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:43.474 18:50:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.474 18:50:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.733 18:50:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:43.733 "name": "raid_bdev1", 00:23:43.733 "uuid": "04e3e4f5-9742-4126-91b9-f44ac5d0ab8e", 00:23:43.733 "strip_size_kb": 64, 00:23:43.733 "state": "online", 00:23:43.733 
"raid_level": "raid0", 00:23:43.733 "superblock": true, 00:23:43.733 "num_base_bdevs": 4, 00:23:43.733 "num_base_bdevs_discovered": 4, 00:23:43.733 "num_base_bdevs_operational": 4, 00:23:43.733 "base_bdevs_list": [ 00:23:43.733 { 00:23:43.733 "name": "BaseBdev1", 00:23:43.733 "uuid": "539ba6c7-debf-5194-9a3b-a7278a3aa482", 00:23:43.733 "is_configured": true, 00:23:43.733 "data_offset": 2048, 00:23:43.733 "data_size": 63488 00:23:43.733 }, 00:23:43.733 { 00:23:43.733 "name": "BaseBdev2", 00:23:43.733 "uuid": "baaeeed3-b4d1-5ca1-8a0d-11c3af76e36f", 00:23:43.733 "is_configured": true, 00:23:43.733 "data_offset": 2048, 00:23:43.733 "data_size": 63488 00:23:43.733 }, 00:23:43.733 { 00:23:43.733 "name": "BaseBdev3", 00:23:43.733 "uuid": "d33ffaf1-6219-53ea-a641-a4e13b2df4e7", 00:23:43.733 "is_configured": true, 00:23:43.733 "data_offset": 2048, 00:23:43.733 "data_size": 63488 00:23:43.733 }, 00:23:43.733 { 00:23:43.733 "name": "BaseBdev4", 00:23:43.733 "uuid": "72dc1407-b785-5ec3-9b00-2e7a1a41855f", 00:23:43.733 "is_configured": true, 00:23:43.733 "data_offset": 2048, 00:23:43.733 "data_size": 63488 00:23:43.733 } 00:23:43.733 ] 00:23:43.733 }' 00:23:43.733 18:50:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:43.733 18:50:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.301 18:50:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:44.560 [2024-07-25 18:50:44.953952] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:44.560 [2024-07-25 18:50:44.954261] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:44.560 [2024-07-25 18:50:44.957130] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:44.560 [2024-07-25 18:50:44.957359] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:44.560 [2024-07-25 18:50:44.957457] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:44.560 [2024-07-25 18:50:44.957541] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013800 name raid_bdev1, state offline 00:23:44.560 0 00:23:44.560 18:50:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 136619 00:23:44.560 18:50:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 136619 ']' 00:23:44.560 18:50:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 136619 00:23:44.560 18:50:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:23:44.560 18:50:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:44.560 18:50:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 136619 00:23:44.560 18:50:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:44.560 18:50:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:44.560 18:50:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 136619' 00:23:44.560 killing process with pid 136619 00:23:44.560 18:50:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 136619 00:23:44.560 18:50:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 136619 00:23:44.560 [2024-07-25 18:50:45.009426] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:44.819 [2024-07-25 18:50:45.376575] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:46.723 18:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.ScVrvPuH58 00:23:46.723 18:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:23:46.723 18:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:23:46.723 18:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.48 00:23:46.723 18:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:23:46.723 18:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:46.723 18:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:46.723 18:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.48 != \0\.\0\0 ]] 00:23:46.723 00:23:46.723 real 0m8.871s 00:23:46.723 user 0m12.849s 00:23:46.723 sys 0m1.306s 00:23:46.723 18:50:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:46.723 18:50:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.723 ************************************ 00:23:46.723 END TEST raid_write_error_test 00:23:46.723 ************************************ 00:23:46.723 18:50:46 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:23:46.723 18:50:46 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:23:46.723 18:50:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:46.723 18:50:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:46.723 18:50:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:46.723 ************************************ 00:23:46.723 START TEST raid_state_function_test 00:23:46.723 ************************************ 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=136834 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 136834' 00:23:46.723 Process raid pid: 136834 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 136834 /var/tmp/spdk-raid.sock 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 136834 ']' 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:46.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:46.723 18:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.723 [2024-07-25 18:50:47.088656] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
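The raid_state_function_test above runs against a bare bdev_svc app rather than bdevperf: Existed_Raid is created while its base bdevs do not exist yet, which leaves it in the "configuring" state, and the test then adds malloc bdevs and re-reads the raid state until all four are discovered and the array goes online (the full trace also deletes and recreates the array between steps via bdev_raid_delete). A condensed sketch of the idea being exercised, with the rpc.py path and arguments taken verbatim from the log:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        $RPC bdev_malloc_create 32 512 -b "$b"                    # 65536 blocks of 512 B each
        $RPC bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "Existed_Raid")'       # "state" stays "configuring" until the 4th base bdev appears
    done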
00:23:46.723 [2024-07-25 18:50:47.089049] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.723 [2024-07-25 18:50:47.253616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.982 [2024-07-25 18:50:47.459891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.241 [2024-07-25 18:50:47.654069] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:47.499 18:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:47.499 18:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:23:47.499 18:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:47.757 [2024-07-25 18:50:48.203334] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:47.757 [2024-07-25 18:50:48.203644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:47.758 [2024-07-25 18:50:48.203743] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:47.758 [2024-07-25 18:50:48.203801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:47.758 [2024-07-25 18:50:48.203880] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:47.758 [2024-07-25 18:50:48.203927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:47.758 [2024-07-25 18:50:48.203953] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:47.758 [2024-07-25 18:50:48.204043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:47.758 18:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:47.758 18:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:47.758 18:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:47.758 18:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:47.758 18:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:47.758 18:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:47.758 18:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:47.758 18:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:47.758 18:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:47.758 18:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:47.758 18:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.758 18:50:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:48.016 18:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:48.016 "name": "Existed_Raid", 00:23:48.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.016 "strip_size_kb": 64, 00:23:48.016 "state": "configuring", 00:23:48.016 "raid_level": "concat", 00:23:48.016 "superblock": false, 00:23:48.016 "num_base_bdevs": 4, 00:23:48.016 "num_base_bdevs_discovered": 0, 00:23:48.016 "num_base_bdevs_operational": 4, 00:23:48.016 "base_bdevs_list": [ 00:23:48.016 { 00:23:48.016 "name": "BaseBdev1", 00:23:48.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.016 "is_configured": false, 00:23:48.016 "data_offset": 0, 00:23:48.016 "data_size": 0 00:23:48.016 }, 00:23:48.016 { 00:23:48.016 "name": "BaseBdev2", 00:23:48.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.016 "is_configured": false, 00:23:48.016 "data_offset": 0, 00:23:48.016 "data_size": 0 00:23:48.016 }, 00:23:48.016 { 00:23:48.016 "name": "BaseBdev3", 00:23:48.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.016 "is_configured": false, 00:23:48.016 "data_offset": 0, 00:23:48.016 "data_size": 0 00:23:48.016 }, 00:23:48.016 { 00:23:48.016 "name": "BaseBdev4", 00:23:48.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.016 "is_configured": false, 00:23:48.016 "data_offset": 0, 00:23:48.016 "data_size": 0 00:23:48.016 } 00:23:48.016 ] 00:23:48.016 }' 00:23:48.016 18:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:48.016 18:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.583 18:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:48.841 [2024-07-25 18:50:49.275413] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:48.841 [2024-07-25 18:50:49.275610] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:23:48.841 18:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:49.100 [2024-07-25 18:50:49.459459] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:49.100 [2024-07-25 18:50:49.459715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:49.100 [2024-07-25 18:50:49.459847] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:49.100 [2024-07-25 18:50:49.459930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:49.100 [2024-07-25 18:50:49.460011] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:49.100 [2024-07-25 18:50:49.460079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:49.100 [2024-07-25 18:50:49.460106] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:49.100 [2024-07-25 18:50:49.460192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:49.100 18:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:49.359 [2024-07-25 18:50:49.722276] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:49.359 BaseBdev1 00:23:49.359 18:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:49.359 18:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:23:49.359 18:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:49.359 18:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:49.359 18:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:49.359 18:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:49.359 18:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:49.359 18:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:49.637 [ 00:23:49.637 { 00:23:49.637 "name": "BaseBdev1", 00:23:49.637 "aliases": [ 00:23:49.637 "abb10bf1-1a70-4ca1-883f-3ca7c9b400fb" 00:23:49.637 ], 00:23:49.637 "product_name": "Malloc disk", 00:23:49.637 "block_size": 512, 00:23:49.637 "num_blocks": 65536, 00:23:49.637 "uuid": "abb10bf1-1a70-4ca1-883f-3ca7c9b400fb", 00:23:49.637 "assigned_rate_limits": { 00:23:49.637 "rw_ios_per_sec": 0, 00:23:49.637 "rw_mbytes_per_sec": 0, 00:23:49.637 "r_mbytes_per_sec": 0, 00:23:49.637 "w_mbytes_per_sec": 0 00:23:49.637 }, 00:23:49.637 "claimed": true, 00:23:49.637 "claim_type": "exclusive_write", 00:23:49.637 "zoned": false, 00:23:49.637 "supported_io_types": { 00:23:49.637 "read": true, 00:23:49.637 "write": true, 00:23:49.637 "unmap": true, 00:23:49.637 "flush": true, 00:23:49.637 "reset": true, 00:23:49.637 "nvme_admin": false, 00:23:49.637 "nvme_io": false, 00:23:49.637 "nvme_io_md": false, 00:23:49.637 "write_zeroes": true, 00:23:49.637 "zcopy": true, 00:23:49.637 "get_zone_info": false, 00:23:49.637 "zone_management": false, 00:23:49.637 "zone_append": false, 00:23:49.637 "compare": false, 00:23:49.637 "compare_and_write": false, 00:23:49.637 "abort": true, 00:23:49.637 "seek_hole": false, 00:23:49.637 "seek_data": false, 00:23:49.637 "copy": true, 00:23:49.637 "nvme_iov_md": false 00:23:49.637 }, 00:23:49.637 "memory_domains": [ 00:23:49.637 { 00:23:49.637 "dma_device_id": "system", 00:23:49.637 "dma_device_type": 1 00:23:49.637 }, 00:23:49.637 { 00:23:49.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:49.637 "dma_device_type": 2 00:23:49.637 } 00:23:49.637 ], 00:23:49.637 "driver_specific": {} 00:23:49.637 } 00:23:49.637 ] 00:23:49.637 18:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:49.637 18:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:49.637 18:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:49.637 18:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:49.637 18:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- 
# local raid_level=concat 00:23:49.637 18:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:49.637 18:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:49.637 18:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:49.637 18:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:49.637 18:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:49.637 18:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:49.637 18:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.637 18:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:49.949 18:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:49.949 "name": "Existed_Raid", 00:23:49.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.949 "strip_size_kb": 64, 00:23:49.949 "state": "configuring", 00:23:49.949 "raid_level": "concat", 00:23:49.949 "superblock": false, 00:23:49.949 "num_base_bdevs": 4, 00:23:49.949 "num_base_bdevs_discovered": 1, 00:23:49.949 "num_base_bdevs_operational": 4, 00:23:49.949 "base_bdevs_list": [ 00:23:49.949 { 00:23:49.949 "name": "BaseBdev1", 00:23:49.949 "uuid": "abb10bf1-1a70-4ca1-883f-3ca7c9b400fb", 00:23:49.949 "is_configured": true, 00:23:49.949 "data_offset": 0, 00:23:49.949 "data_size": 65536 00:23:49.949 }, 00:23:49.949 { 00:23:49.949 "name": "BaseBdev2", 00:23:49.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.949 "is_configured": false, 00:23:49.949 "data_offset": 0, 00:23:49.949 "data_size": 0 00:23:49.949 }, 00:23:49.949 { 00:23:49.949 "name": "BaseBdev3", 00:23:49.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.949 "is_configured": false, 00:23:49.949 "data_offset": 0, 00:23:49.949 "data_size": 0 00:23:49.949 }, 00:23:49.949 { 00:23:49.949 "name": "BaseBdev4", 00:23:49.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.949 "is_configured": false, 00:23:49.949 "data_offset": 0, 00:23:49.949 "data_size": 0 00:23:49.949 } 00:23:49.949 ] 00:23:49.949 }' 00:23:49.949 18:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:49.949 18:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.516 18:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:50.775 [2024-07-25 18:50:51.098731] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:50.775 [2024-07-25 18:50:51.098986] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:23:50.775 18:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:50.775 [2024-07-25 18:50:51.342797] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:50.775 [2024-07-25 18:50:51.345215] bdev.c:8190:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:23:50.775 [2024-07-25 18:50:51.345401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:50.775 [2024-07-25 18:50:51.345497] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:50.775 [2024-07-25 18:50:51.345555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:50.775 [2024-07-25 18:50:51.345583] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:50.775 [2024-07-25 18:50:51.345669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:51.033 18:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:51.033 18:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:51.033 18:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:51.033 18:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:51.033 18:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:51.033 18:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:51.033 18:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:51.033 18:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:51.033 18:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:51.033 18:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:51.033 18:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:51.033 18:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:51.033 18:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.033 18:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:51.033 18:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:51.033 "name": "Existed_Raid", 00:23:51.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.033 "strip_size_kb": 64, 00:23:51.033 "state": "configuring", 00:23:51.033 "raid_level": "concat", 00:23:51.033 "superblock": false, 00:23:51.033 "num_base_bdevs": 4, 00:23:51.033 "num_base_bdevs_discovered": 1, 00:23:51.033 "num_base_bdevs_operational": 4, 00:23:51.033 "base_bdevs_list": [ 00:23:51.033 { 00:23:51.033 "name": "BaseBdev1", 00:23:51.033 "uuid": "abb10bf1-1a70-4ca1-883f-3ca7c9b400fb", 00:23:51.033 "is_configured": true, 00:23:51.033 "data_offset": 0, 00:23:51.033 "data_size": 65536 00:23:51.033 }, 00:23:51.033 { 00:23:51.033 "name": "BaseBdev2", 00:23:51.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.033 "is_configured": false, 00:23:51.033 "data_offset": 0, 00:23:51.033 "data_size": 0 00:23:51.033 }, 00:23:51.033 { 00:23:51.033 "name": "BaseBdev3", 00:23:51.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.033 "is_configured": false, 00:23:51.033 "data_offset": 0, 00:23:51.033 "data_size": 0 
00:23:51.033 }, 00:23:51.033 { 00:23:51.033 "name": "BaseBdev4", 00:23:51.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.033 "is_configured": false, 00:23:51.033 "data_offset": 0, 00:23:51.033 "data_size": 0 00:23:51.033 } 00:23:51.033 ] 00:23:51.033 }' 00:23:51.033 18:50:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:51.033 18:50:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.968 18:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:51.968 [2024-07-25 18:50:52.485262] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:51.968 BaseBdev2 00:23:51.968 18:50:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:51.968 18:50:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:23:51.968 18:50:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:51.968 18:50:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:51.968 18:50:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:51.968 18:50:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:51.968 18:50:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:52.226 18:50:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:52.485 [ 00:23:52.485 { 00:23:52.485 "name": "BaseBdev2", 00:23:52.485 "aliases": [ 00:23:52.485 "ab767d5f-236f-4522-8ae6-f45e118dd05e" 00:23:52.485 ], 00:23:52.485 "product_name": "Malloc disk", 00:23:52.485 "block_size": 512, 00:23:52.485 "num_blocks": 65536, 00:23:52.485 "uuid": "ab767d5f-236f-4522-8ae6-f45e118dd05e", 00:23:52.485 "assigned_rate_limits": { 00:23:52.485 "rw_ios_per_sec": 0, 00:23:52.485 "rw_mbytes_per_sec": 0, 00:23:52.485 "r_mbytes_per_sec": 0, 00:23:52.485 "w_mbytes_per_sec": 0 00:23:52.485 }, 00:23:52.485 "claimed": true, 00:23:52.485 "claim_type": "exclusive_write", 00:23:52.485 "zoned": false, 00:23:52.485 "supported_io_types": { 00:23:52.485 "read": true, 00:23:52.485 "write": true, 00:23:52.485 "unmap": true, 00:23:52.485 "flush": true, 00:23:52.485 "reset": true, 00:23:52.485 "nvme_admin": false, 00:23:52.485 "nvme_io": false, 00:23:52.485 "nvme_io_md": false, 00:23:52.485 "write_zeroes": true, 00:23:52.485 "zcopy": true, 00:23:52.485 "get_zone_info": false, 00:23:52.485 "zone_management": false, 00:23:52.485 "zone_append": false, 00:23:52.485 "compare": false, 00:23:52.485 "compare_and_write": false, 00:23:52.485 "abort": true, 00:23:52.485 "seek_hole": false, 00:23:52.485 "seek_data": false, 00:23:52.485 "copy": true, 00:23:52.485 "nvme_iov_md": false 00:23:52.485 }, 00:23:52.485 "memory_domains": [ 00:23:52.485 { 00:23:52.485 "dma_device_id": "system", 00:23:52.485 "dma_device_type": 1 00:23:52.485 }, 00:23:52.485 { 00:23:52.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.485 "dma_device_type": 2 00:23:52.485 } 00:23:52.485 ], 00:23:52.485 "driver_specific": {} 00:23:52.485 } 00:23:52.485 ] 00:23:52.485 
18:50:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:52.485 18:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:52.485 18:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:52.485 18:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:52.485 18:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:52.485 18:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:52.485 18:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:52.485 18:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:52.485 18:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:52.485 18:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:52.485 18:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:52.485 18:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:52.485 18:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:52.485 18:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.485 18:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:52.744 18:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:52.744 "name": "Existed_Raid", 00:23:52.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.744 "strip_size_kb": 64, 00:23:52.744 "state": "configuring", 00:23:52.744 "raid_level": "concat", 00:23:52.744 "superblock": false, 00:23:52.744 "num_base_bdevs": 4, 00:23:52.744 "num_base_bdevs_discovered": 2, 00:23:52.744 "num_base_bdevs_operational": 4, 00:23:52.744 "base_bdevs_list": [ 00:23:52.744 { 00:23:52.744 "name": "BaseBdev1", 00:23:52.744 "uuid": "abb10bf1-1a70-4ca1-883f-3ca7c9b400fb", 00:23:52.744 "is_configured": true, 00:23:52.744 "data_offset": 0, 00:23:52.744 "data_size": 65536 00:23:52.744 }, 00:23:52.744 { 00:23:52.744 "name": "BaseBdev2", 00:23:52.744 "uuid": "ab767d5f-236f-4522-8ae6-f45e118dd05e", 00:23:52.744 "is_configured": true, 00:23:52.744 "data_offset": 0, 00:23:52.744 "data_size": 65536 00:23:52.744 }, 00:23:52.744 { 00:23:52.744 "name": "BaseBdev3", 00:23:52.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.744 "is_configured": false, 00:23:52.744 "data_offset": 0, 00:23:52.744 "data_size": 0 00:23:52.744 }, 00:23:52.744 { 00:23:52.744 "name": "BaseBdev4", 00:23:52.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.744 "is_configured": false, 00:23:52.744 "data_offset": 0, 00:23:52.744 "data_size": 0 00:23:52.744 } 00:23:52.744 ] 00:23:52.744 }' 00:23:52.744 18:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:52.744 18:50:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.311 18:50:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:53.570 [2024-07-25 18:50:54.134116] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:53.570 BaseBdev3 00:23:53.828 18:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:23:53.828 18:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:23:53.828 18:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:53.828 18:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:53.828 18:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:53.828 18:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:53.828 18:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:54.087 18:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:54.087 [ 00:23:54.087 { 00:23:54.087 "name": "BaseBdev3", 00:23:54.087 "aliases": [ 00:23:54.087 "f43113fe-b11a-4c2c-90d7-e2943ca6b3bf" 00:23:54.087 ], 00:23:54.087 "product_name": "Malloc disk", 00:23:54.087 "block_size": 512, 00:23:54.087 "num_blocks": 65536, 00:23:54.087 "uuid": "f43113fe-b11a-4c2c-90d7-e2943ca6b3bf", 00:23:54.087 "assigned_rate_limits": { 00:23:54.087 "rw_ios_per_sec": 0, 00:23:54.087 "rw_mbytes_per_sec": 0, 00:23:54.087 "r_mbytes_per_sec": 0, 00:23:54.087 "w_mbytes_per_sec": 0 00:23:54.087 }, 00:23:54.087 "claimed": true, 00:23:54.087 "claim_type": "exclusive_write", 00:23:54.087 "zoned": false, 00:23:54.087 "supported_io_types": { 00:23:54.087 "read": true, 00:23:54.087 "write": true, 00:23:54.087 "unmap": true, 00:23:54.087 "flush": true, 00:23:54.087 "reset": true, 00:23:54.087 "nvme_admin": false, 00:23:54.087 "nvme_io": false, 00:23:54.087 "nvme_io_md": false, 00:23:54.087 "write_zeroes": true, 00:23:54.087 "zcopy": true, 00:23:54.087 "get_zone_info": false, 00:23:54.087 "zone_management": false, 00:23:54.087 "zone_append": false, 00:23:54.087 "compare": false, 00:23:54.087 "compare_and_write": false, 00:23:54.087 "abort": true, 00:23:54.087 "seek_hole": false, 00:23:54.087 "seek_data": false, 00:23:54.087 "copy": true, 00:23:54.087 "nvme_iov_md": false 00:23:54.087 }, 00:23:54.087 "memory_domains": [ 00:23:54.087 { 00:23:54.087 "dma_device_id": "system", 00:23:54.087 "dma_device_type": 1 00:23:54.087 }, 00:23:54.087 { 00:23:54.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.087 "dma_device_type": 2 00:23:54.087 } 00:23:54.087 ], 00:23:54.087 "driver_specific": {} 00:23:54.087 } 00:23:54.087 ] 00:23:54.087 18:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:54.088 18:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:54.088 18:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:54.088 18:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:54.088 18:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:54.088 18:50:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:54.088 18:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:54.088 18:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:54.088 18:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:54.088 18:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:54.088 18:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:54.088 18:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:54.088 18:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:54.088 18:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.088 18:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:54.347 18:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:54.347 "name": "Existed_Raid", 00:23:54.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.347 "strip_size_kb": 64, 00:23:54.347 "state": "configuring", 00:23:54.347 "raid_level": "concat", 00:23:54.347 "superblock": false, 00:23:54.347 "num_base_bdevs": 4, 00:23:54.347 "num_base_bdevs_discovered": 3, 00:23:54.347 "num_base_bdevs_operational": 4, 00:23:54.347 "base_bdevs_list": [ 00:23:54.347 { 00:23:54.347 "name": "BaseBdev1", 00:23:54.347 "uuid": "abb10bf1-1a70-4ca1-883f-3ca7c9b400fb", 00:23:54.347 "is_configured": true, 00:23:54.347 "data_offset": 0, 00:23:54.347 "data_size": 65536 00:23:54.347 }, 00:23:54.347 { 00:23:54.347 "name": "BaseBdev2", 00:23:54.347 "uuid": "ab767d5f-236f-4522-8ae6-f45e118dd05e", 00:23:54.347 "is_configured": true, 00:23:54.347 "data_offset": 0, 00:23:54.347 "data_size": 65536 00:23:54.347 }, 00:23:54.347 { 00:23:54.347 "name": "BaseBdev3", 00:23:54.347 "uuid": "f43113fe-b11a-4c2c-90d7-e2943ca6b3bf", 00:23:54.347 "is_configured": true, 00:23:54.347 "data_offset": 0, 00:23:54.347 "data_size": 65536 00:23:54.347 }, 00:23:54.347 { 00:23:54.347 "name": "BaseBdev4", 00:23:54.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.347 "is_configured": false, 00:23:54.347 "data_offset": 0, 00:23:54.347 "data_size": 0 00:23:54.347 } 00:23:54.347 ] 00:23:54.347 }' 00:23:54.347 18:50:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:54.347 18:50:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:54.915 18:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:55.173 [2024-07-25 18:50:55.574812] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:55.173 [2024-07-25 18:50:55.575112] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:23:55.173 [2024-07-25 18:50:55.575152] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:23:55.173 [2024-07-25 18:50:55.575380] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:23:55.173 [2024-07-25 
18:50:55.575853] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:23:55.173 [2024-07-25 18:50:55.575965] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:23:55.173 [2024-07-25 18:50:55.576308] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:55.173 BaseBdev4 00:23:55.173 18:50:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:23:55.173 18:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:23:55.173 18:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:55.173 18:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:55.173 18:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:55.173 18:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:55.173 18:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:55.432 18:50:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:55.691 [ 00:23:55.691 { 00:23:55.691 "name": "BaseBdev4", 00:23:55.691 "aliases": [ 00:23:55.691 "e78993c9-1202-4b1b-a484-1e1a5916d9ef" 00:23:55.691 ], 00:23:55.691 "product_name": "Malloc disk", 00:23:55.691 "block_size": 512, 00:23:55.691 "num_blocks": 65536, 00:23:55.691 "uuid": "e78993c9-1202-4b1b-a484-1e1a5916d9ef", 00:23:55.691 "assigned_rate_limits": { 00:23:55.691 "rw_ios_per_sec": 0, 00:23:55.691 "rw_mbytes_per_sec": 0, 00:23:55.691 "r_mbytes_per_sec": 0, 00:23:55.691 "w_mbytes_per_sec": 0 00:23:55.691 }, 00:23:55.691 "claimed": true, 00:23:55.691 "claim_type": "exclusive_write", 00:23:55.691 "zoned": false, 00:23:55.691 "supported_io_types": { 00:23:55.691 "read": true, 00:23:55.691 "write": true, 00:23:55.691 "unmap": true, 00:23:55.691 "flush": true, 00:23:55.691 "reset": true, 00:23:55.691 "nvme_admin": false, 00:23:55.691 "nvme_io": false, 00:23:55.691 "nvme_io_md": false, 00:23:55.691 "write_zeroes": true, 00:23:55.691 "zcopy": true, 00:23:55.691 "get_zone_info": false, 00:23:55.691 "zone_management": false, 00:23:55.691 "zone_append": false, 00:23:55.691 "compare": false, 00:23:55.691 "compare_and_write": false, 00:23:55.691 "abort": true, 00:23:55.691 "seek_hole": false, 00:23:55.691 "seek_data": false, 00:23:55.691 "copy": true, 00:23:55.691 "nvme_iov_md": false 00:23:55.691 }, 00:23:55.691 "memory_domains": [ 00:23:55.691 { 00:23:55.691 "dma_device_id": "system", 00:23:55.691 "dma_device_type": 1 00:23:55.691 }, 00:23:55.691 { 00:23:55.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:55.691 "dma_device_type": 2 00:23:55.691 } 00:23:55.691 ], 00:23:55.691 "driver_specific": {} 00:23:55.691 } 00:23:55.691 ] 00:23:55.691 18:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:55.691 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:55.691 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:55.691 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid 
online concat 64 4 00:23:55.691 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:55.691 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:55.691 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:55.691 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:55.691 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:55.691 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:55.691 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:55.691 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:55.691 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:55.691 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.691 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:55.691 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:55.691 "name": "Existed_Raid", 00:23:55.692 "uuid": "06fae219-5814-46de-99f3-d502a7087ce4", 00:23:55.692 "strip_size_kb": 64, 00:23:55.692 "state": "online", 00:23:55.692 "raid_level": "concat", 00:23:55.692 "superblock": false, 00:23:55.692 "num_base_bdevs": 4, 00:23:55.692 "num_base_bdevs_discovered": 4, 00:23:55.692 "num_base_bdevs_operational": 4, 00:23:55.692 "base_bdevs_list": [ 00:23:55.692 { 00:23:55.692 "name": "BaseBdev1", 00:23:55.692 "uuid": "abb10bf1-1a70-4ca1-883f-3ca7c9b400fb", 00:23:55.692 "is_configured": true, 00:23:55.692 "data_offset": 0, 00:23:55.692 "data_size": 65536 00:23:55.692 }, 00:23:55.692 { 00:23:55.692 "name": "BaseBdev2", 00:23:55.692 "uuid": "ab767d5f-236f-4522-8ae6-f45e118dd05e", 00:23:55.692 "is_configured": true, 00:23:55.692 "data_offset": 0, 00:23:55.692 "data_size": 65536 00:23:55.692 }, 00:23:55.692 { 00:23:55.692 "name": "BaseBdev3", 00:23:55.692 "uuid": "f43113fe-b11a-4c2c-90d7-e2943ca6b3bf", 00:23:55.692 "is_configured": true, 00:23:55.692 "data_offset": 0, 00:23:55.692 "data_size": 65536 00:23:55.692 }, 00:23:55.692 { 00:23:55.692 "name": "BaseBdev4", 00:23:55.692 "uuid": "e78993c9-1202-4b1b-a484-1e1a5916d9ef", 00:23:55.692 "is_configured": true, 00:23:55.692 "data_offset": 0, 00:23:55.692 "data_size": 65536 00:23:55.692 } 00:23:55.692 ] 00:23:55.692 }' 00:23:55.692 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:55.692 18:50:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.260 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:56.260 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:56.260 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:56.260 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:56.260 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 
00:23:56.260 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:56.260 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:56.260 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:56.519 [2024-07-25 18:50:56.943381] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:56.519 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:56.519 "name": "Existed_Raid", 00:23:56.519 "aliases": [ 00:23:56.519 "06fae219-5814-46de-99f3-d502a7087ce4" 00:23:56.519 ], 00:23:56.519 "product_name": "Raid Volume", 00:23:56.519 "block_size": 512, 00:23:56.519 "num_blocks": 262144, 00:23:56.519 "uuid": "06fae219-5814-46de-99f3-d502a7087ce4", 00:23:56.519 "assigned_rate_limits": { 00:23:56.519 "rw_ios_per_sec": 0, 00:23:56.519 "rw_mbytes_per_sec": 0, 00:23:56.519 "r_mbytes_per_sec": 0, 00:23:56.519 "w_mbytes_per_sec": 0 00:23:56.519 }, 00:23:56.519 "claimed": false, 00:23:56.519 "zoned": false, 00:23:56.519 "supported_io_types": { 00:23:56.519 "read": true, 00:23:56.519 "write": true, 00:23:56.519 "unmap": true, 00:23:56.519 "flush": true, 00:23:56.519 "reset": true, 00:23:56.519 "nvme_admin": false, 00:23:56.519 "nvme_io": false, 00:23:56.519 "nvme_io_md": false, 00:23:56.519 "write_zeroes": true, 00:23:56.519 "zcopy": false, 00:23:56.519 "get_zone_info": false, 00:23:56.519 "zone_management": false, 00:23:56.519 "zone_append": false, 00:23:56.519 "compare": false, 00:23:56.519 "compare_and_write": false, 00:23:56.519 "abort": false, 00:23:56.519 "seek_hole": false, 00:23:56.519 "seek_data": false, 00:23:56.519 "copy": false, 00:23:56.519 "nvme_iov_md": false 00:23:56.519 }, 00:23:56.519 "memory_domains": [ 00:23:56.519 { 00:23:56.519 "dma_device_id": "system", 00:23:56.519 "dma_device_type": 1 00:23:56.519 }, 00:23:56.519 { 00:23:56.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.519 "dma_device_type": 2 00:23:56.520 }, 00:23:56.520 { 00:23:56.520 "dma_device_id": "system", 00:23:56.520 "dma_device_type": 1 00:23:56.520 }, 00:23:56.520 { 00:23:56.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.520 "dma_device_type": 2 00:23:56.520 }, 00:23:56.520 { 00:23:56.520 "dma_device_id": "system", 00:23:56.520 "dma_device_type": 1 00:23:56.520 }, 00:23:56.520 { 00:23:56.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.520 "dma_device_type": 2 00:23:56.520 }, 00:23:56.520 { 00:23:56.520 "dma_device_id": "system", 00:23:56.520 "dma_device_type": 1 00:23:56.520 }, 00:23:56.520 { 00:23:56.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.520 "dma_device_type": 2 00:23:56.520 } 00:23:56.520 ], 00:23:56.520 "driver_specific": { 00:23:56.520 "raid": { 00:23:56.520 "uuid": "06fae219-5814-46de-99f3-d502a7087ce4", 00:23:56.520 "strip_size_kb": 64, 00:23:56.520 "state": "online", 00:23:56.520 "raid_level": "concat", 00:23:56.520 "superblock": false, 00:23:56.520 "num_base_bdevs": 4, 00:23:56.520 "num_base_bdevs_discovered": 4, 00:23:56.520 "num_base_bdevs_operational": 4, 00:23:56.520 "base_bdevs_list": [ 00:23:56.520 { 00:23:56.520 "name": "BaseBdev1", 00:23:56.520 "uuid": "abb10bf1-1a70-4ca1-883f-3ca7c9b400fb", 00:23:56.520 "is_configured": true, 00:23:56.520 "data_offset": 0, 00:23:56.520 "data_size": 65536 00:23:56.520 }, 00:23:56.520 { 00:23:56.520 "name": "BaseBdev2", 00:23:56.520 "uuid": 
"ab767d5f-236f-4522-8ae6-f45e118dd05e", 00:23:56.520 "is_configured": true, 00:23:56.520 "data_offset": 0, 00:23:56.520 "data_size": 65536 00:23:56.520 }, 00:23:56.520 { 00:23:56.520 "name": "BaseBdev3", 00:23:56.520 "uuid": "f43113fe-b11a-4c2c-90d7-e2943ca6b3bf", 00:23:56.520 "is_configured": true, 00:23:56.520 "data_offset": 0, 00:23:56.520 "data_size": 65536 00:23:56.520 }, 00:23:56.520 { 00:23:56.520 "name": "BaseBdev4", 00:23:56.520 "uuid": "e78993c9-1202-4b1b-a484-1e1a5916d9ef", 00:23:56.520 "is_configured": true, 00:23:56.520 "data_offset": 0, 00:23:56.520 "data_size": 65536 00:23:56.520 } 00:23:56.520 ] 00:23:56.520 } 00:23:56.520 } 00:23:56.520 }' 00:23:56.520 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:56.520 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:56.520 BaseBdev2 00:23:56.520 BaseBdev3 00:23:56.520 BaseBdev4' 00:23:56.520 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:56.520 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:56.520 18:50:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:56.779 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:56.779 "name": "BaseBdev1", 00:23:56.779 "aliases": [ 00:23:56.779 "abb10bf1-1a70-4ca1-883f-3ca7c9b400fb" 00:23:56.779 ], 00:23:56.779 "product_name": "Malloc disk", 00:23:56.779 "block_size": 512, 00:23:56.779 "num_blocks": 65536, 00:23:56.779 "uuid": "abb10bf1-1a70-4ca1-883f-3ca7c9b400fb", 00:23:56.779 "assigned_rate_limits": { 00:23:56.779 "rw_ios_per_sec": 0, 00:23:56.779 "rw_mbytes_per_sec": 0, 00:23:56.779 "r_mbytes_per_sec": 0, 00:23:56.779 "w_mbytes_per_sec": 0 00:23:56.779 }, 00:23:56.779 "claimed": true, 00:23:56.779 "claim_type": "exclusive_write", 00:23:56.779 "zoned": false, 00:23:56.779 "supported_io_types": { 00:23:56.779 "read": true, 00:23:56.779 "write": true, 00:23:56.779 "unmap": true, 00:23:56.779 "flush": true, 00:23:56.779 "reset": true, 00:23:56.779 "nvme_admin": false, 00:23:56.779 "nvme_io": false, 00:23:56.779 "nvme_io_md": false, 00:23:56.779 "write_zeroes": true, 00:23:56.779 "zcopy": true, 00:23:56.779 "get_zone_info": false, 00:23:56.779 "zone_management": false, 00:23:56.779 "zone_append": false, 00:23:56.779 "compare": false, 00:23:56.779 "compare_and_write": false, 00:23:56.779 "abort": true, 00:23:56.779 "seek_hole": false, 00:23:56.779 "seek_data": false, 00:23:56.779 "copy": true, 00:23:56.779 "nvme_iov_md": false 00:23:56.779 }, 00:23:56.779 "memory_domains": [ 00:23:56.779 { 00:23:56.779 "dma_device_id": "system", 00:23:56.779 "dma_device_type": 1 00:23:56.779 }, 00:23:56.779 { 00:23:56.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.779 "dma_device_type": 2 00:23:56.779 } 00:23:56.779 ], 00:23:56.779 "driver_specific": {} 00:23:56.779 }' 00:23:56.779 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:56.779 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:56.779 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:56.779 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:23:56.779 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:56.779 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:56.779 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.038 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.038 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:57.039 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:57.039 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:57.039 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:57.039 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:57.039 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:57.039 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:57.298 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:57.298 "name": "BaseBdev2", 00:23:57.298 "aliases": [ 00:23:57.298 "ab767d5f-236f-4522-8ae6-f45e118dd05e" 00:23:57.298 ], 00:23:57.298 "product_name": "Malloc disk", 00:23:57.298 "block_size": 512, 00:23:57.298 "num_blocks": 65536, 00:23:57.298 "uuid": "ab767d5f-236f-4522-8ae6-f45e118dd05e", 00:23:57.298 "assigned_rate_limits": { 00:23:57.298 "rw_ios_per_sec": 0, 00:23:57.298 "rw_mbytes_per_sec": 0, 00:23:57.298 "r_mbytes_per_sec": 0, 00:23:57.298 "w_mbytes_per_sec": 0 00:23:57.298 }, 00:23:57.298 "claimed": true, 00:23:57.298 "claim_type": "exclusive_write", 00:23:57.298 "zoned": false, 00:23:57.298 "supported_io_types": { 00:23:57.298 "read": true, 00:23:57.298 "write": true, 00:23:57.298 "unmap": true, 00:23:57.298 "flush": true, 00:23:57.298 "reset": true, 00:23:57.298 "nvme_admin": false, 00:23:57.298 "nvme_io": false, 00:23:57.298 "nvme_io_md": false, 00:23:57.298 "write_zeroes": true, 00:23:57.298 "zcopy": true, 00:23:57.298 "get_zone_info": false, 00:23:57.298 "zone_management": false, 00:23:57.298 "zone_append": false, 00:23:57.298 "compare": false, 00:23:57.298 "compare_and_write": false, 00:23:57.298 "abort": true, 00:23:57.298 "seek_hole": false, 00:23:57.298 "seek_data": false, 00:23:57.298 "copy": true, 00:23:57.298 "nvme_iov_md": false 00:23:57.298 }, 00:23:57.298 "memory_domains": [ 00:23:57.298 { 00:23:57.298 "dma_device_id": "system", 00:23:57.298 "dma_device_type": 1 00:23:57.298 }, 00:23:57.298 { 00:23:57.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.298 "dma_device_type": 2 00:23:57.298 } 00:23:57.298 ], 00:23:57.298 "driver_specific": {} 00:23:57.298 }' 00:23:57.298 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:57.298 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:57.298 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:57.298 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.298 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.298 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == 
null ]] 00:23:57.298 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.557 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.557 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:57.557 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:57.557 18:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:57.557 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:57.557 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:57.557 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:57.557 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:57.816 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:57.816 "name": "BaseBdev3", 00:23:57.816 "aliases": [ 00:23:57.816 "f43113fe-b11a-4c2c-90d7-e2943ca6b3bf" 00:23:57.816 ], 00:23:57.816 "product_name": "Malloc disk", 00:23:57.816 "block_size": 512, 00:23:57.816 "num_blocks": 65536, 00:23:57.816 "uuid": "f43113fe-b11a-4c2c-90d7-e2943ca6b3bf", 00:23:57.816 "assigned_rate_limits": { 00:23:57.816 "rw_ios_per_sec": 0, 00:23:57.816 "rw_mbytes_per_sec": 0, 00:23:57.816 "r_mbytes_per_sec": 0, 00:23:57.816 "w_mbytes_per_sec": 0 00:23:57.816 }, 00:23:57.816 "claimed": true, 00:23:57.816 "claim_type": "exclusive_write", 00:23:57.816 "zoned": false, 00:23:57.816 "supported_io_types": { 00:23:57.816 "read": true, 00:23:57.816 "write": true, 00:23:57.816 "unmap": true, 00:23:57.816 "flush": true, 00:23:57.816 "reset": true, 00:23:57.816 "nvme_admin": false, 00:23:57.816 "nvme_io": false, 00:23:57.816 "nvme_io_md": false, 00:23:57.816 "write_zeroes": true, 00:23:57.816 "zcopy": true, 00:23:57.816 "get_zone_info": false, 00:23:57.816 "zone_management": false, 00:23:57.816 "zone_append": false, 00:23:57.816 "compare": false, 00:23:57.816 "compare_and_write": false, 00:23:57.816 "abort": true, 00:23:57.816 "seek_hole": false, 00:23:57.816 "seek_data": false, 00:23:57.816 "copy": true, 00:23:57.816 "nvme_iov_md": false 00:23:57.816 }, 00:23:57.816 "memory_domains": [ 00:23:57.816 { 00:23:57.816 "dma_device_id": "system", 00:23:57.816 "dma_device_type": 1 00:23:57.816 }, 00:23:57.816 { 00:23:57.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.816 "dma_device_type": 2 00:23:57.816 } 00:23:57.816 ], 00:23:57.816 "driver_specific": {} 00:23:57.816 }' 00:23:57.816 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:57.816 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:57.816 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:57.816 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:58.075 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:58.075 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:58.075 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:58.075 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # 
jq .md_interleave 00:23:58.075 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:58.075 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:58.075 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:58.075 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:58.075 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:58.075 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:58.075 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:58.644 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:58.644 "name": "BaseBdev4", 00:23:58.644 "aliases": [ 00:23:58.644 "e78993c9-1202-4b1b-a484-1e1a5916d9ef" 00:23:58.644 ], 00:23:58.644 "product_name": "Malloc disk", 00:23:58.644 "block_size": 512, 00:23:58.644 "num_blocks": 65536, 00:23:58.644 "uuid": "e78993c9-1202-4b1b-a484-1e1a5916d9ef", 00:23:58.644 "assigned_rate_limits": { 00:23:58.644 "rw_ios_per_sec": 0, 00:23:58.644 "rw_mbytes_per_sec": 0, 00:23:58.644 "r_mbytes_per_sec": 0, 00:23:58.644 "w_mbytes_per_sec": 0 00:23:58.644 }, 00:23:58.644 "claimed": true, 00:23:58.644 "claim_type": "exclusive_write", 00:23:58.644 "zoned": false, 00:23:58.644 "supported_io_types": { 00:23:58.644 "read": true, 00:23:58.644 "write": true, 00:23:58.644 "unmap": true, 00:23:58.644 "flush": true, 00:23:58.644 "reset": true, 00:23:58.644 "nvme_admin": false, 00:23:58.644 "nvme_io": false, 00:23:58.644 "nvme_io_md": false, 00:23:58.644 "write_zeroes": true, 00:23:58.644 "zcopy": true, 00:23:58.644 "get_zone_info": false, 00:23:58.644 "zone_management": false, 00:23:58.644 "zone_append": false, 00:23:58.644 "compare": false, 00:23:58.644 "compare_and_write": false, 00:23:58.644 "abort": true, 00:23:58.644 "seek_hole": false, 00:23:58.644 "seek_data": false, 00:23:58.644 "copy": true, 00:23:58.644 "nvme_iov_md": false 00:23:58.644 }, 00:23:58.644 "memory_domains": [ 00:23:58.644 { 00:23:58.644 "dma_device_id": "system", 00:23:58.644 "dma_device_type": 1 00:23:58.644 }, 00:23:58.644 { 00:23:58.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:58.644 "dma_device_type": 2 00:23:58.644 } 00:23:58.644 ], 00:23:58.644 "driver_specific": {} 00:23:58.644 }' 00:23:58.644 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:58.644 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:58.644 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:58.644 18:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:58.644 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:58.644 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:58.644 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:58.644 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:58.644 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:58.644 18:50:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:58.644 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:58.903 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:58.903 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:59.162 [2024-07-25 18:50:59.495700] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:59.162 [2024-07-25 18:50:59.495923] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:59.162 [2024-07-25 18:50:59.496110] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:59.162 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:59.162 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:23:59.162 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:59.162 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:59.162 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:23:59.162 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:23:59.162 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:59.162 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:23:59.162 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:59.162 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:59.162 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:59.162 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:59.162 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:59.162 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:59.162 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:59.162 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.162 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:59.421 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:59.421 "name": "Existed_Raid", 00:23:59.421 "uuid": "06fae219-5814-46de-99f3-d502a7087ce4", 00:23:59.421 "strip_size_kb": 64, 00:23:59.421 "state": "offline", 00:23:59.421 "raid_level": "concat", 00:23:59.421 "superblock": false, 00:23:59.421 "num_base_bdevs": 4, 00:23:59.421 "num_base_bdevs_discovered": 3, 00:23:59.421 "num_base_bdevs_operational": 3, 00:23:59.421 "base_bdevs_list": [ 00:23:59.421 { 00:23:59.421 "name": null, 00:23:59.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.421 "is_configured": false, 00:23:59.421 "data_offset": 0, 00:23:59.421 "data_size": 65536 00:23:59.421 }, 00:23:59.421 { 00:23:59.421 "name": "BaseBdev2", 
00:23:59.421 "uuid": "ab767d5f-236f-4522-8ae6-f45e118dd05e", 00:23:59.421 "is_configured": true, 00:23:59.421 "data_offset": 0, 00:23:59.421 "data_size": 65536 00:23:59.421 }, 00:23:59.421 { 00:23:59.421 "name": "BaseBdev3", 00:23:59.421 "uuid": "f43113fe-b11a-4c2c-90d7-e2943ca6b3bf", 00:23:59.421 "is_configured": true, 00:23:59.421 "data_offset": 0, 00:23:59.421 "data_size": 65536 00:23:59.421 }, 00:23:59.421 { 00:23:59.421 "name": "BaseBdev4", 00:23:59.421 "uuid": "e78993c9-1202-4b1b-a484-1e1a5916d9ef", 00:23:59.421 "is_configured": true, 00:23:59.421 "data_offset": 0, 00:23:59.421 "data_size": 65536 00:23:59.421 } 00:23:59.421 ] 00:23:59.421 }' 00:23:59.421 18:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:59.421 18:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.990 18:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:59.990 18:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:59.990 18:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:59.990 18:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.990 18:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:59.990 18:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:59.990 18:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:00.247 [2024-07-25 18:51:00.788777] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:00.505 18:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:00.505 18:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:00.505 18:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:00.505 18:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.763 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:00.763 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:00.763 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:00.763 [2024-07-25 18:51:01.315640] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:01.021 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:01.021 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:01.021 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.021 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:01.291 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:01.291 18:51:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:01.291 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:01.291 [2024-07-25 18:51:01.834833] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:01.291 [2024-07-25 18:51:01.835073] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:24:01.586 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:01.586 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:01.586 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.586 18:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:24:01.586 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:24:01.586 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:24:01.586 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:24:01.586 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:24:01.586 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:01.586 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:01.844 BaseBdev2 00:24:01.844 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:24:01.844 18:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:24:01.844 18:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:01.844 18:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:24:01.844 18:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:01.844 18:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:01.844 18:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:02.103 18:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:02.362 [ 00:24:02.362 { 00:24:02.362 "name": "BaseBdev2", 00:24:02.362 "aliases": [ 00:24:02.362 "d3d4a164-e1c3-4af3-b2bf-33736d962229" 00:24:02.362 ], 00:24:02.362 "product_name": "Malloc disk", 00:24:02.362 "block_size": 512, 00:24:02.362 "num_blocks": 65536, 00:24:02.362 "uuid": "d3d4a164-e1c3-4af3-b2bf-33736d962229", 00:24:02.362 "assigned_rate_limits": { 00:24:02.362 "rw_ios_per_sec": 0, 00:24:02.362 "rw_mbytes_per_sec": 0, 00:24:02.362 "r_mbytes_per_sec": 0, 00:24:02.362 "w_mbytes_per_sec": 0 00:24:02.362 }, 00:24:02.362 "claimed": false, 00:24:02.362 "zoned": false, 00:24:02.362 "supported_io_types": { 00:24:02.362 "read": true, 00:24:02.362 "write": true, 00:24:02.362 "unmap": 
true, 00:24:02.362 "flush": true, 00:24:02.362 "reset": true, 00:24:02.362 "nvme_admin": false, 00:24:02.362 "nvme_io": false, 00:24:02.362 "nvme_io_md": false, 00:24:02.362 "write_zeroes": true, 00:24:02.362 "zcopy": true, 00:24:02.362 "get_zone_info": false, 00:24:02.362 "zone_management": false, 00:24:02.362 "zone_append": false, 00:24:02.362 "compare": false, 00:24:02.362 "compare_and_write": false, 00:24:02.362 "abort": true, 00:24:02.362 "seek_hole": false, 00:24:02.362 "seek_data": false, 00:24:02.362 "copy": true, 00:24:02.362 "nvme_iov_md": false 00:24:02.362 }, 00:24:02.362 "memory_domains": [ 00:24:02.362 { 00:24:02.362 "dma_device_id": "system", 00:24:02.362 "dma_device_type": 1 00:24:02.362 }, 00:24:02.362 { 00:24:02.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:02.362 "dma_device_type": 2 00:24:02.362 } 00:24:02.362 ], 00:24:02.362 "driver_specific": {} 00:24:02.362 } 00:24:02.362 ] 00:24:02.362 18:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:24:02.362 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:02.362 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:02.362 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:02.621 BaseBdev3 00:24:02.621 18:51:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:24:02.621 18:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:24:02.621 18:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:02.621 18:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:24:02.621 18:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:02.621 18:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:02.621 18:51:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:02.621 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:02.881 [ 00:24:02.881 { 00:24:02.881 "name": "BaseBdev3", 00:24:02.881 "aliases": [ 00:24:02.881 "cac3bd14-4648-46f4-a4e7-06562ea3e0cb" 00:24:02.881 ], 00:24:02.881 "product_name": "Malloc disk", 00:24:02.881 "block_size": 512, 00:24:02.881 "num_blocks": 65536, 00:24:02.881 "uuid": "cac3bd14-4648-46f4-a4e7-06562ea3e0cb", 00:24:02.881 "assigned_rate_limits": { 00:24:02.881 "rw_ios_per_sec": 0, 00:24:02.881 "rw_mbytes_per_sec": 0, 00:24:02.881 "r_mbytes_per_sec": 0, 00:24:02.881 "w_mbytes_per_sec": 0 00:24:02.881 }, 00:24:02.881 "claimed": false, 00:24:02.881 "zoned": false, 00:24:02.881 "supported_io_types": { 00:24:02.881 "read": true, 00:24:02.881 "write": true, 00:24:02.881 "unmap": true, 00:24:02.881 "flush": true, 00:24:02.881 "reset": true, 00:24:02.881 "nvme_admin": false, 00:24:02.881 "nvme_io": false, 00:24:02.881 "nvme_io_md": false, 00:24:02.881 "write_zeroes": true, 00:24:02.881 "zcopy": true, 00:24:02.881 "get_zone_info": false, 00:24:02.881 "zone_management": false, 00:24:02.881 "zone_append": false, 00:24:02.881 
"compare": false, 00:24:02.881 "compare_and_write": false, 00:24:02.881 "abort": true, 00:24:02.881 "seek_hole": false, 00:24:02.881 "seek_data": false, 00:24:02.881 "copy": true, 00:24:02.881 "nvme_iov_md": false 00:24:02.881 }, 00:24:02.881 "memory_domains": [ 00:24:02.881 { 00:24:02.881 "dma_device_id": "system", 00:24:02.881 "dma_device_type": 1 00:24:02.881 }, 00:24:02.881 { 00:24:02.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:02.881 "dma_device_type": 2 00:24:02.881 } 00:24:02.881 ], 00:24:02.881 "driver_specific": {} 00:24:02.881 } 00:24:02.881 ] 00:24:02.881 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:24:02.881 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:02.881 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:02.881 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:03.139 BaseBdev4 00:24:03.139 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:24:03.139 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:24:03.139 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:03.139 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:24:03.139 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:03.139 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:03.139 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:03.139 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:03.397 [ 00:24:03.397 { 00:24:03.397 "name": "BaseBdev4", 00:24:03.397 "aliases": [ 00:24:03.397 "7922f9f3-91a1-47e6-acb7-cc8d9ca43f43" 00:24:03.397 ], 00:24:03.397 "product_name": "Malloc disk", 00:24:03.397 "block_size": 512, 00:24:03.397 "num_blocks": 65536, 00:24:03.397 "uuid": "7922f9f3-91a1-47e6-acb7-cc8d9ca43f43", 00:24:03.397 "assigned_rate_limits": { 00:24:03.397 "rw_ios_per_sec": 0, 00:24:03.397 "rw_mbytes_per_sec": 0, 00:24:03.397 "r_mbytes_per_sec": 0, 00:24:03.397 "w_mbytes_per_sec": 0 00:24:03.397 }, 00:24:03.397 "claimed": false, 00:24:03.397 "zoned": false, 00:24:03.397 "supported_io_types": { 00:24:03.397 "read": true, 00:24:03.397 "write": true, 00:24:03.397 "unmap": true, 00:24:03.397 "flush": true, 00:24:03.397 "reset": true, 00:24:03.397 "nvme_admin": false, 00:24:03.397 "nvme_io": false, 00:24:03.397 "nvme_io_md": false, 00:24:03.397 "write_zeroes": true, 00:24:03.397 "zcopy": true, 00:24:03.397 "get_zone_info": false, 00:24:03.397 "zone_management": false, 00:24:03.397 "zone_append": false, 00:24:03.397 "compare": false, 00:24:03.397 "compare_and_write": false, 00:24:03.397 "abort": true, 00:24:03.397 "seek_hole": false, 00:24:03.397 "seek_data": false, 00:24:03.397 "copy": true, 00:24:03.397 "nvme_iov_md": false 00:24:03.397 }, 00:24:03.397 "memory_domains": [ 00:24:03.397 { 00:24:03.397 "dma_device_id": "system", 00:24:03.397 
"dma_device_type": 1 00:24:03.397 }, 00:24:03.397 { 00:24:03.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:03.397 "dma_device_type": 2 00:24:03.397 } 00:24:03.397 ], 00:24:03.397 "driver_specific": {} 00:24:03.397 } 00:24:03.397 ] 00:24:03.397 18:51:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:24:03.397 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:03.397 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:03.397 18:51:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:03.655 [2024-07-25 18:51:04.041358] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:03.655 [2024-07-25 18:51:04.041641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:03.655 [2024-07-25 18:51:04.041802] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:03.655 [2024-07-25 18:51:04.044223] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:03.655 [2024-07-25 18:51:04.044407] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:03.655 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:03.655 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:03.655 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:03.655 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:03.655 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:03.655 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:03.655 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:03.655 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:03.655 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:03.655 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:03.655 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.655 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:03.913 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:03.913 "name": "Existed_Raid", 00:24:03.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.913 "strip_size_kb": 64, 00:24:03.913 "state": "configuring", 00:24:03.913 "raid_level": "concat", 00:24:03.913 "superblock": false, 00:24:03.913 "num_base_bdevs": 4, 00:24:03.913 "num_base_bdevs_discovered": 3, 00:24:03.913 "num_base_bdevs_operational": 4, 00:24:03.913 "base_bdevs_list": [ 00:24:03.913 { 00:24:03.913 "name": "BaseBdev1", 00:24:03.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.913 
"is_configured": false, 00:24:03.913 "data_offset": 0, 00:24:03.913 "data_size": 0 00:24:03.913 }, 00:24:03.913 { 00:24:03.913 "name": "BaseBdev2", 00:24:03.913 "uuid": "d3d4a164-e1c3-4af3-b2bf-33736d962229", 00:24:03.913 "is_configured": true, 00:24:03.913 "data_offset": 0, 00:24:03.913 "data_size": 65536 00:24:03.913 }, 00:24:03.913 { 00:24:03.913 "name": "BaseBdev3", 00:24:03.913 "uuid": "cac3bd14-4648-46f4-a4e7-06562ea3e0cb", 00:24:03.913 "is_configured": true, 00:24:03.913 "data_offset": 0, 00:24:03.913 "data_size": 65536 00:24:03.913 }, 00:24:03.913 { 00:24:03.913 "name": "BaseBdev4", 00:24:03.913 "uuid": "7922f9f3-91a1-47e6-acb7-cc8d9ca43f43", 00:24:03.913 "is_configured": true, 00:24:03.913 "data_offset": 0, 00:24:03.913 "data_size": 65536 00:24:03.913 } 00:24:03.913 ] 00:24:03.913 }' 00:24:03.913 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:03.913 18:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.480 18:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:04.480 [2024-07-25 18:51:05.013543] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:04.480 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:04.480 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:04.480 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:04.480 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:04.480 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:04.480 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:04.480 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:04.480 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:04.480 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:04.480 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:04.480 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.480 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:05.048 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:05.048 "name": "Existed_Raid", 00:24:05.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.048 "strip_size_kb": 64, 00:24:05.048 "state": "configuring", 00:24:05.048 "raid_level": "concat", 00:24:05.048 "superblock": false, 00:24:05.048 "num_base_bdevs": 4, 00:24:05.048 "num_base_bdevs_discovered": 2, 00:24:05.048 "num_base_bdevs_operational": 4, 00:24:05.048 "base_bdevs_list": [ 00:24:05.048 { 00:24:05.048 "name": "BaseBdev1", 00:24:05.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.048 "is_configured": false, 00:24:05.048 "data_offset": 0, 00:24:05.048 "data_size": 0 00:24:05.048 }, 00:24:05.048 { 00:24:05.048 "name": null, 
00:24:05.048 "uuid": "d3d4a164-e1c3-4af3-b2bf-33736d962229", 00:24:05.048 "is_configured": false, 00:24:05.048 "data_offset": 0, 00:24:05.048 "data_size": 65536 00:24:05.048 }, 00:24:05.048 { 00:24:05.048 "name": "BaseBdev3", 00:24:05.048 "uuid": "cac3bd14-4648-46f4-a4e7-06562ea3e0cb", 00:24:05.048 "is_configured": true, 00:24:05.048 "data_offset": 0, 00:24:05.048 "data_size": 65536 00:24:05.048 }, 00:24:05.048 { 00:24:05.048 "name": "BaseBdev4", 00:24:05.048 "uuid": "7922f9f3-91a1-47e6-acb7-cc8d9ca43f43", 00:24:05.048 "is_configured": true, 00:24:05.048 "data_offset": 0, 00:24:05.048 "data_size": 65536 00:24:05.048 } 00:24:05.048 ] 00:24:05.048 }' 00:24:05.048 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:05.048 18:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.615 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:05.615 18:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.615 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:24:05.615 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:05.874 [2024-07-25 18:51:06.393517] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:05.874 BaseBdev1 00:24:05.874 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:24:05.874 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:24:05.874 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:05.874 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:24:05.874 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:05.874 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:05.874 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:06.133 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:06.392 [ 00:24:06.392 { 00:24:06.392 "name": "BaseBdev1", 00:24:06.392 "aliases": [ 00:24:06.392 "3fbb8045-0ecb-4b27-b8d3-5d69a58be894" 00:24:06.392 ], 00:24:06.392 "product_name": "Malloc disk", 00:24:06.392 "block_size": 512, 00:24:06.392 "num_blocks": 65536, 00:24:06.392 "uuid": "3fbb8045-0ecb-4b27-b8d3-5d69a58be894", 00:24:06.392 "assigned_rate_limits": { 00:24:06.392 "rw_ios_per_sec": 0, 00:24:06.392 "rw_mbytes_per_sec": 0, 00:24:06.392 "r_mbytes_per_sec": 0, 00:24:06.392 "w_mbytes_per_sec": 0 00:24:06.392 }, 00:24:06.392 "claimed": true, 00:24:06.392 "claim_type": "exclusive_write", 00:24:06.392 "zoned": false, 00:24:06.392 "supported_io_types": { 00:24:06.392 "read": true, 00:24:06.392 "write": true, 00:24:06.392 "unmap": true, 00:24:06.392 "flush": true, 00:24:06.392 "reset": true, 00:24:06.392 "nvme_admin": false, 00:24:06.392 "nvme_io": 
false, 00:24:06.392 "nvme_io_md": false, 00:24:06.392 "write_zeroes": true, 00:24:06.392 "zcopy": true, 00:24:06.392 "get_zone_info": false, 00:24:06.392 "zone_management": false, 00:24:06.392 "zone_append": false, 00:24:06.392 "compare": false, 00:24:06.392 "compare_and_write": false, 00:24:06.392 "abort": true, 00:24:06.392 "seek_hole": false, 00:24:06.392 "seek_data": false, 00:24:06.392 "copy": true, 00:24:06.392 "nvme_iov_md": false 00:24:06.392 }, 00:24:06.392 "memory_domains": [ 00:24:06.392 { 00:24:06.392 "dma_device_id": "system", 00:24:06.392 "dma_device_type": 1 00:24:06.392 }, 00:24:06.392 { 00:24:06.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.392 "dma_device_type": 2 00:24:06.392 } 00:24:06.392 ], 00:24:06.392 "driver_specific": {} 00:24:06.392 } 00:24:06.392 ] 00:24:06.392 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:24:06.392 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:06.392 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:06.392 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:06.392 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:06.392 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:06.392 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:06.392 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:06.392 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:06.392 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:06.392 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:06.392 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.392 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:06.652 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:06.652 "name": "Existed_Raid", 00:24:06.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.652 "strip_size_kb": 64, 00:24:06.652 "state": "configuring", 00:24:06.652 "raid_level": "concat", 00:24:06.652 "superblock": false, 00:24:06.652 "num_base_bdevs": 4, 00:24:06.652 "num_base_bdevs_discovered": 3, 00:24:06.652 "num_base_bdevs_operational": 4, 00:24:06.652 "base_bdevs_list": [ 00:24:06.652 { 00:24:06.652 "name": "BaseBdev1", 00:24:06.652 "uuid": "3fbb8045-0ecb-4b27-b8d3-5d69a58be894", 00:24:06.652 "is_configured": true, 00:24:06.652 "data_offset": 0, 00:24:06.652 "data_size": 65536 00:24:06.652 }, 00:24:06.652 { 00:24:06.652 "name": null, 00:24:06.652 "uuid": "d3d4a164-e1c3-4af3-b2bf-33736d962229", 00:24:06.652 "is_configured": false, 00:24:06.652 "data_offset": 0, 00:24:06.652 "data_size": 65536 00:24:06.652 }, 00:24:06.652 { 00:24:06.652 "name": "BaseBdev3", 00:24:06.652 "uuid": "cac3bd14-4648-46f4-a4e7-06562ea3e0cb", 00:24:06.652 "is_configured": true, 00:24:06.652 "data_offset": 0, 00:24:06.652 "data_size": 65536 00:24:06.652 }, 
00:24:06.652 { 00:24:06.652 "name": "BaseBdev4", 00:24:06.652 "uuid": "7922f9f3-91a1-47e6-acb7-cc8d9ca43f43", 00:24:06.652 "is_configured": true, 00:24:06.652 "data_offset": 0, 00:24:06.652 "data_size": 65536 00:24:06.652 } 00:24:06.652 ] 00:24:06.652 }' 00:24:06.652 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:06.652 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.220 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.220 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:07.789 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:24:07.789 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:07.789 [2024-07-25 18:51:08.310643] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:07.789 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:07.789 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:07.789 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:07.789 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:07.789 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:07.789 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:07.789 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:07.789 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:07.789 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:07.789 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:07.789 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.789 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:08.048 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:08.048 "name": "Existed_Raid", 00:24:08.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.048 "strip_size_kb": 64, 00:24:08.048 "state": "configuring", 00:24:08.048 "raid_level": "concat", 00:24:08.048 "superblock": false, 00:24:08.048 "num_base_bdevs": 4, 00:24:08.048 "num_base_bdevs_discovered": 2, 00:24:08.048 "num_base_bdevs_operational": 4, 00:24:08.048 "base_bdevs_list": [ 00:24:08.048 { 00:24:08.048 "name": "BaseBdev1", 00:24:08.048 "uuid": "3fbb8045-0ecb-4b27-b8d3-5d69a58be894", 00:24:08.048 "is_configured": true, 00:24:08.048 "data_offset": 0, 00:24:08.048 "data_size": 65536 00:24:08.048 }, 00:24:08.048 { 00:24:08.048 "name": null, 00:24:08.048 "uuid": "d3d4a164-e1c3-4af3-b2bf-33736d962229", 00:24:08.048 "is_configured": false, 00:24:08.048 "data_offset": 
0, 00:24:08.048 "data_size": 65536 00:24:08.048 }, 00:24:08.048 { 00:24:08.048 "name": null, 00:24:08.048 "uuid": "cac3bd14-4648-46f4-a4e7-06562ea3e0cb", 00:24:08.048 "is_configured": false, 00:24:08.048 "data_offset": 0, 00:24:08.048 "data_size": 65536 00:24:08.048 }, 00:24:08.048 { 00:24:08.048 "name": "BaseBdev4", 00:24:08.048 "uuid": "7922f9f3-91a1-47e6-acb7-cc8d9ca43f43", 00:24:08.048 "is_configured": true, 00:24:08.048 "data_offset": 0, 00:24:08.048 "data_size": 65536 00:24:08.048 } 00:24:08.048 ] 00:24:08.048 }' 00:24:08.048 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:08.048 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.985 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.985 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:09.244 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:24:09.244 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:09.504 [2024-07-25 18:51:09.822470] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:09.504 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:09.504 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:09.504 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:09.504 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:09.504 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:09.504 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:09.504 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:09.504 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:09.504 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:09.504 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:09.504 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.504 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:09.763 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:09.763 "name": "Existed_Raid", 00:24:09.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.763 "strip_size_kb": 64, 00:24:09.763 "state": "configuring", 00:24:09.763 "raid_level": "concat", 00:24:09.763 "superblock": false, 00:24:09.763 "num_base_bdevs": 4, 00:24:09.763 "num_base_bdevs_discovered": 3, 00:24:09.763 "num_base_bdevs_operational": 4, 00:24:09.763 "base_bdevs_list": [ 00:24:09.763 { 00:24:09.763 "name": "BaseBdev1", 00:24:09.763 "uuid": 
"3fbb8045-0ecb-4b27-b8d3-5d69a58be894", 00:24:09.763 "is_configured": true, 00:24:09.763 "data_offset": 0, 00:24:09.763 "data_size": 65536 00:24:09.763 }, 00:24:09.763 { 00:24:09.763 "name": null, 00:24:09.763 "uuid": "d3d4a164-e1c3-4af3-b2bf-33736d962229", 00:24:09.763 "is_configured": false, 00:24:09.763 "data_offset": 0, 00:24:09.763 "data_size": 65536 00:24:09.763 }, 00:24:09.763 { 00:24:09.763 "name": "BaseBdev3", 00:24:09.763 "uuid": "cac3bd14-4648-46f4-a4e7-06562ea3e0cb", 00:24:09.763 "is_configured": true, 00:24:09.763 "data_offset": 0, 00:24:09.763 "data_size": 65536 00:24:09.763 }, 00:24:09.763 { 00:24:09.763 "name": "BaseBdev4", 00:24:09.763 "uuid": "7922f9f3-91a1-47e6-acb7-cc8d9ca43f43", 00:24:09.763 "is_configured": true, 00:24:09.763 "data_offset": 0, 00:24:09.763 "data_size": 65536 00:24:09.763 } 00:24:09.763 ] 00:24:09.763 }' 00:24:09.763 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:09.763 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.331 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.331 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:10.590 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:24:10.590 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:10.590 [2024-07-25 18:51:11.126647] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:10.850 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:10.850 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:10.850 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:10.850 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:10.850 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:10.850 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:10.850 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:10.850 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:10.850 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:10.850 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:10.850 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:10.850 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.109 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:11.109 "name": "Existed_Raid", 00:24:11.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:11.109 "strip_size_kb": 64, 00:24:11.109 "state": "configuring", 00:24:11.109 "raid_level": 
"concat", 00:24:11.109 "superblock": false, 00:24:11.109 "num_base_bdevs": 4, 00:24:11.109 "num_base_bdevs_discovered": 2, 00:24:11.109 "num_base_bdevs_operational": 4, 00:24:11.109 "base_bdevs_list": [ 00:24:11.109 { 00:24:11.109 "name": null, 00:24:11.109 "uuid": "3fbb8045-0ecb-4b27-b8d3-5d69a58be894", 00:24:11.109 "is_configured": false, 00:24:11.109 "data_offset": 0, 00:24:11.109 "data_size": 65536 00:24:11.109 }, 00:24:11.109 { 00:24:11.109 "name": null, 00:24:11.109 "uuid": "d3d4a164-e1c3-4af3-b2bf-33736d962229", 00:24:11.109 "is_configured": false, 00:24:11.109 "data_offset": 0, 00:24:11.109 "data_size": 65536 00:24:11.109 }, 00:24:11.109 { 00:24:11.109 "name": "BaseBdev3", 00:24:11.109 "uuid": "cac3bd14-4648-46f4-a4e7-06562ea3e0cb", 00:24:11.109 "is_configured": true, 00:24:11.109 "data_offset": 0, 00:24:11.109 "data_size": 65536 00:24:11.109 }, 00:24:11.109 { 00:24:11.109 "name": "BaseBdev4", 00:24:11.109 "uuid": "7922f9f3-91a1-47e6-acb7-cc8d9ca43f43", 00:24:11.109 "is_configured": true, 00:24:11.109 "data_offset": 0, 00:24:11.109 "data_size": 65536 00:24:11.109 } 00:24:11.109 ] 00:24:11.109 }' 00:24:11.109 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:11.109 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.676 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.676 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:11.935 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:24:11.935 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:12.194 [2024-07-25 18:51:12.510540] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:12.194 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:12.194 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:12.194 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:12.194 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:12.194 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:12.194 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:12.194 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:12.194 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:12.194 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:12.194 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:12.194 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:12.194 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:24:12.453 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:12.453 "name": "Existed_Raid", 00:24:12.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.453 "strip_size_kb": 64, 00:24:12.453 "state": "configuring", 00:24:12.453 "raid_level": "concat", 00:24:12.453 "superblock": false, 00:24:12.453 "num_base_bdevs": 4, 00:24:12.453 "num_base_bdevs_discovered": 3, 00:24:12.453 "num_base_bdevs_operational": 4, 00:24:12.453 "base_bdevs_list": [ 00:24:12.453 { 00:24:12.453 "name": null, 00:24:12.453 "uuid": "3fbb8045-0ecb-4b27-b8d3-5d69a58be894", 00:24:12.453 "is_configured": false, 00:24:12.453 "data_offset": 0, 00:24:12.453 "data_size": 65536 00:24:12.453 }, 00:24:12.453 { 00:24:12.453 "name": "BaseBdev2", 00:24:12.453 "uuid": "d3d4a164-e1c3-4af3-b2bf-33736d962229", 00:24:12.453 "is_configured": true, 00:24:12.453 "data_offset": 0, 00:24:12.453 "data_size": 65536 00:24:12.453 }, 00:24:12.453 { 00:24:12.453 "name": "BaseBdev3", 00:24:12.453 "uuid": "cac3bd14-4648-46f4-a4e7-06562ea3e0cb", 00:24:12.453 "is_configured": true, 00:24:12.453 "data_offset": 0, 00:24:12.453 "data_size": 65536 00:24:12.453 }, 00:24:12.453 { 00:24:12.453 "name": "BaseBdev4", 00:24:12.453 "uuid": "7922f9f3-91a1-47e6-acb7-cc8d9ca43f43", 00:24:12.453 "is_configured": true, 00:24:12.453 "data_offset": 0, 00:24:12.453 "data_size": 65536 00:24:12.453 } 00:24:12.453 ] 00:24:12.453 }' 00:24:12.453 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:12.453 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.020 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.020 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:13.279 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:24:13.279 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:13.279 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.538 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 3fbb8045-0ecb-4b27-b8d3-5d69a58be894 00:24:13.798 [2024-07-25 18:51:14.334148] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:13.798 [2024-07-25 18:51:14.334506] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:24:13.798 [2024-07-25 18:51:14.334555] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:24:13.798 [2024-07-25 18:51:14.334849] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:13.798 [2024-07-25 18:51:14.335412] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:24:13.798 [2024-07-25 18:51:14.335546] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:24:13.798 [2024-07-25 18:51:14.335914] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:13.798 NewBaseBdev 00:24:13.798 18:51:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:24:13.798 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:24:13.798 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:13.798 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:24:13.798 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:13.798 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:13.798 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:14.057 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:14.316 [ 00:24:14.316 { 00:24:14.316 "name": "NewBaseBdev", 00:24:14.316 "aliases": [ 00:24:14.317 "3fbb8045-0ecb-4b27-b8d3-5d69a58be894" 00:24:14.317 ], 00:24:14.317 "product_name": "Malloc disk", 00:24:14.317 "block_size": 512, 00:24:14.317 "num_blocks": 65536, 00:24:14.317 "uuid": "3fbb8045-0ecb-4b27-b8d3-5d69a58be894", 00:24:14.317 "assigned_rate_limits": { 00:24:14.317 "rw_ios_per_sec": 0, 00:24:14.317 "rw_mbytes_per_sec": 0, 00:24:14.317 "r_mbytes_per_sec": 0, 00:24:14.317 "w_mbytes_per_sec": 0 00:24:14.317 }, 00:24:14.317 "claimed": true, 00:24:14.317 "claim_type": "exclusive_write", 00:24:14.317 "zoned": false, 00:24:14.317 "supported_io_types": { 00:24:14.317 "read": true, 00:24:14.317 "write": true, 00:24:14.317 "unmap": true, 00:24:14.317 "flush": true, 00:24:14.317 "reset": true, 00:24:14.317 "nvme_admin": false, 00:24:14.317 "nvme_io": false, 00:24:14.317 "nvme_io_md": false, 00:24:14.317 "write_zeroes": true, 00:24:14.317 "zcopy": true, 00:24:14.317 "get_zone_info": false, 00:24:14.317 "zone_management": false, 00:24:14.317 "zone_append": false, 00:24:14.317 "compare": false, 00:24:14.317 "compare_and_write": false, 00:24:14.317 "abort": true, 00:24:14.317 "seek_hole": false, 00:24:14.317 "seek_data": false, 00:24:14.317 "copy": true, 00:24:14.317 "nvme_iov_md": false 00:24:14.317 }, 00:24:14.317 "memory_domains": [ 00:24:14.317 { 00:24:14.317 "dma_device_id": "system", 00:24:14.317 "dma_device_type": 1 00:24:14.317 }, 00:24:14.317 { 00:24:14.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:14.317 "dma_device_type": 2 00:24:14.317 } 00:24:14.317 ], 00:24:14.317 "driver_specific": {} 00:24:14.317 } 00:24:14.317 ] 00:24:14.317 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:24:14.317 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:24:14.317 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:14.317 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:14.317 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:14.317 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:14.317 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:14.317 18:51:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:14.317 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:14.317 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:14.317 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:14.317 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.317 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:14.576 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:14.576 "name": "Existed_Raid", 00:24:14.576 "uuid": "183172a3-7d39-4190-8cc3-5689e4712a32", 00:24:14.576 "strip_size_kb": 64, 00:24:14.576 "state": "online", 00:24:14.576 "raid_level": "concat", 00:24:14.576 "superblock": false, 00:24:14.576 "num_base_bdevs": 4, 00:24:14.576 "num_base_bdevs_discovered": 4, 00:24:14.576 "num_base_bdevs_operational": 4, 00:24:14.576 "base_bdevs_list": [ 00:24:14.576 { 00:24:14.576 "name": "NewBaseBdev", 00:24:14.576 "uuid": "3fbb8045-0ecb-4b27-b8d3-5d69a58be894", 00:24:14.576 "is_configured": true, 00:24:14.576 "data_offset": 0, 00:24:14.576 "data_size": 65536 00:24:14.576 }, 00:24:14.576 { 00:24:14.576 "name": "BaseBdev2", 00:24:14.576 "uuid": "d3d4a164-e1c3-4af3-b2bf-33736d962229", 00:24:14.576 "is_configured": true, 00:24:14.576 "data_offset": 0, 00:24:14.576 "data_size": 65536 00:24:14.576 }, 00:24:14.576 { 00:24:14.576 "name": "BaseBdev3", 00:24:14.576 "uuid": "cac3bd14-4648-46f4-a4e7-06562ea3e0cb", 00:24:14.576 "is_configured": true, 00:24:14.576 "data_offset": 0, 00:24:14.576 "data_size": 65536 00:24:14.576 }, 00:24:14.576 { 00:24:14.576 "name": "BaseBdev4", 00:24:14.576 "uuid": "7922f9f3-91a1-47e6-acb7-cc8d9ca43f43", 00:24:14.576 "is_configured": true, 00:24:14.576 "data_offset": 0, 00:24:14.576 "data_size": 65536 00:24:14.576 } 00:24:14.576 ] 00:24:14.576 }' 00:24:14.576 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:14.576 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.145 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:24:15.145 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:15.145 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:15.145 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:15.145 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:15.145 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:15.404 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:15.404 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:15.663 [2024-07-25 18:51:15.986921] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:15.663 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:15.663 
"name": "Existed_Raid", 00:24:15.663 "aliases": [ 00:24:15.663 "183172a3-7d39-4190-8cc3-5689e4712a32" 00:24:15.663 ], 00:24:15.663 "product_name": "Raid Volume", 00:24:15.663 "block_size": 512, 00:24:15.663 "num_blocks": 262144, 00:24:15.663 "uuid": "183172a3-7d39-4190-8cc3-5689e4712a32", 00:24:15.663 "assigned_rate_limits": { 00:24:15.663 "rw_ios_per_sec": 0, 00:24:15.663 "rw_mbytes_per_sec": 0, 00:24:15.663 "r_mbytes_per_sec": 0, 00:24:15.663 "w_mbytes_per_sec": 0 00:24:15.663 }, 00:24:15.663 "claimed": false, 00:24:15.663 "zoned": false, 00:24:15.663 "supported_io_types": { 00:24:15.663 "read": true, 00:24:15.663 "write": true, 00:24:15.663 "unmap": true, 00:24:15.663 "flush": true, 00:24:15.663 "reset": true, 00:24:15.663 "nvme_admin": false, 00:24:15.663 "nvme_io": false, 00:24:15.663 "nvme_io_md": false, 00:24:15.663 "write_zeroes": true, 00:24:15.663 "zcopy": false, 00:24:15.663 "get_zone_info": false, 00:24:15.664 "zone_management": false, 00:24:15.664 "zone_append": false, 00:24:15.664 "compare": false, 00:24:15.664 "compare_and_write": false, 00:24:15.664 "abort": false, 00:24:15.664 "seek_hole": false, 00:24:15.664 "seek_data": false, 00:24:15.664 "copy": false, 00:24:15.664 "nvme_iov_md": false 00:24:15.664 }, 00:24:15.664 "memory_domains": [ 00:24:15.664 { 00:24:15.664 "dma_device_id": "system", 00:24:15.664 "dma_device_type": 1 00:24:15.664 }, 00:24:15.664 { 00:24:15.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:15.664 "dma_device_type": 2 00:24:15.664 }, 00:24:15.664 { 00:24:15.664 "dma_device_id": "system", 00:24:15.664 "dma_device_type": 1 00:24:15.664 }, 00:24:15.664 { 00:24:15.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:15.664 "dma_device_type": 2 00:24:15.664 }, 00:24:15.664 { 00:24:15.664 "dma_device_id": "system", 00:24:15.664 "dma_device_type": 1 00:24:15.664 }, 00:24:15.664 { 00:24:15.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:15.664 "dma_device_type": 2 00:24:15.664 }, 00:24:15.664 { 00:24:15.664 "dma_device_id": "system", 00:24:15.664 "dma_device_type": 1 00:24:15.664 }, 00:24:15.664 { 00:24:15.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:15.664 "dma_device_type": 2 00:24:15.664 } 00:24:15.664 ], 00:24:15.664 "driver_specific": { 00:24:15.664 "raid": { 00:24:15.664 "uuid": "183172a3-7d39-4190-8cc3-5689e4712a32", 00:24:15.664 "strip_size_kb": 64, 00:24:15.664 "state": "online", 00:24:15.664 "raid_level": "concat", 00:24:15.664 "superblock": false, 00:24:15.664 "num_base_bdevs": 4, 00:24:15.664 "num_base_bdevs_discovered": 4, 00:24:15.664 "num_base_bdevs_operational": 4, 00:24:15.664 "base_bdevs_list": [ 00:24:15.664 { 00:24:15.664 "name": "NewBaseBdev", 00:24:15.664 "uuid": "3fbb8045-0ecb-4b27-b8d3-5d69a58be894", 00:24:15.664 "is_configured": true, 00:24:15.664 "data_offset": 0, 00:24:15.664 "data_size": 65536 00:24:15.664 }, 00:24:15.664 { 00:24:15.664 "name": "BaseBdev2", 00:24:15.664 "uuid": "d3d4a164-e1c3-4af3-b2bf-33736d962229", 00:24:15.664 "is_configured": true, 00:24:15.664 "data_offset": 0, 00:24:15.664 "data_size": 65536 00:24:15.664 }, 00:24:15.664 { 00:24:15.664 "name": "BaseBdev3", 00:24:15.664 "uuid": "cac3bd14-4648-46f4-a4e7-06562ea3e0cb", 00:24:15.664 "is_configured": true, 00:24:15.664 "data_offset": 0, 00:24:15.664 "data_size": 65536 00:24:15.664 }, 00:24:15.664 { 00:24:15.664 "name": "BaseBdev4", 00:24:15.664 "uuid": "7922f9f3-91a1-47e6-acb7-cc8d9ca43f43", 00:24:15.664 "is_configured": true, 00:24:15.664 "data_offset": 0, 00:24:15.664 "data_size": 65536 00:24:15.664 } 00:24:15.664 ] 00:24:15.664 } 00:24:15.664 } 
00:24:15.664 }' 00:24:15.664 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:15.664 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:24:15.664 BaseBdev2 00:24:15.664 BaseBdev3 00:24:15.664 BaseBdev4' 00:24:15.664 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:15.664 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:24:15.664 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:15.923 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:15.923 "name": "NewBaseBdev", 00:24:15.923 "aliases": [ 00:24:15.923 "3fbb8045-0ecb-4b27-b8d3-5d69a58be894" 00:24:15.923 ], 00:24:15.923 "product_name": "Malloc disk", 00:24:15.923 "block_size": 512, 00:24:15.923 "num_blocks": 65536, 00:24:15.923 "uuid": "3fbb8045-0ecb-4b27-b8d3-5d69a58be894", 00:24:15.923 "assigned_rate_limits": { 00:24:15.923 "rw_ios_per_sec": 0, 00:24:15.923 "rw_mbytes_per_sec": 0, 00:24:15.923 "r_mbytes_per_sec": 0, 00:24:15.923 "w_mbytes_per_sec": 0 00:24:15.923 }, 00:24:15.923 "claimed": true, 00:24:15.923 "claim_type": "exclusive_write", 00:24:15.923 "zoned": false, 00:24:15.923 "supported_io_types": { 00:24:15.923 "read": true, 00:24:15.923 "write": true, 00:24:15.923 "unmap": true, 00:24:15.923 "flush": true, 00:24:15.923 "reset": true, 00:24:15.923 "nvme_admin": false, 00:24:15.923 "nvme_io": false, 00:24:15.923 "nvme_io_md": false, 00:24:15.923 "write_zeroes": true, 00:24:15.923 "zcopy": true, 00:24:15.923 "get_zone_info": false, 00:24:15.923 "zone_management": false, 00:24:15.923 "zone_append": false, 00:24:15.923 "compare": false, 00:24:15.923 "compare_and_write": false, 00:24:15.923 "abort": true, 00:24:15.923 "seek_hole": false, 00:24:15.923 "seek_data": false, 00:24:15.923 "copy": true, 00:24:15.923 "nvme_iov_md": false 00:24:15.923 }, 00:24:15.923 "memory_domains": [ 00:24:15.923 { 00:24:15.923 "dma_device_id": "system", 00:24:15.923 "dma_device_type": 1 00:24:15.923 }, 00:24:15.923 { 00:24:15.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:15.923 "dma_device_type": 2 00:24:15.923 } 00:24:15.923 ], 00:24:15.924 "driver_specific": {} 00:24:15.924 }' 00:24:15.924 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:15.924 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:15.924 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:15.924 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:15.924 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:15.924 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:15.924 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:16.183 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:16.183 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:16.183 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:16.183 18:51:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:16.183 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:16.183 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:16.183 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:16.183 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:16.442 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:16.442 "name": "BaseBdev2", 00:24:16.442 "aliases": [ 00:24:16.442 "d3d4a164-e1c3-4af3-b2bf-33736d962229" 00:24:16.442 ], 00:24:16.442 "product_name": "Malloc disk", 00:24:16.442 "block_size": 512, 00:24:16.442 "num_blocks": 65536, 00:24:16.442 "uuid": "d3d4a164-e1c3-4af3-b2bf-33736d962229", 00:24:16.442 "assigned_rate_limits": { 00:24:16.442 "rw_ios_per_sec": 0, 00:24:16.442 "rw_mbytes_per_sec": 0, 00:24:16.442 "r_mbytes_per_sec": 0, 00:24:16.442 "w_mbytes_per_sec": 0 00:24:16.442 }, 00:24:16.442 "claimed": true, 00:24:16.442 "claim_type": "exclusive_write", 00:24:16.442 "zoned": false, 00:24:16.442 "supported_io_types": { 00:24:16.442 "read": true, 00:24:16.442 "write": true, 00:24:16.442 "unmap": true, 00:24:16.442 "flush": true, 00:24:16.442 "reset": true, 00:24:16.442 "nvme_admin": false, 00:24:16.442 "nvme_io": false, 00:24:16.442 "nvme_io_md": false, 00:24:16.442 "write_zeroes": true, 00:24:16.442 "zcopy": true, 00:24:16.442 "get_zone_info": false, 00:24:16.442 "zone_management": false, 00:24:16.442 "zone_append": false, 00:24:16.442 "compare": false, 00:24:16.442 "compare_and_write": false, 00:24:16.442 "abort": true, 00:24:16.442 "seek_hole": false, 00:24:16.442 "seek_data": false, 00:24:16.442 "copy": true, 00:24:16.442 "nvme_iov_md": false 00:24:16.442 }, 00:24:16.442 "memory_domains": [ 00:24:16.442 { 00:24:16.442 "dma_device_id": "system", 00:24:16.442 "dma_device_type": 1 00:24:16.442 }, 00:24:16.442 { 00:24:16.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:16.442 "dma_device_type": 2 00:24:16.442 } 00:24:16.442 ], 00:24:16.442 "driver_specific": {} 00:24:16.442 }' 00:24:16.442 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:16.701 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:16.701 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:16.701 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:16.701 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:16.701 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:16.701 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:16.701 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:16.701 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:16.701 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:16.960 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:16.960 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:16.960 
18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:16.960 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:16.960 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:17.219 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:17.219 "name": "BaseBdev3", 00:24:17.219 "aliases": [ 00:24:17.219 "cac3bd14-4648-46f4-a4e7-06562ea3e0cb" 00:24:17.219 ], 00:24:17.219 "product_name": "Malloc disk", 00:24:17.219 "block_size": 512, 00:24:17.219 "num_blocks": 65536, 00:24:17.219 "uuid": "cac3bd14-4648-46f4-a4e7-06562ea3e0cb", 00:24:17.219 "assigned_rate_limits": { 00:24:17.219 "rw_ios_per_sec": 0, 00:24:17.219 "rw_mbytes_per_sec": 0, 00:24:17.219 "r_mbytes_per_sec": 0, 00:24:17.219 "w_mbytes_per_sec": 0 00:24:17.219 }, 00:24:17.219 "claimed": true, 00:24:17.219 "claim_type": "exclusive_write", 00:24:17.219 "zoned": false, 00:24:17.219 "supported_io_types": { 00:24:17.219 "read": true, 00:24:17.219 "write": true, 00:24:17.219 "unmap": true, 00:24:17.219 "flush": true, 00:24:17.219 "reset": true, 00:24:17.219 "nvme_admin": false, 00:24:17.219 "nvme_io": false, 00:24:17.219 "nvme_io_md": false, 00:24:17.219 "write_zeroes": true, 00:24:17.219 "zcopy": true, 00:24:17.219 "get_zone_info": false, 00:24:17.219 "zone_management": false, 00:24:17.219 "zone_append": false, 00:24:17.219 "compare": false, 00:24:17.219 "compare_and_write": false, 00:24:17.219 "abort": true, 00:24:17.219 "seek_hole": false, 00:24:17.219 "seek_data": false, 00:24:17.219 "copy": true, 00:24:17.219 "nvme_iov_md": false 00:24:17.219 }, 00:24:17.219 "memory_domains": [ 00:24:17.219 { 00:24:17.219 "dma_device_id": "system", 00:24:17.219 "dma_device_type": 1 00:24:17.219 }, 00:24:17.219 { 00:24:17.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:17.219 "dma_device_type": 2 00:24:17.219 } 00:24:17.219 ], 00:24:17.219 "driver_specific": {} 00:24:17.219 }' 00:24:17.219 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:17.219 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:17.219 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:17.219 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:17.484 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:17.484 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:17.484 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:17.484 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:17.484 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:17.484 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:17.484 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:17.484 18:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:17.484 18:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:17.484 18:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:17.484 18:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:17.786 18:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:17.786 "name": "BaseBdev4", 00:24:17.786 "aliases": [ 00:24:17.786 "7922f9f3-91a1-47e6-acb7-cc8d9ca43f43" 00:24:17.786 ], 00:24:17.786 "product_name": "Malloc disk", 00:24:17.786 "block_size": 512, 00:24:17.786 "num_blocks": 65536, 00:24:17.786 "uuid": "7922f9f3-91a1-47e6-acb7-cc8d9ca43f43", 00:24:17.786 "assigned_rate_limits": { 00:24:17.786 "rw_ios_per_sec": 0, 00:24:17.786 "rw_mbytes_per_sec": 0, 00:24:17.786 "r_mbytes_per_sec": 0, 00:24:17.786 "w_mbytes_per_sec": 0 00:24:17.786 }, 00:24:17.786 "claimed": true, 00:24:17.786 "claim_type": "exclusive_write", 00:24:17.786 "zoned": false, 00:24:17.786 "supported_io_types": { 00:24:17.786 "read": true, 00:24:17.786 "write": true, 00:24:17.786 "unmap": true, 00:24:17.786 "flush": true, 00:24:17.786 "reset": true, 00:24:17.786 "nvme_admin": false, 00:24:17.786 "nvme_io": false, 00:24:17.786 "nvme_io_md": false, 00:24:17.786 "write_zeroes": true, 00:24:17.786 "zcopy": true, 00:24:17.786 "get_zone_info": false, 00:24:17.786 "zone_management": false, 00:24:17.786 "zone_append": false, 00:24:17.786 "compare": false, 00:24:17.786 "compare_and_write": false, 00:24:17.786 "abort": true, 00:24:17.786 "seek_hole": false, 00:24:17.786 "seek_data": false, 00:24:17.786 "copy": true, 00:24:17.786 "nvme_iov_md": false 00:24:17.786 }, 00:24:17.786 "memory_domains": [ 00:24:17.786 { 00:24:17.786 "dma_device_id": "system", 00:24:17.786 "dma_device_type": 1 00:24:17.786 }, 00:24:17.786 { 00:24:17.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:17.786 "dma_device_type": 2 00:24:17.786 } 00:24:17.786 ], 00:24:17.786 "driver_specific": {} 00:24:17.786 }' 00:24:17.786 18:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:18.048 18:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:18.048 18:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:18.048 18:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:18.048 18:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:18.048 18:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:18.048 18:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:18.048 18:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:18.048 18:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:18.048 18:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:18.048 18:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:18.307 18:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:18.307 18:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:18.566 [2024-07-25 18:51:18.910536] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:18.566 [2024-07-25 18:51:18.910837] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:24:18.566 [2024-07-25 18:51:18.911110] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:18.566 [2024-07-25 18:51:18.911307] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:18.566 [2024-07-25 18:51:18.911401] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:24:18.566 18:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 136834 00:24:18.566 18:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 136834 ']' 00:24:18.566 18:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 136834 00:24:18.566 18:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:24:18.566 18:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:18.566 18:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 136834 00:24:18.566 18:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:18.566 18:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:18.566 18:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 136834' 00:24:18.566 killing process with pid 136834 00:24:18.566 18:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 136834 00:24:18.566 [2024-07-25 18:51:18.966914] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:18.566 18:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 136834 00:24:18.826 [2024-07-25 18:51:19.335130] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:20.204 ************************************ 00:24:20.204 END TEST raid_state_function_test 00:24:20.204 ************************************ 00:24:20.204 18:51:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:24:20.204 00:24:20.204 real 0m33.597s 00:24:20.204 user 1m0.262s 00:24:20.204 sys 0m5.639s 00:24:20.204 18:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:20.204 18:51:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:20.204 18:51:20 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:24:20.204 18:51:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:20.204 18:51:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:20.204 18:51:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:20.204 ************************************ 00:24:20.204 START TEST raid_state_function_test_sb 00:24:20.204 ************************************ 00:24:20.204 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:24:20.204 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:24:20.204 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:24:20.204 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 
00:24:20.204 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:24:20.204 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:24:20.204 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:20.204 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:24:20.204 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:24:20.205 Process raid pid: 137931 00:24:20.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=137931 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 137931' 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 137931 /var/tmp/spdk-raid.sock 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 137931 ']' 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:20.205 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.464 [2024-07-25 18:51:20.782668] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:24:20.464 [2024-07-25 18:51:20.783181] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.464 [2024-07-25 18:51:20.981172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.723 [2024-07-25 18:51:21.233842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.982 [2024-07-25 18:51:21.431194] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:21.242 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:21.242 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:24:21.242 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:21.501 [2024-07-25 18:51:22.026546] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:21.501 [2024-07-25 18:51:22.026882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:21.501 [2024-07-25 18:51:22.026989] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:21.501 [2024-07-25 18:51:22.027050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:21.501 [2024-07-25 18:51:22.027117] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:21.501 [2024-07-25 18:51:22.027164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:21.501 [2024-07-25 18:51:22.027190] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:21.501 [2024-07-25 18:51:22.027274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:21.501 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:21.501 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:21.501 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:21.501 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:21.501 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:21.501 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:21.501 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:21.501 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:21.501 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:21.501 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:21.501 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.501 18:51:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:21.760 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:21.760 "name": "Existed_Raid", 00:24:21.760 "uuid": "bef550fc-a937-4e09-b330-e664bc9439b8", 00:24:21.760 "strip_size_kb": 64, 00:24:21.760 "state": "configuring", 00:24:21.760 "raid_level": "concat", 00:24:21.760 "superblock": true, 00:24:21.760 "num_base_bdevs": 4, 00:24:21.760 "num_base_bdevs_discovered": 0, 00:24:21.760 "num_base_bdevs_operational": 4, 00:24:21.760 "base_bdevs_list": [ 00:24:21.760 { 00:24:21.760 "name": "BaseBdev1", 00:24:21.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.760 "is_configured": false, 00:24:21.760 "data_offset": 0, 00:24:21.760 "data_size": 0 00:24:21.760 }, 00:24:21.760 { 00:24:21.760 "name": "BaseBdev2", 00:24:21.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.760 "is_configured": false, 00:24:21.760 "data_offset": 0, 00:24:21.760 "data_size": 0 00:24:21.760 }, 00:24:21.760 { 00:24:21.760 "name": "BaseBdev3", 00:24:21.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.760 "is_configured": false, 00:24:21.760 "data_offset": 0, 00:24:21.760 "data_size": 0 00:24:21.760 }, 00:24:21.760 { 00:24:21.760 "name": "BaseBdev4", 00:24:21.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.760 "is_configured": false, 00:24:21.760 "data_offset": 0, 00:24:21.760 "data_size": 0 00:24:21.760 } 00:24:21.760 ] 00:24:21.760 }' 00:24:21.760 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:21.760 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:22.329 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:22.587 [2024-07-25 18:51:23.018626] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:22.587 [2024-07-25 18:51:23.018863] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:24:22.587 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:22.846 [2024-07-25 18:51:23.282685] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:22.846 [2024-07-25 18:51:23.282942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:22.846 [2024-07-25 18:51:23.283059] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:22.846 [2024-07-25 18:51:23.283143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:22.846 [2024-07-25 18:51:23.283322] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:22.846 [2024-07-25 18:51:23.283388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:22.846 [2024-07-25 18:51:23.283414] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:22.846 [2024-07-25 18:51:23.283457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:22.846 18:51:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:23.105 [2024-07-25 18:51:23.542867] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:23.105 BaseBdev1 00:24:23.105 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:24:23.105 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:24:23.105 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:23.105 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:23.105 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:23.105 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:23.105 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:23.363 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:23.622 [ 00:24:23.622 { 00:24:23.622 "name": "BaseBdev1", 00:24:23.622 "aliases": [ 00:24:23.622 "0ed3fcfb-de43-41eb-bffe-d4b8e85e9946" 00:24:23.622 ], 00:24:23.622 "product_name": "Malloc disk", 00:24:23.622 "block_size": 512, 00:24:23.622 "num_blocks": 65536, 00:24:23.622 "uuid": "0ed3fcfb-de43-41eb-bffe-d4b8e85e9946", 00:24:23.622 "assigned_rate_limits": { 00:24:23.622 "rw_ios_per_sec": 0, 00:24:23.622 "rw_mbytes_per_sec": 0, 00:24:23.622 "r_mbytes_per_sec": 0, 00:24:23.622 "w_mbytes_per_sec": 0 00:24:23.622 }, 00:24:23.622 "claimed": true, 00:24:23.622 "claim_type": "exclusive_write", 00:24:23.622 "zoned": false, 00:24:23.622 "supported_io_types": { 00:24:23.622 "read": true, 00:24:23.622 "write": true, 00:24:23.622 "unmap": true, 00:24:23.622 "flush": true, 00:24:23.622 "reset": true, 00:24:23.622 "nvme_admin": false, 00:24:23.622 "nvme_io": false, 00:24:23.622 "nvme_io_md": false, 00:24:23.622 "write_zeroes": true, 00:24:23.622 "zcopy": true, 00:24:23.622 "get_zone_info": false, 00:24:23.622 "zone_management": false, 00:24:23.622 "zone_append": false, 00:24:23.622 "compare": false, 00:24:23.622 "compare_and_write": false, 00:24:23.622 "abort": true, 00:24:23.622 "seek_hole": false, 00:24:23.622 "seek_data": false, 00:24:23.622 "copy": true, 00:24:23.622 "nvme_iov_md": false 00:24:23.622 }, 00:24:23.622 "memory_domains": [ 00:24:23.622 { 00:24:23.622 "dma_device_id": "system", 00:24:23.622 "dma_device_type": 1 00:24:23.622 }, 00:24:23.622 { 00:24:23.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:23.622 "dma_device_type": 2 00:24:23.622 } 00:24:23.622 ], 00:24:23.622 "driver_specific": {} 00:24:23.622 } 00:24:23.622 ] 00:24:23.622 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:23.622 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:23.622 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:23.622 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:24:23.622 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:23.622 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:23.622 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:23.622 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:23.622 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:23.622 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:23.622 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:23.622 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.622 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:23.622 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:23.622 "name": "Existed_Raid", 00:24:23.622 "uuid": "0b7d1412-da88-4ca7-a238-95b452d183ce", 00:24:23.622 "strip_size_kb": 64, 00:24:23.622 "state": "configuring", 00:24:23.622 "raid_level": "concat", 00:24:23.622 "superblock": true, 00:24:23.622 "num_base_bdevs": 4, 00:24:23.622 "num_base_bdevs_discovered": 1, 00:24:23.622 "num_base_bdevs_operational": 4, 00:24:23.622 "base_bdevs_list": [ 00:24:23.622 { 00:24:23.622 "name": "BaseBdev1", 00:24:23.622 "uuid": "0ed3fcfb-de43-41eb-bffe-d4b8e85e9946", 00:24:23.622 "is_configured": true, 00:24:23.622 "data_offset": 2048, 00:24:23.622 "data_size": 63488 00:24:23.622 }, 00:24:23.622 { 00:24:23.622 "name": "BaseBdev2", 00:24:23.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.622 "is_configured": false, 00:24:23.622 "data_offset": 0, 00:24:23.622 "data_size": 0 00:24:23.622 }, 00:24:23.622 { 00:24:23.622 "name": "BaseBdev3", 00:24:23.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.622 "is_configured": false, 00:24:23.622 "data_offset": 0, 00:24:23.622 "data_size": 0 00:24:23.622 }, 00:24:23.622 { 00:24:23.622 "name": "BaseBdev4", 00:24:23.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.622 "is_configured": false, 00:24:23.622 "data_offset": 0, 00:24:23.622 "data_size": 0 00:24:23.622 } 00:24:23.622 ] 00:24:23.622 }' 00:24:23.622 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:23.622 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.190 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:24.449 [2024-07-25 18:51:24.903161] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:24.449 [2024-07-25 18:51:24.903372] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:24:24.449 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:24.708 [2024-07-25 18:51:25.079269] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:24.708 [2024-07-25 18:51:25.081665] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:24.708 [2024-07-25 18:51:25.081851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:24.708 [2024-07-25 18:51:25.081936] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:24.708 [2024-07-25 18:51:25.081996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:24.708 [2024-07-25 18:51:25.082024] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:24.708 [2024-07-25 18:51:25.082061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:24.708 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:24:24.708 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:24.708 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:24.708 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:24.708 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:24.708 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:24.708 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:24.708 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:24.708 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:24.708 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:24.708 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:24.708 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:24.708 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.708 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:24.708 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:24.708 "name": "Existed_Raid", 00:24:24.708 "uuid": "75212252-e2fe-4570-a993-b2757686952e", 00:24:24.708 "strip_size_kb": 64, 00:24:24.708 "state": "configuring", 00:24:24.708 "raid_level": "concat", 00:24:24.708 "superblock": true, 00:24:24.708 "num_base_bdevs": 4, 00:24:24.708 "num_base_bdevs_discovered": 1, 00:24:24.708 "num_base_bdevs_operational": 4, 00:24:24.708 "base_bdevs_list": [ 00:24:24.708 { 00:24:24.708 "name": "BaseBdev1", 00:24:24.708 "uuid": "0ed3fcfb-de43-41eb-bffe-d4b8e85e9946", 00:24:24.708 "is_configured": true, 00:24:24.708 "data_offset": 2048, 00:24:24.708 "data_size": 63488 00:24:24.708 }, 00:24:24.708 { 00:24:24.708 "name": "BaseBdev2", 00:24:24.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.708 "is_configured": false, 00:24:24.708 "data_offset": 0, 00:24:24.708 "data_size": 0 00:24:24.708 }, 
00:24:24.708 { 00:24:24.708 "name": "BaseBdev3", 00:24:24.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.708 "is_configured": false, 00:24:24.708 "data_offset": 0, 00:24:24.708 "data_size": 0 00:24:24.708 }, 00:24:24.708 { 00:24:24.708 "name": "BaseBdev4", 00:24:24.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.708 "is_configured": false, 00:24:24.708 "data_offset": 0, 00:24:24.708 "data_size": 0 00:24:24.708 } 00:24:24.708 ] 00:24:24.708 }' 00:24:24.708 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:24.708 18:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.276 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:25.534 [2024-07-25 18:51:26.083728] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:25.534 BaseBdev2 00:24:25.534 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:24:25.534 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:24:25.534 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:25.534 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:25.534 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:25.534 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:25.534 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:25.792 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:26.051 [ 00:24:26.051 { 00:24:26.051 "name": "BaseBdev2", 00:24:26.051 "aliases": [ 00:24:26.051 "07e5b167-415d-4419-9a26-e16bfa1b4cd9" 00:24:26.051 ], 00:24:26.051 "product_name": "Malloc disk", 00:24:26.051 "block_size": 512, 00:24:26.051 "num_blocks": 65536, 00:24:26.051 "uuid": "07e5b167-415d-4419-9a26-e16bfa1b4cd9", 00:24:26.051 "assigned_rate_limits": { 00:24:26.051 "rw_ios_per_sec": 0, 00:24:26.051 "rw_mbytes_per_sec": 0, 00:24:26.051 "r_mbytes_per_sec": 0, 00:24:26.051 "w_mbytes_per_sec": 0 00:24:26.051 }, 00:24:26.051 "claimed": true, 00:24:26.051 "claim_type": "exclusive_write", 00:24:26.051 "zoned": false, 00:24:26.051 "supported_io_types": { 00:24:26.051 "read": true, 00:24:26.051 "write": true, 00:24:26.051 "unmap": true, 00:24:26.051 "flush": true, 00:24:26.051 "reset": true, 00:24:26.051 "nvme_admin": false, 00:24:26.051 "nvme_io": false, 00:24:26.051 "nvme_io_md": false, 00:24:26.051 "write_zeroes": true, 00:24:26.051 "zcopy": true, 00:24:26.051 "get_zone_info": false, 00:24:26.051 "zone_management": false, 00:24:26.051 "zone_append": false, 00:24:26.051 "compare": false, 00:24:26.051 "compare_and_write": false, 00:24:26.051 "abort": true, 00:24:26.051 "seek_hole": false, 00:24:26.051 "seek_data": false, 00:24:26.051 "copy": true, 00:24:26.051 "nvme_iov_md": false 00:24:26.051 }, 00:24:26.051 "memory_domains": [ 00:24:26.051 { 00:24:26.051 "dma_device_id": "system", 00:24:26.051 
"dma_device_type": 1 00:24:26.051 }, 00:24:26.051 { 00:24:26.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:26.051 "dma_device_type": 2 00:24:26.051 } 00:24:26.051 ], 00:24:26.051 "driver_specific": {} 00:24:26.051 } 00:24:26.051 ] 00:24:26.051 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:26.051 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:26.051 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:26.051 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:26.051 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:26.051 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:26.051 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:26.052 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:26.052 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:26.052 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:26.052 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:26.052 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:26.052 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:26.052 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.052 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:26.311 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:26.311 "name": "Existed_Raid", 00:24:26.311 "uuid": "75212252-e2fe-4570-a993-b2757686952e", 00:24:26.311 "strip_size_kb": 64, 00:24:26.311 "state": "configuring", 00:24:26.311 "raid_level": "concat", 00:24:26.311 "superblock": true, 00:24:26.311 "num_base_bdevs": 4, 00:24:26.311 "num_base_bdevs_discovered": 2, 00:24:26.311 "num_base_bdevs_operational": 4, 00:24:26.311 "base_bdevs_list": [ 00:24:26.311 { 00:24:26.311 "name": "BaseBdev1", 00:24:26.311 "uuid": "0ed3fcfb-de43-41eb-bffe-d4b8e85e9946", 00:24:26.311 "is_configured": true, 00:24:26.311 "data_offset": 2048, 00:24:26.311 "data_size": 63488 00:24:26.311 }, 00:24:26.311 { 00:24:26.311 "name": "BaseBdev2", 00:24:26.311 "uuid": "07e5b167-415d-4419-9a26-e16bfa1b4cd9", 00:24:26.311 "is_configured": true, 00:24:26.311 "data_offset": 2048, 00:24:26.311 "data_size": 63488 00:24:26.311 }, 00:24:26.311 { 00:24:26.311 "name": "BaseBdev3", 00:24:26.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.311 "is_configured": false, 00:24:26.311 "data_offset": 0, 00:24:26.311 "data_size": 0 00:24:26.311 }, 00:24:26.311 { 00:24:26.311 "name": "BaseBdev4", 00:24:26.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.311 "is_configured": false, 00:24:26.311 "data_offset": 0, 00:24:26.311 "data_size": 0 00:24:26.311 } 00:24:26.311 ] 00:24:26.311 }' 00:24:26.311 18:51:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:26.311 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.570 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:26.829 [2024-07-25 18:51:27.348999] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:26.829 BaseBdev3 00:24:26.829 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:24:26.829 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:24:26.829 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:26.829 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:26.829 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:26.829 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:26.829 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:27.087 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:27.347 [ 00:24:27.347 { 00:24:27.347 "name": "BaseBdev3", 00:24:27.347 "aliases": [ 00:24:27.347 "28049dcb-a1c7-4577-8492-a58f47e4653b" 00:24:27.347 ], 00:24:27.347 "product_name": "Malloc disk", 00:24:27.347 "block_size": 512, 00:24:27.347 "num_blocks": 65536, 00:24:27.347 "uuid": "28049dcb-a1c7-4577-8492-a58f47e4653b", 00:24:27.347 "assigned_rate_limits": { 00:24:27.347 "rw_ios_per_sec": 0, 00:24:27.347 "rw_mbytes_per_sec": 0, 00:24:27.347 "r_mbytes_per_sec": 0, 00:24:27.347 "w_mbytes_per_sec": 0 00:24:27.347 }, 00:24:27.347 "claimed": true, 00:24:27.347 "claim_type": "exclusive_write", 00:24:27.347 "zoned": false, 00:24:27.347 "supported_io_types": { 00:24:27.347 "read": true, 00:24:27.347 "write": true, 00:24:27.347 "unmap": true, 00:24:27.347 "flush": true, 00:24:27.347 "reset": true, 00:24:27.347 "nvme_admin": false, 00:24:27.347 "nvme_io": false, 00:24:27.347 "nvme_io_md": false, 00:24:27.347 "write_zeroes": true, 00:24:27.347 "zcopy": true, 00:24:27.347 "get_zone_info": false, 00:24:27.347 "zone_management": false, 00:24:27.347 "zone_append": false, 00:24:27.347 "compare": false, 00:24:27.347 "compare_and_write": false, 00:24:27.347 "abort": true, 00:24:27.347 "seek_hole": false, 00:24:27.347 "seek_data": false, 00:24:27.347 "copy": true, 00:24:27.347 "nvme_iov_md": false 00:24:27.347 }, 00:24:27.347 "memory_domains": [ 00:24:27.347 { 00:24:27.347 "dma_device_id": "system", 00:24:27.347 "dma_device_type": 1 00:24:27.347 }, 00:24:27.347 { 00:24:27.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:27.347 "dma_device_type": 2 00:24:27.347 } 00:24:27.347 ], 00:24:27.347 "driver_specific": {} 00:24:27.347 } 00:24:27.347 ] 00:24:27.347 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:27.347 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:27.347 18:51:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:27.347 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:27.347 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:27.347 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:27.347 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:27.347 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:27.347 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:27.347 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:27.347 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:27.347 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:27.347 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:27.347 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.347 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:27.606 18:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:27.606 "name": "Existed_Raid", 00:24:27.606 "uuid": "75212252-e2fe-4570-a993-b2757686952e", 00:24:27.606 "strip_size_kb": 64, 00:24:27.606 "state": "configuring", 00:24:27.606 "raid_level": "concat", 00:24:27.606 "superblock": true, 00:24:27.606 "num_base_bdevs": 4, 00:24:27.606 "num_base_bdevs_discovered": 3, 00:24:27.606 "num_base_bdevs_operational": 4, 00:24:27.606 "base_bdevs_list": [ 00:24:27.606 { 00:24:27.606 "name": "BaseBdev1", 00:24:27.606 "uuid": "0ed3fcfb-de43-41eb-bffe-d4b8e85e9946", 00:24:27.606 "is_configured": true, 00:24:27.606 "data_offset": 2048, 00:24:27.606 "data_size": 63488 00:24:27.606 }, 00:24:27.606 { 00:24:27.606 "name": "BaseBdev2", 00:24:27.606 "uuid": "07e5b167-415d-4419-9a26-e16bfa1b4cd9", 00:24:27.606 "is_configured": true, 00:24:27.606 "data_offset": 2048, 00:24:27.606 "data_size": 63488 00:24:27.606 }, 00:24:27.606 { 00:24:27.606 "name": "BaseBdev3", 00:24:27.606 "uuid": "28049dcb-a1c7-4577-8492-a58f47e4653b", 00:24:27.606 "is_configured": true, 00:24:27.606 "data_offset": 2048, 00:24:27.606 "data_size": 63488 00:24:27.606 }, 00:24:27.606 { 00:24:27.606 "name": "BaseBdev4", 00:24:27.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.606 "is_configured": false, 00:24:27.606 "data_offset": 0, 00:24:27.606 "data_size": 0 00:24:27.606 } 00:24:27.606 ] 00:24:27.606 }' 00:24:27.606 18:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:27.606 18:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.173 18:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:28.432 [2024-07-25 18:51:28.813528] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:28.432 
[2024-07-25 18:51:28.814112] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:24:28.432 [2024-07-25 18:51:28.814235] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:28.432 [2024-07-25 18:51:28.814395] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:24:28.432 [2024-07-25 18:51:28.814802] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:24:28.432 [2024-07-25 18:51:28.814856] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:24:28.432 BaseBdev4 00:24:28.432 [2024-07-25 18:51:28.815159] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:28.432 18:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:24:28.432 18:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:24:28.432 18:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:28.432 18:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:28.432 18:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:28.432 18:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:28.432 18:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:28.691 18:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:28.691 [ 00:24:28.691 { 00:24:28.691 "name": "BaseBdev4", 00:24:28.691 "aliases": [ 00:24:28.691 "89c08bb5-d85a-40a5-8012-48411ad88aa2" 00:24:28.691 ], 00:24:28.691 "product_name": "Malloc disk", 00:24:28.691 "block_size": 512, 00:24:28.691 "num_blocks": 65536, 00:24:28.691 "uuid": "89c08bb5-d85a-40a5-8012-48411ad88aa2", 00:24:28.691 "assigned_rate_limits": { 00:24:28.691 "rw_ios_per_sec": 0, 00:24:28.691 "rw_mbytes_per_sec": 0, 00:24:28.691 "r_mbytes_per_sec": 0, 00:24:28.691 "w_mbytes_per_sec": 0 00:24:28.691 }, 00:24:28.691 "claimed": true, 00:24:28.691 "claim_type": "exclusive_write", 00:24:28.691 "zoned": false, 00:24:28.691 "supported_io_types": { 00:24:28.691 "read": true, 00:24:28.691 "write": true, 00:24:28.691 "unmap": true, 00:24:28.691 "flush": true, 00:24:28.691 "reset": true, 00:24:28.691 "nvme_admin": false, 00:24:28.691 "nvme_io": false, 00:24:28.691 "nvme_io_md": false, 00:24:28.691 "write_zeroes": true, 00:24:28.691 "zcopy": true, 00:24:28.691 "get_zone_info": false, 00:24:28.691 "zone_management": false, 00:24:28.691 "zone_append": false, 00:24:28.691 "compare": false, 00:24:28.691 "compare_and_write": false, 00:24:28.691 "abort": true, 00:24:28.691 "seek_hole": false, 00:24:28.691 "seek_data": false, 00:24:28.691 "copy": true, 00:24:28.691 "nvme_iov_md": false 00:24:28.691 }, 00:24:28.691 "memory_domains": [ 00:24:28.691 { 00:24:28.691 "dma_device_id": "system", 00:24:28.691 "dma_device_type": 1 00:24:28.691 }, 00:24:28.691 { 00:24:28.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.691 "dma_device_type": 2 00:24:28.691 } 00:24:28.691 ], 00:24:28.691 "driver_specific": {} 00:24:28.691 } 00:24:28.691 ] 00:24:28.691 18:51:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:28.691 18:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:28.691 18:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:28.691 18:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:24:28.691 18:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:28.691 18:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:28.691 18:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:28.691 18:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:28.691 18:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:28.691 18:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:28.691 18:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:28.691 18:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:28.691 18:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:28.691 18:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.951 18:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:28.951 18:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:28.951 "name": "Existed_Raid", 00:24:28.951 "uuid": "75212252-e2fe-4570-a993-b2757686952e", 00:24:28.951 "strip_size_kb": 64, 00:24:28.951 "state": "online", 00:24:28.951 "raid_level": "concat", 00:24:28.951 "superblock": true, 00:24:28.951 "num_base_bdevs": 4, 00:24:28.951 "num_base_bdevs_discovered": 4, 00:24:28.951 "num_base_bdevs_operational": 4, 00:24:28.951 "base_bdevs_list": [ 00:24:28.951 { 00:24:28.951 "name": "BaseBdev1", 00:24:28.951 "uuid": "0ed3fcfb-de43-41eb-bffe-d4b8e85e9946", 00:24:28.951 "is_configured": true, 00:24:28.951 "data_offset": 2048, 00:24:28.951 "data_size": 63488 00:24:28.951 }, 00:24:28.951 { 00:24:28.951 "name": "BaseBdev2", 00:24:28.951 "uuid": "07e5b167-415d-4419-9a26-e16bfa1b4cd9", 00:24:28.951 "is_configured": true, 00:24:28.951 "data_offset": 2048, 00:24:28.951 "data_size": 63488 00:24:28.951 }, 00:24:28.951 { 00:24:28.951 "name": "BaseBdev3", 00:24:28.951 "uuid": "28049dcb-a1c7-4577-8492-a58f47e4653b", 00:24:28.951 "is_configured": true, 00:24:28.951 "data_offset": 2048, 00:24:28.951 "data_size": 63488 00:24:28.951 }, 00:24:28.951 { 00:24:28.951 "name": "BaseBdev4", 00:24:28.951 "uuid": "89c08bb5-d85a-40a5-8012-48411ad88aa2", 00:24:28.951 "is_configured": true, 00:24:28.951 "data_offset": 2048, 00:24:28.951 "data_size": 63488 00:24:28.951 } 00:24:28.951 ] 00:24:28.951 }' 00:24:28.951 18:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:28.951 18:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.519 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
verify_raid_bdev_properties Existed_Raid 00:24:29.519 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:29.519 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:29.519 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:29.519 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:29.519 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:24:29.519 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:29.519 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:29.778 [2024-07-25 18:51:30.210058] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:29.778 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:29.778 "name": "Existed_Raid", 00:24:29.778 "aliases": [ 00:24:29.778 "75212252-e2fe-4570-a993-b2757686952e" 00:24:29.778 ], 00:24:29.778 "product_name": "Raid Volume", 00:24:29.778 "block_size": 512, 00:24:29.778 "num_blocks": 253952, 00:24:29.778 "uuid": "75212252-e2fe-4570-a993-b2757686952e", 00:24:29.778 "assigned_rate_limits": { 00:24:29.778 "rw_ios_per_sec": 0, 00:24:29.778 "rw_mbytes_per_sec": 0, 00:24:29.778 "r_mbytes_per_sec": 0, 00:24:29.778 "w_mbytes_per_sec": 0 00:24:29.778 }, 00:24:29.778 "claimed": false, 00:24:29.778 "zoned": false, 00:24:29.778 "supported_io_types": { 00:24:29.778 "read": true, 00:24:29.778 "write": true, 00:24:29.778 "unmap": true, 00:24:29.778 "flush": true, 00:24:29.778 "reset": true, 00:24:29.778 "nvme_admin": false, 00:24:29.778 "nvme_io": false, 00:24:29.778 "nvme_io_md": false, 00:24:29.778 "write_zeroes": true, 00:24:29.778 "zcopy": false, 00:24:29.778 "get_zone_info": false, 00:24:29.778 "zone_management": false, 00:24:29.778 "zone_append": false, 00:24:29.778 "compare": false, 00:24:29.778 "compare_and_write": false, 00:24:29.778 "abort": false, 00:24:29.778 "seek_hole": false, 00:24:29.778 "seek_data": false, 00:24:29.778 "copy": false, 00:24:29.778 "nvme_iov_md": false 00:24:29.778 }, 00:24:29.778 "memory_domains": [ 00:24:29.778 { 00:24:29.778 "dma_device_id": "system", 00:24:29.778 "dma_device_type": 1 00:24:29.778 }, 00:24:29.778 { 00:24:29.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.778 "dma_device_type": 2 00:24:29.778 }, 00:24:29.778 { 00:24:29.778 "dma_device_id": "system", 00:24:29.778 "dma_device_type": 1 00:24:29.778 }, 00:24:29.778 { 00:24:29.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.778 "dma_device_type": 2 00:24:29.778 }, 00:24:29.778 { 00:24:29.778 "dma_device_id": "system", 00:24:29.778 "dma_device_type": 1 00:24:29.778 }, 00:24:29.778 { 00:24:29.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.778 "dma_device_type": 2 00:24:29.778 }, 00:24:29.778 { 00:24:29.778 "dma_device_id": "system", 00:24:29.778 "dma_device_type": 1 00:24:29.778 }, 00:24:29.778 { 00:24:29.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.778 "dma_device_type": 2 00:24:29.778 } 00:24:29.778 ], 00:24:29.778 "driver_specific": { 00:24:29.778 "raid": { 00:24:29.778 "uuid": "75212252-e2fe-4570-a993-b2757686952e", 00:24:29.778 "strip_size_kb": 64, 00:24:29.778 "state": "online", 00:24:29.778 "raid_level": "concat", 
00:24:29.778 "superblock": true, 00:24:29.778 "num_base_bdevs": 4, 00:24:29.778 "num_base_bdevs_discovered": 4, 00:24:29.778 "num_base_bdevs_operational": 4, 00:24:29.778 "base_bdevs_list": [ 00:24:29.778 { 00:24:29.778 "name": "BaseBdev1", 00:24:29.778 "uuid": "0ed3fcfb-de43-41eb-bffe-d4b8e85e9946", 00:24:29.778 "is_configured": true, 00:24:29.778 "data_offset": 2048, 00:24:29.778 "data_size": 63488 00:24:29.778 }, 00:24:29.778 { 00:24:29.778 "name": "BaseBdev2", 00:24:29.778 "uuid": "07e5b167-415d-4419-9a26-e16bfa1b4cd9", 00:24:29.778 "is_configured": true, 00:24:29.778 "data_offset": 2048, 00:24:29.778 "data_size": 63488 00:24:29.778 }, 00:24:29.778 { 00:24:29.778 "name": "BaseBdev3", 00:24:29.778 "uuid": "28049dcb-a1c7-4577-8492-a58f47e4653b", 00:24:29.778 "is_configured": true, 00:24:29.778 "data_offset": 2048, 00:24:29.778 "data_size": 63488 00:24:29.778 }, 00:24:29.778 { 00:24:29.778 "name": "BaseBdev4", 00:24:29.778 "uuid": "89c08bb5-d85a-40a5-8012-48411ad88aa2", 00:24:29.778 "is_configured": true, 00:24:29.778 "data_offset": 2048, 00:24:29.778 "data_size": 63488 00:24:29.778 } 00:24:29.778 ] 00:24:29.778 } 00:24:29.778 } 00:24:29.778 }' 00:24:29.778 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:29.778 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:24:29.778 BaseBdev2 00:24:29.778 BaseBdev3 00:24:29.778 BaseBdev4' 00:24:29.778 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:29.778 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:29.778 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:30.038 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:30.038 "name": "BaseBdev1", 00:24:30.038 "aliases": [ 00:24:30.038 "0ed3fcfb-de43-41eb-bffe-d4b8e85e9946" 00:24:30.038 ], 00:24:30.038 "product_name": "Malloc disk", 00:24:30.038 "block_size": 512, 00:24:30.038 "num_blocks": 65536, 00:24:30.038 "uuid": "0ed3fcfb-de43-41eb-bffe-d4b8e85e9946", 00:24:30.038 "assigned_rate_limits": { 00:24:30.038 "rw_ios_per_sec": 0, 00:24:30.038 "rw_mbytes_per_sec": 0, 00:24:30.038 "r_mbytes_per_sec": 0, 00:24:30.038 "w_mbytes_per_sec": 0 00:24:30.038 }, 00:24:30.038 "claimed": true, 00:24:30.038 "claim_type": "exclusive_write", 00:24:30.038 "zoned": false, 00:24:30.038 "supported_io_types": { 00:24:30.038 "read": true, 00:24:30.038 "write": true, 00:24:30.038 "unmap": true, 00:24:30.038 "flush": true, 00:24:30.038 "reset": true, 00:24:30.038 "nvme_admin": false, 00:24:30.038 "nvme_io": false, 00:24:30.038 "nvme_io_md": false, 00:24:30.038 "write_zeroes": true, 00:24:30.038 "zcopy": true, 00:24:30.038 "get_zone_info": false, 00:24:30.038 "zone_management": false, 00:24:30.038 "zone_append": false, 00:24:30.038 "compare": false, 00:24:30.038 "compare_and_write": false, 00:24:30.038 "abort": true, 00:24:30.038 "seek_hole": false, 00:24:30.038 "seek_data": false, 00:24:30.038 "copy": true, 00:24:30.038 "nvme_iov_md": false 00:24:30.038 }, 00:24:30.038 "memory_domains": [ 00:24:30.038 { 00:24:30.038 "dma_device_id": "system", 00:24:30.038 "dma_device_type": 1 00:24:30.038 }, 00:24:30.038 { 00:24:30.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:24:30.038 "dma_device_type": 2 00:24:30.038 } 00:24:30.038 ], 00:24:30.038 "driver_specific": {} 00:24:30.038 }' 00:24:30.038 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:30.038 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:30.038 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:30.038 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:30.038 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:30.297 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:30.297 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:30.297 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:30.297 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:30.297 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:30.297 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:30.297 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:30.297 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:30.297 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:30.297 18:51:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:30.556 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:30.556 "name": "BaseBdev2", 00:24:30.556 "aliases": [ 00:24:30.556 "07e5b167-415d-4419-9a26-e16bfa1b4cd9" 00:24:30.556 ], 00:24:30.556 "product_name": "Malloc disk", 00:24:30.556 "block_size": 512, 00:24:30.556 "num_blocks": 65536, 00:24:30.556 "uuid": "07e5b167-415d-4419-9a26-e16bfa1b4cd9", 00:24:30.556 "assigned_rate_limits": { 00:24:30.556 "rw_ios_per_sec": 0, 00:24:30.556 "rw_mbytes_per_sec": 0, 00:24:30.556 "r_mbytes_per_sec": 0, 00:24:30.556 "w_mbytes_per_sec": 0 00:24:30.556 }, 00:24:30.556 "claimed": true, 00:24:30.556 "claim_type": "exclusive_write", 00:24:30.556 "zoned": false, 00:24:30.556 "supported_io_types": { 00:24:30.556 "read": true, 00:24:30.556 "write": true, 00:24:30.556 "unmap": true, 00:24:30.556 "flush": true, 00:24:30.556 "reset": true, 00:24:30.556 "nvme_admin": false, 00:24:30.556 "nvme_io": false, 00:24:30.556 "nvme_io_md": false, 00:24:30.556 "write_zeroes": true, 00:24:30.556 "zcopy": true, 00:24:30.556 "get_zone_info": false, 00:24:30.556 "zone_management": false, 00:24:30.556 "zone_append": false, 00:24:30.556 "compare": false, 00:24:30.556 "compare_and_write": false, 00:24:30.556 "abort": true, 00:24:30.556 "seek_hole": false, 00:24:30.556 "seek_data": false, 00:24:30.556 "copy": true, 00:24:30.556 "nvme_iov_md": false 00:24:30.556 }, 00:24:30.556 "memory_domains": [ 00:24:30.556 { 00:24:30.556 "dma_device_id": "system", 00:24:30.556 "dma_device_type": 1 00:24:30.556 }, 00:24:30.556 { 00:24:30.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.556 "dma_device_type": 2 00:24:30.556 } 00:24:30.556 ], 00:24:30.556 "driver_specific": {} 00:24:30.556 }' 00:24:30.556 18:51:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:30.815 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:30.815 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:30.815 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:30.815 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:30.815 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:30.815 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:30.815 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:30.815 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:30.815 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:31.074 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:31.074 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:31.074 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:31.074 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:31.074 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:31.333 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:31.333 "name": "BaseBdev3", 00:24:31.333 "aliases": [ 00:24:31.333 "28049dcb-a1c7-4577-8492-a58f47e4653b" 00:24:31.333 ], 00:24:31.333 "product_name": "Malloc disk", 00:24:31.333 "block_size": 512, 00:24:31.333 "num_blocks": 65536, 00:24:31.333 "uuid": "28049dcb-a1c7-4577-8492-a58f47e4653b", 00:24:31.333 "assigned_rate_limits": { 00:24:31.333 "rw_ios_per_sec": 0, 00:24:31.333 "rw_mbytes_per_sec": 0, 00:24:31.333 "r_mbytes_per_sec": 0, 00:24:31.333 "w_mbytes_per_sec": 0 00:24:31.333 }, 00:24:31.333 "claimed": true, 00:24:31.333 "claim_type": "exclusive_write", 00:24:31.333 "zoned": false, 00:24:31.333 "supported_io_types": { 00:24:31.333 "read": true, 00:24:31.333 "write": true, 00:24:31.333 "unmap": true, 00:24:31.333 "flush": true, 00:24:31.333 "reset": true, 00:24:31.333 "nvme_admin": false, 00:24:31.333 "nvme_io": false, 00:24:31.333 "nvme_io_md": false, 00:24:31.333 "write_zeroes": true, 00:24:31.333 "zcopy": true, 00:24:31.333 "get_zone_info": false, 00:24:31.333 "zone_management": false, 00:24:31.333 "zone_append": false, 00:24:31.333 "compare": false, 00:24:31.333 "compare_and_write": false, 00:24:31.333 "abort": true, 00:24:31.333 "seek_hole": false, 00:24:31.333 "seek_data": false, 00:24:31.333 "copy": true, 00:24:31.333 "nvme_iov_md": false 00:24:31.333 }, 00:24:31.333 "memory_domains": [ 00:24:31.333 { 00:24:31.333 "dma_device_id": "system", 00:24:31.333 "dma_device_type": 1 00:24:31.333 }, 00:24:31.333 { 00:24:31.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:31.333 "dma_device_type": 2 00:24:31.333 } 00:24:31.333 ], 00:24:31.333 "driver_specific": {} 00:24:31.333 }' 00:24:31.333 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:31.333 18:51:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:31.333 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:31.333 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:31.333 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:31.333 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:31.333 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:31.593 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:31.593 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:31.593 18:51:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:31.593 18:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:31.593 18:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:31.593 18:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:31.593 18:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:31.593 18:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:31.865 18:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:31.865 "name": "BaseBdev4", 00:24:31.865 "aliases": [ 00:24:31.865 "89c08bb5-d85a-40a5-8012-48411ad88aa2" 00:24:31.865 ], 00:24:31.865 "product_name": "Malloc disk", 00:24:31.865 "block_size": 512, 00:24:31.865 "num_blocks": 65536, 00:24:31.865 "uuid": "89c08bb5-d85a-40a5-8012-48411ad88aa2", 00:24:31.865 "assigned_rate_limits": { 00:24:31.865 "rw_ios_per_sec": 0, 00:24:31.865 "rw_mbytes_per_sec": 0, 00:24:31.866 "r_mbytes_per_sec": 0, 00:24:31.866 "w_mbytes_per_sec": 0 00:24:31.866 }, 00:24:31.866 "claimed": true, 00:24:31.866 "claim_type": "exclusive_write", 00:24:31.866 "zoned": false, 00:24:31.866 "supported_io_types": { 00:24:31.866 "read": true, 00:24:31.866 "write": true, 00:24:31.866 "unmap": true, 00:24:31.866 "flush": true, 00:24:31.866 "reset": true, 00:24:31.866 "nvme_admin": false, 00:24:31.866 "nvme_io": false, 00:24:31.866 "nvme_io_md": false, 00:24:31.866 "write_zeroes": true, 00:24:31.866 "zcopy": true, 00:24:31.866 "get_zone_info": false, 00:24:31.866 "zone_management": false, 00:24:31.866 "zone_append": false, 00:24:31.866 "compare": false, 00:24:31.866 "compare_and_write": false, 00:24:31.866 "abort": true, 00:24:31.866 "seek_hole": false, 00:24:31.866 "seek_data": false, 00:24:31.866 "copy": true, 00:24:31.866 "nvme_iov_md": false 00:24:31.866 }, 00:24:31.866 "memory_domains": [ 00:24:31.866 { 00:24:31.866 "dma_device_id": "system", 00:24:31.866 "dma_device_type": 1 00:24:31.866 }, 00:24:31.866 { 00:24:31.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:31.866 "dma_device_type": 2 00:24:31.866 } 00:24:31.866 ], 00:24:31.866 "driver_specific": {} 00:24:31.866 }' 00:24:31.866 18:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:31.866 18:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:32.183 18:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
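The per-bdev property checks traced here (bdev_raid.sh@200-208) reduce to pulling a handful of fields out of bdev_get_bdevs with jq and confirming that the raid volume advertises the same values as every configured base bdev. A minimal standalone sketch of that loop, assuming a running SPDK target on /var/tmp/spdk-raid.sock and an assembled raid named Existed_Raid (names, socket and jq filters taken from the trace above; this is an illustration, not the verbatim verify_raid_bdev_properties helper):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Dump the raid bdev and the names of its configured members.
    raid_info=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
    base_names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid_info")
    for name in $base_names; do
        base_info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        # The raid volume must expose the same geometry and metadata layout as each member
        # (jq prints "null" for absent fields, so missing-on-both also compares equal).
        for field in .block_size .md_size .md_interleave .dif_type; do
            [[ "$(jq "$field" <<< "$raid_info")" == "$(jq "$field" <<< "$base_info")" ]] \
                || echo "property mismatch on $field for $name"
        done
    done
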
00:24:32.183 18:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:32.183 18:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:32.183 18:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:32.183 18:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:32.183 18:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:32.183 18:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:32.183 18:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:32.183 18:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:32.183 18:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:32.183 18:51:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:32.442 [2024-07-25 18:51:32.937360] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:32.442 [2024-07-25 18:51:32.937575] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:32.442 [2024-07-25 18:51:32.937792] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:32.701 18:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:24:32.701 18:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:24:32.701 18:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:32.701 18:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:24:32.701 18:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:24:32.701 18:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:24:32.701 18:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:32.701 18:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:24:32.701 18:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:32.701 18:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:32.701 18:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:32.701 18:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:32.701 18:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:32.701 18:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:32.701 18:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:32.701 18:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.701 18:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:32.701 
18:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:32.701 "name": "Existed_Raid", 00:24:32.701 "uuid": "75212252-e2fe-4570-a993-b2757686952e", 00:24:32.701 "strip_size_kb": 64, 00:24:32.701 "state": "offline", 00:24:32.701 "raid_level": "concat", 00:24:32.701 "superblock": true, 00:24:32.701 "num_base_bdevs": 4, 00:24:32.701 "num_base_bdevs_discovered": 3, 00:24:32.701 "num_base_bdevs_operational": 3, 00:24:32.701 "base_bdevs_list": [ 00:24:32.701 { 00:24:32.701 "name": null, 00:24:32.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.701 "is_configured": false, 00:24:32.701 "data_offset": 2048, 00:24:32.701 "data_size": 63488 00:24:32.701 }, 00:24:32.701 { 00:24:32.701 "name": "BaseBdev2", 00:24:32.701 "uuid": "07e5b167-415d-4419-9a26-e16bfa1b4cd9", 00:24:32.701 "is_configured": true, 00:24:32.701 "data_offset": 2048, 00:24:32.701 "data_size": 63488 00:24:32.701 }, 00:24:32.701 { 00:24:32.701 "name": "BaseBdev3", 00:24:32.701 "uuid": "28049dcb-a1c7-4577-8492-a58f47e4653b", 00:24:32.701 "is_configured": true, 00:24:32.701 "data_offset": 2048, 00:24:32.701 "data_size": 63488 00:24:32.701 }, 00:24:32.701 { 00:24:32.701 "name": "BaseBdev4", 00:24:32.701 "uuid": "89c08bb5-d85a-40a5-8012-48411ad88aa2", 00:24:32.701 "is_configured": true, 00:24:32.701 "data_offset": 2048, 00:24:32.701 "data_size": 63488 00:24:32.701 } 00:24:32.701 ] 00:24:32.701 }' 00:24:32.701 18:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:32.701 18:51:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:33.636 18:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:24:33.636 18:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:33.636 18:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.636 18:51:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:33.636 18:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:33.636 18:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:33.636 18:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:33.894 [2024-07-25 18:51:34.237665] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:33.894 18:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:33.894 18:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:33.894 18:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.894 18:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:34.152 18:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:34.152 18:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:34.152 18:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:34.411 [2024-07-25 18:51:34.849977] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:34.411 18:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:34.411 18:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:34.411 18:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:34.411 18:51:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.670 18:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:34.670 18:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:34.670 18:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:34.930 [2024-07-25 18:51:35.362756] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:34.930 [2024-07-25 18:51:35.363002] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:24:34.930 18:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:34.930 18:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:34.930 18:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.930 18:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:24:35.189 18:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:24:35.189 18:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:24:35.189 18:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:24:35.189 18:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:24:35.189 18:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:35.189 18:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:35.449 BaseBdev2 00:24:35.449 18:51:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:24:35.449 18:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:24:35.449 18:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:35.449 18:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:35.449 18:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:35.449 18:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:35.449 18:51:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:24:35.708 18:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:35.967 [ 00:24:35.967 { 00:24:35.967 "name": "BaseBdev2", 00:24:35.967 "aliases": [ 00:24:35.967 "4107276a-2de2-49cf-a4e8-a0c0e18f27cf" 00:24:35.967 ], 00:24:35.967 "product_name": "Malloc disk", 00:24:35.967 "block_size": 512, 00:24:35.967 "num_blocks": 65536, 00:24:35.967 "uuid": "4107276a-2de2-49cf-a4e8-a0c0e18f27cf", 00:24:35.967 "assigned_rate_limits": { 00:24:35.967 "rw_ios_per_sec": 0, 00:24:35.967 "rw_mbytes_per_sec": 0, 00:24:35.967 "r_mbytes_per_sec": 0, 00:24:35.967 "w_mbytes_per_sec": 0 00:24:35.967 }, 00:24:35.967 "claimed": false, 00:24:35.967 "zoned": false, 00:24:35.967 "supported_io_types": { 00:24:35.967 "read": true, 00:24:35.967 "write": true, 00:24:35.967 "unmap": true, 00:24:35.967 "flush": true, 00:24:35.967 "reset": true, 00:24:35.967 "nvme_admin": false, 00:24:35.967 "nvme_io": false, 00:24:35.967 "nvme_io_md": false, 00:24:35.967 "write_zeroes": true, 00:24:35.967 "zcopy": true, 00:24:35.967 "get_zone_info": false, 00:24:35.967 "zone_management": false, 00:24:35.967 "zone_append": false, 00:24:35.967 "compare": false, 00:24:35.967 "compare_and_write": false, 00:24:35.967 "abort": true, 00:24:35.967 "seek_hole": false, 00:24:35.967 "seek_data": false, 00:24:35.967 "copy": true, 00:24:35.967 "nvme_iov_md": false 00:24:35.967 }, 00:24:35.967 "memory_domains": [ 00:24:35.967 { 00:24:35.967 "dma_device_id": "system", 00:24:35.967 "dma_device_type": 1 00:24:35.967 }, 00:24:35.967 { 00:24:35.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.967 "dma_device_type": 2 00:24:35.967 } 00:24:35.967 ], 00:24:35.967 "driver_specific": {} 00:24:35.967 } 00:24:35.967 ] 00:24:35.967 18:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:35.967 18:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:35.967 18:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:35.967 18:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:36.227 BaseBdev3 00:24:36.227 18:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:24:36.227 18:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:24:36.227 18:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:36.227 18:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:36.227 18:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:36.227 18:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:36.227 18:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:36.486 18:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:36.745 [ 00:24:36.745 { 00:24:36.745 "name": "BaseBdev3", 00:24:36.745 
"aliases": [ 00:24:36.745 "20e60537-06f1-4647-a48d-d310192738d1" 00:24:36.745 ], 00:24:36.745 "product_name": "Malloc disk", 00:24:36.745 "block_size": 512, 00:24:36.745 "num_blocks": 65536, 00:24:36.745 "uuid": "20e60537-06f1-4647-a48d-d310192738d1", 00:24:36.745 "assigned_rate_limits": { 00:24:36.745 "rw_ios_per_sec": 0, 00:24:36.745 "rw_mbytes_per_sec": 0, 00:24:36.745 "r_mbytes_per_sec": 0, 00:24:36.745 "w_mbytes_per_sec": 0 00:24:36.745 }, 00:24:36.745 "claimed": false, 00:24:36.745 "zoned": false, 00:24:36.745 "supported_io_types": { 00:24:36.745 "read": true, 00:24:36.745 "write": true, 00:24:36.745 "unmap": true, 00:24:36.745 "flush": true, 00:24:36.745 "reset": true, 00:24:36.745 "nvme_admin": false, 00:24:36.745 "nvme_io": false, 00:24:36.745 "nvme_io_md": false, 00:24:36.745 "write_zeroes": true, 00:24:36.745 "zcopy": true, 00:24:36.745 "get_zone_info": false, 00:24:36.745 "zone_management": false, 00:24:36.745 "zone_append": false, 00:24:36.745 "compare": false, 00:24:36.745 "compare_and_write": false, 00:24:36.745 "abort": true, 00:24:36.745 "seek_hole": false, 00:24:36.745 "seek_data": false, 00:24:36.745 "copy": true, 00:24:36.745 "nvme_iov_md": false 00:24:36.745 }, 00:24:36.745 "memory_domains": [ 00:24:36.745 { 00:24:36.745 "dma_device_id": "system", 00:24:36.745 "dma_device_type": 1 00:24:36.745 }, 00:24:36.745 { 00:24:36.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:36.745 "dma_device_type": 2 00:24:36.745 } 00:24:36.745 ], 00:24:36.745 "driver_specific": {} 00:24:36.745 } 00:24:36.745 ] 00:24:36.745 18:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:36.745 18:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:36.745 18:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:36.745 18:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:37.015 BaseBdev4 00:24:37.015 18:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:24:37.015 18:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:24:37.015 18:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:37.015 18:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:37.015 18:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:37.015 18:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:37.015 18:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:37.015 18:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:37.279 [ 00:24:37.279 { 00:24:37.279 "name": "BaseBdev4", 00:24:37.279 "aliases": [ 00:24:37.279 "46f10735-d4ff-4287-a94e-173dee7c13b9" 00:24:37.279 ], 00:24:37.279 "product_name": "Malloc disk", 00:24:37.279 "block_size": 512, 00:24:37.279 "num_blocks": 65536, 00:24:37.279 "uuid": "46f10735-d4ff-4287-a94e-173dee7c13b9", 00:24:37.279 "assigned_rate_limits": { 00:24:37.279 
"rw_ios_per_sec": 0, 00:24:37.279 "rw_mbytes_per_sec": 0, 00:24:37.279 "r_mbytes_per_sec": 0, 00:24:37.279 "w_mbytes_per_sec": 0 00:24:37.279 }, 00:24:37.279 "claimed": false, 00:24:37.279 "zoned": false, 00:24:37.279 "supported_io_types": { 00:24:37.279 "read": true, 00:24:37.279 "write": true, 00:24:37.279 "unmap": true, 00:24:37.279 "flush": true, 00:24:37.279 "reset": true, 00:24:37.279 "nvme_admin": false, 00:24:37.279 "nvme_io": false, 00:24:37.279 "nvme_io_md": false, 00:24:37.279 "write_zeroes": true, 00:24:37.279 "zcopy": true, 00:24:37.279 "get_zone_info": false, 00:24:37.279 "zone_management": false, 00:24:37.279 "zone_append": false, 00:24:37.280 "compare": false, 00:24:37.280 "compare_and_write": false, 00:24:37.280 "abort": true, 00:24:37.280 "seek_hole": false, 00:24:37.280 "seek_data": false, 00:24:37.280 "copy": true, 00:24:37.280 "nvme_iov_md": false 00:24:37.280 }, 00:24:37.280 "memory_domains": [ 00:24:37.280 { 00:24:37.280 "dma_device_id": "system", 00:24:37.280 "dma_device_type": 1 00:24:37.280 }, 00:24:37.280 { 00:24:37.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:37.280 "dma_device_type": 2 00:24:37.280 } 00:24:37.280 ], 00:24:37.280 "driver_specific": {} 00:24:37.280 } 00:24:37.280 ] 00:24:37.280 18:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:37.280 18:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:37.280 18:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:37.280 18:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:37.538 [2024-07-25 18:51:37.876472] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:37.538 [2024-07-25 18:51:37.876709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:37.538 [2024-07-25 18:51:37.876800] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:37.538 [2024-07-25 18:51:37.879088] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:37.538 [2024-07-25 18:51:37.879274] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:37.538 18:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:37.538 18:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:37.538 18:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:37.538 18:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:37.538 18:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:37.538 18:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:37.538 18:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:37.538 18:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:37.538 18:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:37.538 
18:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:37.538 18:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.539 18:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:37.539 18:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:37.539 "name": "Existed_Raid", 00:24:37.539 "uuid": "fa1707f8-844a-44d2-97e5-68731909fbee", 00:24:37.539 "strip_size_kb": 64, 00:24:37.539 "state": "configuring", 00:24:37.539 "raid_level": "concat", 00:24:37.539 "superblock": true, 00:24:37.539 "num_base_bdevs": 4, 00:24:37.539 "num_base_bdevs_discovered": 3, 00:24:37.539 "num_base_bdevs_operational": 4, 00:24:37.539 "base_bdevs_list": [ 00:24:37.539 { 00:24:37.539 "name": "BaseBdev1", 00:24:37.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:37.539 "is_configured": false, 00:24:37.539 "data_offset": 0, 00:24:37.539 "data_size": 0 00:24:37.539 }, 00:24:37.539 { 00:24:37.539 "name": "BaseBdev2", 00:24:37.539 "uuid": "4107276a-2de2-49cf-a4e8-a0c0e18f27cf", 00:24:37.539 "is_configured": true, 00:24:37.539 "data_offset": 2048, 00:24:37.539 "data_size": 63488 00:24:37.539 }, 00:24:37.539 { 00:24:37.539 "name": "BaseBdev3", 00:24:37.539 "uuid": "20e60537-06f1-4647-a48d-d310192738d1", 00:24:37.539 "is_configured": true, 00:24:37.539 "data_offset": 2048, 00:24:37.539 "data_size": 63488 00:24:37.539 }, 00:24:37.539 { 00:24:37.539 "name": "BaseBdev4", 00:24:37.539 "uuid": "46f10735-d4ff-4287-a94e-173dee7c13b9", 00:24:37.539 "is_configured": true, 00:24:37.539 "data_offset": 2048, 00:24:37.539 "data_size": 63488 00:24:37.539 } 00:24:37.539 ] 00:24:37.539 }' 00:24:37.539 18:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:37.539 18:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:38.106 18:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:38.366 [2024-07-25 18:51:38.796580] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:38.366 18:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:38.366 18:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:38.366 18:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:38.366 18:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:38.366 18:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:38.366 18:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:38.366 18:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:38.366 18:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:38.366 18:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:38.366 18:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
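For readability, a minimal sketch (not part of the captured console output) of the state check the verify_raid_bdev_state helper performs at this point, using only RPCs that appear in the trace and assuming the raid target is still listening on /var/tmp/spdk-raid.sock:

  # dump all raid bdevs and pick out Existed_Raid for inspection
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid")'
  # after bdev_raid_remove_base_bdev BaseBdev2, the second base bdev slot should report false
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq '.[0].base_bdevs_list[1].is_configured'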
00:24:38.366 18:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.366 18:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:38.625 18:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:38.625 "name": "Existed_Raid", 00:24:38.625 "uuid": "fa1707f8-844a-44d2-97e5-68731909fbee", 00:24:38.625 "strip_size_kb": 64, 00:24:38.625 "state": "configuring", 00:24:38.625 "raid_level": "concat", 00:24:38.625 "superblock": true, 00:24:38.625 "num_base_bdevs": 4, 00:24:38.625 "num_base_bdevs_discovered": 2, 00:24:38.625 "num_base_bdevs_operational": 4, 00:24:38.625 "base_bdevs_list": [ 00:24:38.625 { 00:24:38.625 "name": "BaseBdev1", 00:24:38.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.625 "is_configured": false, 00:24:38.625 "data_offset": 0, 00:24:38.625 "data_size": 0 00:24:38.625 }, 00:24:38.625 { 00:24:38.625 "name": null, 00:24:38.625 "uuid": "4107276a-2de2-49cf-a4e8-a0c0e18f27cf", 00:24:38.625 "is_configured": false, 00:24:38.625 "data_offset": 2048, 00:24:38.625 "data_size": 63488 00:24:38.625 }, 00:24:38.625 { 00:24:38.625 "name": "BaseBdev3", 00:24:38.625 "uuid": "20e60537-06f1-4647-a48d-d310192738d1", 00:24:38.625 "is_configured": true, 00:24:38.625 "data_offset": 2048, 00:24:38.625 "data_size": 63488 00:24:38.625 }, 00:24:38.625 { 00:24:38.625 "name": "BaseBdev4", 00:24:38.625 "uuid": "46f10735-d4ff-4287-a94e-173dee7c13b9", 00:24:38.625 "is_configured": true, 00:24:38.625 "data_offset": 2048, 00:24:38.625 "data_size": 63488 00:24:38.625 } 00:24:38.625 ] 00:24:38.625 }' 00:24:38.625 18:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:38.625 18:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:39.193 18:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.193 18:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:39.451 18:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:24:39.451 18:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:39.710 [2024-07-25 18:51:40.062241] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:39.710 BaseBdev1 00:24:39.710 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:24:39.710 18:51:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:24:39.710 18:51:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:39.710 18:51:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:39.710 18:51:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:39.710 18:51:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:39.710 18:51:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:39.710 18:51:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:39.968 [ 00:24:39.968 { 00:24:39.968 "name": "BaseBdev1", 00:24:39.968 "aliases": [ 00:24:39.968 "8f386eb8-75a9-4f10-9b33-da14da4535cc" 00:24:39.968 ], 00:24:39.968 "product_name": "Malloc disk", 00:24:39.968 "block_size": 512, 00:24:39.968 "num_blocks": 65536, 00:24:39.968 "uuid": "8f386eb8-75a9-4f10-9b33-da14da4535cc", 00:24:39.968 "assigned_rate_limits": { 00:24:39.968 "rw_ios_per_sec": 0, 00:24:39.968 "rw_mbytes_per_sec": 0, 00:24:39.968 "r_mbytes_per_sec": 0, 00:24:39.968 "w_mbytes_per_sec": 0 00:24:39.968 }, 00:24:39.968 "claimed": true, 00:24:39.968 "claim_type": "exclusive_write", 00:24:39.968 "zoned": false, 00:24:39.968 "supported_io_types": { 00:24:39.968 "read": true, 00:24:39.968 "write": true, 00:24:39.968 "unmap": true, 00:24:39.968 "flush": true, 00:24:39.968 "reset": true, 00:24:39.968 "nvme_admin": false, 00:24:39.968 "nvme_io": false, 00:24:39.968 "nvme_io_md": false, 00:24:39.968 "write_zeroes": true, 00:24:39.968 "zcopy": true, 00:24:39.968 "get_zone_info": false, 00:24:39.968 "zone_management": false, 00:24:39.968 "zone_append": false, 00:24:39.968 "compare": false, 00:24:39.968 "compare_and_write": false, 00:24:39.968 "abort": true, 00:24:39.968 "seek_hole": false, 00:24:39.968 "seek_data": false, 00:24:39.968 "copy": true, 00:24:39.968 "nvme_iov_md": false 00:24:39.968 }, 00:24:39.968 "memory_domains": [ 00:24:39.968 { 00:24:39.968 "dma_device_id": "system", 00:24:39.968 "dma_device_type": 1 00:24:39.968 }, 00:24:39.968 { 00:24:39.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.968 "dma_device_type": 2 00:24:39.968 } 00:24:39.968 ], 00:24:39.968 "driver_specific": {} 00:24:39.968 } 00:24:39.968 ] 00:24:39.968 18:51:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:39.968 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:39.968 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:39.968 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:39.968 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:39.968 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:39.968 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:39.968 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:39.968 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:39.968 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:39.968 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:39.968 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.968 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:24:40.226 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:40.226 "name": "Existed_Raid", 00:24:40.226 "uuid": "fa1707f8-844a-44d2-97e5-68731909fbee", 00:24:40.226 "strip_size_kb": 64, 00:24:40.226 "state": "configuring", 00:24:40.226 "raid_level": "concat", 00:24:40.226 "superblock": true, 00:24:40.226 "num_base_bdevs": 4, 00:24:40.226 "num_base_bdevs_discovered": 3, 00:24:40.226 "num_base_bdevs_operational": 4, 00:24:40.226 "base_bdevs_list": [ 00:24:40.226 { 00:24:40.226 "name": "BaseBdev1", 00:24:40.226 "uuid": "8f386eb8-75a9-4f10-9b33-da14da4535cc", 00:24:40.226 "is_configured": true, 00:24:40.226 "data_offset": 2048, 00:24:40.226 "data_size": 63488 00:24:40.226 }, 00:24:40.226 { 00:24:40.226 "name": null, 00:24:40.226 "uuid": "4107276a-2de2-49cf-a4e8-a0c0e18f27cf", 00:24:40.226 "is_configured": false, 00:24:40.226 "data_offset": 2048, 00:24:40.226 "data_size": 63488 00:24:40.226 }, 00:24:40.226 { 00:24:40.226 "name": "BaseBdev3", 00:24:40.226 "uuid": "20e60537-06f1-4647-a48d-d310192738d1", 00:24:40.226 "is_configured": true, 00:24:40.226 "data_offset": 2048, 00:24:40.226 "data_size": 63488 00:24:40.226 }, 00:24:40.226 { 00:24:40.226 "name": "BaseBdev4", 00:24:40.226 "uuid": "46f10735-d4ff-4287-a94e-173dee7c13b9", 00:24:40.226 "is_configured": true, 00:24:40.226 "data_offset": 2048, 00:24:40.226 "data_size": 63488 00:24:40.226 } 00:24:40.226 ] 00:24:40.226 }' 00:24:40.226 18:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:40.226 18:51:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:40.793 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.793 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:40.793 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:24:40.793 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:41.052 [2024-07-25 18:51:41.498647] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:41.052 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:41.052 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:41.052 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:41.052 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:41.052 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:41.052 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:41.052 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:41.052 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:41.052 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:41.052 18:51:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:41.052 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:41.052 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.312 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:41.312 "name": "Existed_Raid", 00:24:41.312 "uuid": "fa1707f8-844a-44d2-97e5-68731909fbee", 00:24:41.312 "strip_size_kb": 64, 00:24:41.312 "state": "configuring", 00:24:41.312 "raid_level": "concat", 00:24:41.312 "superblock": true, 00:24:41.312 "num_base_bdevs": 4, 00:24:41.312 "num_base_bdevs_discovered": 2, 00:24:41.312 "num_base_bdevs_operational": 4, 00:24:41.312 "base_bdevs_list": [ 00:24:41.312 { 00:24:41.312 "name": "BaseBdev1", 00:24:41.312 "uuid": "8f386eb8-75a9-4f10-9b33-da14da4535cc", 00:24:41.312 "is_configured": true, 00:24:41.312 "data_offset": 2048, 00:24:41.312 "data_size": 63488 00:24:41.312 }, 00:24:41.312 { 00:24:41.312 "name": null, 00:24:41.312 "uuid": "4107276a-2de2-49cf-a4e8-a0c0e18f27cf", 00:24:41.312 "is_configured": false, 00:24:41.312 "data_offset": 2048, 00:24:41.312 "data_size": 63488 00:24:41.312 }, 00:24:41.312 { 00:24:41.312 "name": null, 00:24:41.312 "uuid": "20e60537-06f1-4647-a48d-d310192738d1", 00:24:41.312 "is_configured": false, 00:24:41.312 "data_offset": 2048, 00:24:41.312 "data_size": 63488 00:24:41.312 }, 00:24:41.312 { 00:24:41.312 "name": "BaseBdev4", 00:24:41.312 "uuid": "46f10735-d4ff-4287-a94e-173dee7c13b9", 00:24:41.312 "is_configured": true, 00:24:41.312 "data_offset": 2048, 00:24:41.312 "data_size": 63488 00:24:41.312 } 00:24:41.312 ] 00:24:41.312 }' 00:24:41.312 18:51:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:41.312 18:51:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:41.880 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.880 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:42.138 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:24:42.138 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:42.396 [2024-07-25 18:51:42.862942] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:42.396 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:42.396 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:42.396 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:42.396 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:42.396 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:42.396 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:24:42.396 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:42.396 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:42.396 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:42.396 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:42.396 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:42.396 18:51:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.655 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:42.655 "name": "Existed_Raid", 00:24:42.655 "uuid": "fa1707f8-844a-44d2-97e5-68731909fbee", 00:24:42.655 "strip_size_kb": 64, 00:24:42.655 "state": "configuring", 00:24:42.655 "raid_level": "concat", 00:24:42.655 "superblock": true, 00:24:42.655 "num_base_bdevs": 4, 00:24:42.655 "num_base_bdevs_discovered": 3, 00:24:42.655 "num_base_bdevs_operational": 4, 00:24:42.655 "base_bdevs_list": [ 00:24:42.655 { 00:24:42.655 "name": "BaseBdev1", 00:24:42.655 "uuid": "8f386eb8-75a9-4f10-9b33-da14da4535cc", 00:24:42.655 "is_configured": true, 00:24:42.655 "data_offset": 2048, 00:24:42.655 "data_size": 63488 00:24:42.655 }, 00:24:42.655 { 00:24:42.655 "name": null, 00:24:42.655 "uuid": "4107276a-2de2-49cf-a4e8-a0c0e18f27cf", 00:24:42.655 "is_configured": false, 00:24:42.655 "data_offset": 2048, 00:24:42.655 "data_size": 63488 00:24:42.655 }, 00:24:42.655 { 00:24:42.655 "name": "BaseBdev3", 00:24:42.655 "uuid": "20e60537-06f1-4647-a48d-d310192738d1", 00:24:42.655 "is_configured": true, 00:24:42.655 "data_offset": 2048, 00:24:42.655 "data_size": 63488 00:24:42.655 }, 00:24:42.655 { 00:24:42.655 "name": "BaseBdev4", 00:24:42.655 "uuid": "46f10735-d4ff-4287-a94e-173dee7c13b9", 00:24:42.655 "is_configured": true, 00:24:42.655 "data_offset": 2048, 00:24:42.655 "data_size": 63488 00:24:42.655 } 00:24:42.655 ] 00:24:42.655 }' 00:24:42.655 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:42.655 18:51:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:43.221 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:43.221 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.478 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:24:43.478 18:51:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:43.736 [2024-07-25 18:51:44.103218] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:43.736 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:43.736 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:43.736 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:24:43.736 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:43.736 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:43.736 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:43.736 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:43.736 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:43.736 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:43.736 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:43.736 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.736 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:43.994 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:43.994 "name": "Existed_Raid", 00:24:43.994 "uuid": "fa1707f8-844a-44d2-97e5-68731909fbee", 00:24:43.994 "strip_size_kb": 64, 00:24:43.994 "state": "configuring", 00:24:43.994 "raid_level": "concat", 00:24:43.994 "superblock": true, 00:24:43.994 "num_base_bdevs": 4, 00:24:43.994 "num_base_bdevs_discovered": 2, 00:24:43.994 "num_base_bdevs_operational": 4, 00:24:43.994 "base_bdevs_list": [ 00:24:43.994 { 00:24:43.994 "name": null, 00:24:43.994 "uuid": "8f386eb8-75a9-4f10-9b33-da14da4535cc", 00:24:43.994 "is_configured": false, 00:24:43.994 "data_offset": 2048, 00:24:43.994 "data_size": 63488 00:24:43.994 }, 00:24:43.994 { 00:24:43.994 "name": null, 00:24:43.994 "uuid": "4107276a-2de2-49cf-a4e8-a0c0e18f27cf", 00:24:43.994 "is_configured": false, 00:24:43.994 "data_offset": 2048, 00:24:43.994 "data_size": 63488 00:24:43.994 }, 00:24:43.994 { 00:24:43.994 "name": "BaseBdev3", 00:24:43.994 "uuid": "20e60537-06f1-4647-a48d-d310192738d1", 00:24:43.994 "is_configured": true, 00:24:43.994 "data_offset": 2048, 00:24:43.994 "data_size": 63488 00:24:43.994 }, 00:24:43.994 { 00:24:43.994 "name": "BaseBdev4", 00:24:43.994 "uuid": "46f10735-d4ff-4287-a94e-173dee7c13b9", 00:24:43.994 "is_configured": true, 00:24:43.994 "data_offset": 2048, 00:24:43.994 "data_size": 63488 00:24:43.994 } 00:24:43.994 ] 00:24:43.994 }' 00:24:43.994 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:43.994 18:51:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:44.561 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.561 18:51:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:44.846 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:24:44.846 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:44.846 [2024-07-25 18:51:45.386134] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
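For readability, a minimal sketch (not captured output) of the base-bdev remove/re-add cycle this part of the test drives, using only RPCs shown in the trace above and assuming the same /var/tmp/spdk-raid.sock target:

  # drop a configured base bdev; Existed_Raid stays in the "configuring" state
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2
  # re-attach the same bdev; num_base_bdevs_discovered climbs back toward num_base_bdevs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2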
00:24:44.846 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:44.846 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:44.846 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:44.846 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:44.846 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:44.846 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:44.846 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:44.846 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:45.129 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:45.129 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:45.129 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.129 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:45.129 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:45.129 "name": "Existed_Raid", 00:24:45.129 "uuid": "fa1707f8-844a-44d2-97e5-68731909fbee", 00:24:45.129 "strip_size_kb": 64, 00:24:45.129 "state": "configuring", 00:24:45.129 "raid_level": "concat", 00:24:45.129 "superblock": true, 00:24:45.129 "num_base_bdevs": 4, 00:24:45.129 "num_base_bdevs_discovered": 3, 00:24:45.129 "num_base_bdevs_operational": 4, 00:24:45.129 "base_bdevs_list": [ 00:24:45.129 { 00:24:45.129 "name": null, 00:24:45.129 "uuid": "8f386eb8-75a9-4f10-9b33-da14da4535cc", 00:24:45.129 "is_configured": false, 00:24:45.129 "data_offset": 2048, 00:24:45.129 "data_size": 63488 00:24:45.129 }, 00:24:45.129 { 00:24:45.129 "name": "BaseBdev2", 00:24:45.129 "uuid": "4107276a-2de2-49cf-a4e8-a0c0e18f27cf", 00:24:45.129 "is_configured": true, 00:24:45.129 "data_offset": 2048, 00:24:45.129 "data_size": 63488 00:24:45.129 }, 00:24:45.129 { 00:24:45.129 "name": "BaseBdev3", 00:24:45.129 "uuid": "20e60537-06f1-4647-a48d-d310192738d1", 00:24:45.129 "is_configured": true, 00:24:45.129 "data_offset": 2048, 00:24:45.129 "data_size": 63488 00:24:45.129 }, 00:24:45.129 { 00:24:45.129 "name": "BaseBdev4", 00:24:45.129 "uuid": "46f10735-d4ff-4287-a94e-173dee7c13b9", 00:24:45.129 "is_configured": true, 00:24:45.129 "data_offset": 2048, 00:24:45.129 "data_size": 63488 00:24:45.129 } 00:24:45.129 ] 00:24:45.129 }' 00:24:45.129 18:51:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:45.129 18:51:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:45.697 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.697 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:45.955 18:51:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:24:45.955 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:45.955 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.955 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 8f386eb8-75a9-4f10-9b33-da14da4535cc 00:24:46.214 [2024-07-25 18:51:46.775901] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:46.214 [2024-07-25 18:51:46.776367] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:24:46.214 [2024-07-25 18:51:46.776480] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:46.214 [2024-07-25 18:51:46.776631] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:46.214 [2024-07-25 18:51:46.777042] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:24:46.214 [2024-07-25 18:51:46.777083] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:24:46.214 [2024-07-25 18:51:46.777306] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:46.214 NewBaseBdev 00:24:46.472 18:51:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:24:46.472 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:24:46.472 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:46.472 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:46.472 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:46.472 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:46.472 18:51:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:46.731 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:46.731 [ 00:24:46.731 { 00:24:46.731 "name": "NewBaseBdev", 00:24:46.731 "aliases": [ 00:24:46.731 "8f386eb8-75a9-4f10-9b33-da14da4535cc" 00:24:46.731 ], 00:24:46.731 "product_name": "Malloc disk", 00:24:46.731 "block_size": 512, 00:24:46.731 "num_blocks": 65536, 00:24:46.731 "uuid": "8f386eb8-75a9-4f10-9b33-da14da4535cc", 00:24:46.731 "assigned_rate_limits": { 00:24:46.731 "rw_ios_per_sec": 0, 00:24:46.731 "rw_mbytes_per_sec": 0, 00:24:46.731 "r_mbytes_per_sec": 0, 00:24:46.731 "w_mbytes_per_sec": 0 00:24:46.731 }, 00:24:46.731 "claimed": true, 00:24:46.731 "claim_type": "exclusive_write", 00:24:46.731 "zoned": false, 00:24:46.731 "supported_io_types": { 00:24:46.731 "read": true, 00:24:46.731 "write": true, 00:24:46.731 "unmap": true, 00:24:46.731 "flush": true, 00:24:46.731 "reset": true, 00:24:46.731 "nvme_admin": false, 00:24:46.731 "nvme_io": false, 00:24:46.731 "nvme_io_md": false, 00:24:46.731 
"write_zeroes": true, 00:24:46.731 "zcopy": true, 00:24:46.731 "get_zone_info": false, 00:24:46.731 "zone_management": false, 00:24:46.731 "zone_append": false, 00:24:46.731 "compare": false, 00:24:46.731 "compare_and_write": false, 00:24:46.731 "abort": true, 00:24:46.731 "seek_hole": false, 00:24:46.731 "seek_data": false, 00:24:46.731 "copy": true, 00:24:46.731 "nvme_iov_md": false 00:24:46.731 }, 00:24:46.731 "memory_domains": [ 00:24:46.731 { 00:24:46.731 "dma_device_id": "system", 00:24:46.731 "dma_device_type": 1 00:24:46.731 }, 00:24:46.731 { 00:24:46.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:46.731 "dma_device_type": 2 00:24:46.731 } 00:24:46.731 ], 00:24:46.731 "driver_specific": {} 00:24:46.731 } 00:24:46.731 ] 00:24:46.731 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:46.731 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:24:46.731 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:46.731 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:46.731 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:46.731 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:46.731 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:46.731 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:46.731 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:46.731 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:46.731 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:46.731 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.731 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:46.989 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:46.989 "name": "Existed_Raid", 00:24:46.989 "uuid": "fa1707f8-844a-44d2-97e5-68731909fbee", 00:24:46.989 "strip_size_kb": 64, 00:24:46.989 "state": "online", 00:24:46.989 "raid_level": "concat", 00:24:46.989 "superblock": true, 00:24:46.989 "num_base_bdevs": 4, 00:24:46.989 "num_base_bdevs_discovered": 4, 00:24:46.989 "num_base_bdevs_operational": 4, 00:24:46.989 "base_bdevs_list": [ 00:24:46.989 { 00:24:46.989 "name": "NewBaseBdev", 00:24:46.989 "uuid": "8f386eb8-75a9-4f10-9b33-da14da4535cc", 00:24:46.989 "is_configured": true, 00:24:46.989 "data_offset": 2048, 00:24:46.989 "data_size": 63488 00:24:46.989 }, 00:24:46.989 { 00:24:46.989 "name": "BaseBdev2", 00:24:46.989 "uuid": "4107276a-2de2-49cf-a4e8-a0c0e18f27cf", 00:24:46.989 "is_configured": true, 00:24:46.989 "data_offset": 2048, 00:24:46.989 "data_size": 63488 00:24:46.989 }, 00:24:46.989 { 00:24:46.989 "name": "BaseBdev3", 00:24:46.989 "uuid": "20e60537-06f1-4647-a48d-d310192738d1", 00:24:46.989 "is_configured": true, 00:24:46.989 "data_offset": 2048, 00:24:46.989 "data_size": 63488 00:24:46.989 }, 00:24:46.989 { 
00:24:46.989 "name": "BaseBdev4", 00:24:46.989 "uuid": "46f10735-d4ff-4287-a94e-173dee7c13b9", 00:24:46.989 "is_configured": true, 00:24:46.989 "data_offset": 2048, 00:24:46.989 "data_size": 63488 00:24:46.989 } 00:24:46.989 ] 00:24:46.989 }' 00:24:46.989 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:46.989 18:51:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:47.553 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:24:47.553 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:47.553 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:47.553 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:47.553 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:47.553 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:24:47.553 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:47.553 18:51:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:47.810 [2024-07-25 18:51:48.156965] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:47.810 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:47.810 "name": "Existed_Raid", 00:24:47.810 "aliases": [ 00:24:47.810 "fa1707f8-844a-44d2-97e5-68731909fbee" 00:24:47.810 ], 00:24:47.810 "product_name": "Raid Volume", 00:24:47.810 "block_size": 512, 00:24:47.810 "num_blocks": 253952, 00:24:47.810 "uuid": "fa1707f8-844a-44d2-97e5-68731909fbee", 00:24:47.810 "assigned_rate_limits": { 00:24:47.810 "rw_ios_per_sec": 0, 00:24:47.810 "rw_mbytes_per_sec": 0, 00:24:47.810 "r_mbytes_per_sec": 0, 00:24:47.811 "w_mbytes_per_sec": 0 00:24:47.811 }, 00:24:47.811 "claimed": false, 00:24:47.811 "zoned": false, 00:24:47.811 "supported_io_types": { 00:24:47.811 "read": true, 00:24:47.811 "write": true, 00:24:47.811 "unmap": true, 00:24:47.811 "flush": true, 00:24:47.811 "reset": true, 00:24:47.811 "nvme_admin": false, 00:24:47.811 "nvme_io": false, 00:24:47.811 "nvme_io_md": false, 00:24:47.811 "write_zeroes": true, 00:24:47.811 "zcopy": false, 00:24:47.811 "get_zone_info": false, 00:24:47.811 "zone_management": false, 00:24:47.811 "zone_append": false, 00:24:47.811 "compare": false, 00:24:47.811 "compare_and_write": false, 00:24:47.811 "abort": false, 00:24:47.811 "seek_hole": false, 00:24:47.811 "seek_data": false, 00:24:47.811 "copy": false, 00:24:47.811 "nvme_iov_md": false 00:24:47.811 }, 00:24:47.811 "memory_domains": [ 00:24:47.811 { 00:24:47.811 "dma_device_id": "system", 00:24:47.811 "dma_device_type": 1 00:24:47.811 }, 00:24:47.811 { 00:24:47.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:47.811 "dma_device_type": 2 00:24:47.811 }, 00:24:47.811 { 00:24:47.811 "dma_device_id": "system", 00:24:47.811 "dma_device_type": 1 00:24:47.811 }, 00:24:47.811 { 00:24:47.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:47.811 "dma_device_type": 2 00:24:47.811 }, 00:24:47.811 { 00:24:47.811 "dma_device_id": "system", 00:24:47.811 "dma_device_type": 1 00:24:47.811 }, 00:24:47.811 { 00:24:47.811 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:24:47.811 "dma_device_type": 2 00:24:47.811 }, 00:24:47.811 { 00:24:47.811 "dma_device_id": "system", 00:24:47.811 "dma_device_type": 1 00:24:47.811 }, 00:24:47.811 { 00:24:47.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:47.811 "dma_device_type": 2 00:24:47.811 } 00:24:47.811 ], 00:24:47.811 "driver_specific": { 00:24:47.811 "raid": { 00:24:47.811 "uuid": "fa1707f8-844a-44d2-97e5-68731909fbee", 00:24:47.811 "strip_size_kb": 64, 00:24:47.811 "state": "online", 00:24:47.811 "raid_level": "concat", 00:24:47.811 "superblock": true, 00:24:47.811 "num_base_bdevs": 4, 00:24:47.811 "num_base_bdevs_discovered": 4, 00:24:47.811 "num_base_bdevs_operational": 4, 00:24:47.811 "base_bdevs_list": [ 00:24:47.811 { 00:24:47.811 "name": "NewBaseBdev", 00:24:47.811 "uuid": "8f386eb8-75a9-4f10-9b33-da14da4535cc", 00:24:47.811 "is_configured": true, 00:24:47.811 "data_offset": 2048, 00:24:47.811 "data_size": 63488 00:24:47.811 }, 00:24:47.811 { 00:24:47.811 "name": "BaseBdev2", 00:24:47.811 "uuid": "4107276a-2de2-49cf-a4e8-a0c0e18f27cf", 00:24:47.811 "is_configured": true, 00:24:47.811 "data_offset": 2048, 00:24:47.811 "data_size": 63488 00:24:47.811 }, 00:24:47.811 { 00:24:47.811 "name": "BaseBdev3", 00:24:47.811 "uuid": "20e60537-06f1-4647-a48d-d310192738d1", 00:24:47.811 "is_configured": true, 00:24:47.811 "data_offset": 2048, 00:24:47.811 "data_size": 63488 00:24:47.811 }, 00:24:47.811 { 00:24:47.811 "name": "BaseBdev4", 00:24:47.811 "uuid": "46f10735-d4ff-4287-a94e-173dee7c13b9", 00:24:47.811 "is_configured": true, 00:24:47.811 "data_offset": 2048, 00:24:47.811 "data_size": 63488 00:24:47.811 } 00:24:47.811 ] 00:24:47.811 } 00:24:47.811 } 00:24:47.811 }' 00:24:47.811 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:47.811 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:24:47.811 BaseBdev2 00:24:47.811 BaseBdev3 00:24:47.811 BaseBdev4' 00:24:47.811 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:47.811 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:24:47.811 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:48.068 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:48.068 "name": "NewBaseBdev", 00:24:48.068 "aliases": [ 00:24:48.068 "8f386eb8-75a9-4f10-9b33-da14da4535cc" 00:24:48.068 ], 00:24:48.068 "product_name": "Malloc disk", 00:24:48.068 "block_size": 512, 00:24:48.068 "num_blocks": 65536, 00:24:48.068 "uuid": "8f386eb8-75a9-4f10-9b33-da14da4535cc", 00:24:48.068 "assigned_rate_limits": { 00:24:48.068 "rw_ios_per_sec": 0, 00:24:48.068 "rw_mbytes_per_sec": 0, 00:24:48.068 "r_mbytes_per_sec": 0, 00:24:48.068 "w_mbytes_per_sec": 0 00:24:48.068 }, 00:24:48.068 "claimed": true, 00:24:48.068 "claim_type": "exclusive_write", 00:24:48.068 "zoned": false, 00:24:48.068 "supported_io_types": { 00:24:48.068 "read": true, 00:24:48.068 "write": true, 00:24:48.068 "unmap": true, 00:24:48.068 "flush": true, 00:24:48.068 "reset": true, 00:24:48.068 "nvme_admin": false, 00:24:48.068 "nvme_io": false, 00:24:48.068 "nvme_io_md": false, 00:24:48.068 "write_zeroes": true, 00:24:48.068 "zcopy": true, 00:24:48.068 "get_zone_info": 
false, 00:24:48.068 "zone_management": false, 00:24:48.068 "zone_append": false, 00:24:48.068 "compare": false, 00:24:48.068 "compare_and_write": false, 00:24:48.068 "abort": true, 00:24:48.068 "seek_hole": false, 00:24:48.068 "seek_data": false, 00:24:48.068 "copy": true, 00:24:48.068 "nvme_iov_md": false 00:24:48.068 }, 00:24:48.068 "memory_domains": [ 00:24:48.068 { 00:24:48.068 "dma_device_id": "system", 00:24:48.068 "dma_device_type": 1 00:24:48.068 }, 00:24:48.068 { 00:24:48.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:48.068 "dma_device_type": 2 00:24:48.068 } 00:24:48.068 ], 00:24:48.068 "driver_specific": {} 00:24:48.068 }' 00:24:48.068 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:48.068 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:48.068 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:48.068 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:48.068 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:48.324 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:48.325 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:48.325 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:48.325 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:48.325 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:48.325 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:48.325 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:48.325 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:48.325 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:48.325 18:51:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:48.581 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:48.581 "name": "BaseBdev2", 00:24:48.581 "aliases": [ 00:24:48.581 "4107276a-2de2-49cf-a4e8-a0c0e18f27cf" 00:24:48.581 ], 00:24:48.581 "product_name": "Malloc disk", 00:24:48.581 "block_size": 512, 00:24:48.581 "num_blocks": 65536, 00:24:48.581 "uuid": "4107276a-2de2-49cf-a4e8-a0c0e18f27cf", 00:24:48.581 "assigned_rate_limits": { 00:24:48.581 "rw_ios_per_sec": 0, 00:24:48.581 "rw_mbytes_per_sec": 0, 00:24:48.581 "r_mbytes_per_sec": 0, 00:24:48.581 "w_mbytes_per_sec": 0 00:24:48.581 }, 00:24:48.581 "claimed": true, 00:24:48.581 "claim_type": "exclusive_write", 00:24:48.581 "zoned": false, 00:24:48.581 "supported_io_types": { 00:24:48.581 "read": true, 00:24:48.581 "write": true, 00:24:48.581 "unmap": true, 00:24:48.581 "flush": true, 00:24:48.581 "reset": true, 00:24:48.581 "nvme_admin": false, 00:24:48.581 "nvme_io": false, 00:24:48.581 "nvme_io_md": false, 00:24:48.581 "write_zeroes": true, 00:24:48.581 "zcopy": true, 00:24:48.581 "get_zone_info": false, 00:24:48.581 "zone_management": false, 00:24:48.581 "zone_append": false, 00:24:48.581 "compare": false, 00:24:48.581 "compare_and_write": false, 
00:24:48.581 "abort": true, 00:24:48.581 "seek_hole": false, 00:24:48.581 "seek_data": false, 00:24:48.581 "copy": true, 00:24:48.581 "nvme_iov_md": false 00:24:48.581 }, 00:24:48.581 "memory_domains": [ 00:24:48.581 { 00:24:48.581 "dma_device_id": "system", 00:24:48.581 "dma_device_type": 1 00:24:48.581 }, 00:24:48.581 { 00:24:48.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:48.581 "dma_device_type": 2 00:24:48.581 } 00:24:48.581 ], 00:24:48.581 "driver_specific": {} 00:24:48.581 }' 00:24:48.581 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:48.581 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:48.581 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:48.581 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:48.839 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:48.839 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:48.839 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:48.839 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:48.839 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:48.839 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:48.839 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:48.839 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:48.839 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:48.839 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:48.839 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:49.098 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:49.098 "name": "BaseBdev3", 00:24:49.098 "aliases": [ 00:24:49.098 "20e60537-06f1-4647-a48d-d310192738d1" 00:24:49.098 ], 00:24:49.098 "product_name": "Malloc disk", 00:24:49.098 "block_size": 512, 00:24:49.098 "num_blocks": 65536, 00:24:49.098 "uuid": "20e60537-06f1-4647-a48d-d310192738d1", 00:24:49.098 "assigned_rate_limits": { 00:24:49.098 "rw_ios_per_sec": 0, 00:24:49.098 "rw_mbytes_per_sec": 0, 00:24:49.098 "r_mbytes_per_sec": 0, 00:24:49.098 "w_mbytes_per_sec": 0 00:24:49.098 }, 00:24:49.098 "claimed": true, 00:24:49.098 "claim_type": "exclusive_write", 00:24:49.098 "zoned": false, 00:24:49.098 "supported_io_types": { 00:24:49.098 "read": true, 00:24:49.098 "write": true, 00:24:49.098 "unmap": true, 00:24:49.098 "flush": true, 00:24:49.098 "reset": true, 00:24:49.098 "nvme_admin": false, 00:24:49.098 "nvme_io": false, 00:24:49.098 "nvme_io_md": false, 00:24:49.098 "write_zeroes": true, 00:24:49.098 "zcopy": true, 00:24:49.098 "get_zone_info": false, 00:24:49.098 "zone_management": false, 00:24:49.098 "zone_append": false, 00:24:49.098 "compare": false, 00:24:49.098 "compare_and_write": false, 00:24:49.098 "abort": true, 00:24:49.098 "seek_hole": false, 00:24:49.098 "seek_data": false, 00:24:49.098 "copy": true, 00:24:49.098 "nvme_iov_md": 
false 00:24:49.098 }, 00:24:49.098 "memory_domains": [ 00:24:49.098 { 00:24:49.098 "dma_device_id": "system", 00:24:49.098 "dma_device_type": 1 00:24:49.098 }, 00:24:49.098 { 00:24:49.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:49.098 "dma_device_type": 2 00:24:49.098 } 00:24:49.098 ], 00:24:49.098 "driver_specific": {} 00:24:49.098 }' 00:24:49.098 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:49.356 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:49.356 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:49.356 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:49.356 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:49.356 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:49.356 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:49.356 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:49.357 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:49.357 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:49.357 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:49.615 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:49.615 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:49.615 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:49.615 18:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:49.873 18:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:49.873 "name": "BaseBdev4", 00:24:49.873 "aliases": [ 00:24:49.873 "46f10735-d4ff-4287-a94e-173dee7c13b9" 00:24:49.873 ], 00:24:49.873 "product_name": "Malloc disk", 00:24:49.873 "block_size": 512, 00:24:49.873 "num_blocks": 65536, 00:24:49.873 "uuid": "46f10735-d4ff-4287-a94e-173dee7c13b9", 00:24:49.873 "assigned_rate_limits": { 00:24:49.873 "rw_ios_per_sec": 0, 00:24:49.873 "rw_mbytes_per_sec": 0, 00:24:49.873 "r_mbytes_per_sec": 0, 00:24:49.873 "w_mbytes_per_sec": 0 00:24:49.873 }, 00:24:49.873 "claimed": true, 00:24:49.873 "claim_type": "exclusive_write", 00:24:49.873 "zoned": false, 00:24:49.873 "supported_io_types": { 00:24:49.873 "read": true, 00:24:49.873 "write": true, 00:24:49.873 "unmap": true, 00:24:49.873 "flush": true, 00:24:49.873 "reset": true, 00:24:49.873 "nvme_admin": false, 00:24:49.873 "nvme_io": false, 00:24:49.873 "nvme_io_md": false, 00:24:49.873 "write_zeroes": true, 00:24:49.873 "zcopy": true, 00:24:49.873 "get_zone_info": false, 00:24:49.873 "zone_management": false, 00:24:49.873 "zone_append": false, 00:24:49.873 "compare": false, 00:24:49.873 "compare_and_write": false, 00:24:49.873 "abort": true, 00:24:49.873 "seek_hole": false, 00:24:49.873 "seek_data": false, 00:24:49.873 "copy": true, 00:24:49.873 "nvme_iov_md": false 00:24:49.873 }, 00:24:49.873 "memory_domains": [ 00:24:49.873 { 00:24:49.873 "dma_device_id": "system", 00:24:49.873 "dma_device_type": 1 
00:24:49.873 }, 00:24:49.873 { 00:24:49.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:49.873 "dma_device_type": 2 00:24:49.873 } 00:24:49.873 ], 00:24:49.873 "driver_specific": {} 00:24:49.873 }' 00:24:49.873 18:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:49.873 18:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:49.873 18:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:49.873 18:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:49.873 18:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:49.873 18:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:49.873 18:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:49.873 18:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:50.132 18:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:50.132 18:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:50.132 18:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:50.132 18:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:50.132 18:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:50.390 [2024-07-25 18:51:50.798489] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:50.390 [2024-07-25 18:51:50.798527] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:50.390 [2024-07-25 18:51:50.798612] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:50.390 [2024-07-25 18:51:50.798687] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:50.390 [2024-07-25 18:51:50.798696] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:24:50.390 18:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 137931 00:24:50.390 18:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 137931 ']' 00:24:50.390 18:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 137931 00:24:50.390 18:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:24:50.390 18:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:50.390 18:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 137931 00:24:50.390 18:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:50.390 18:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:50.390 killing process with pid 137931 00:24:50.390 18:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 137931' 00:24:50.390 18:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- 
# kill 137931 00:24:50.390 [2024-07-25 18:51:50.844927] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:50.390 18:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 137931 00:24:50.649 [2024-07-25 18:51:51.178007] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:52.025 18:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:24:52.025 ************************************ 00:24:52.025 END TEST raid_state_function_test_sb 00:24:52.025 ************************************ 00:24:52.025 00:24:52.025 real 0m31.669s 00:24:52.025 user 0m56.594s 00:24:52.025 sys 0m5.371s 00:24:52.025 18:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:52.025 18:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:52.025 18:51:52 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:24:52.025 18:51:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:24:52.025 18:51:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:52.025 18:51:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:52.025 ************************************ 00:24:52.025 START TEST raid_superblock_test 00:24:52.025 ************************************ 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=138997 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 138997 /var/tmp/spdk-raid.sock 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 138997 
']' 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:52.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:52.025 18:51:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:52.025 [2024-07-25 18:51:52.517377] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:52.025 [2024-07-25 18:51:52.517628] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138997 ] 00:24:52.284 [2024-07-25 18:51:52.707411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.542 [2024-07-25 18:51:52.953215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.801 [2024-07-25 18:51:53.140868] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:53.059 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:53.059 18:51:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:24:53.059 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:24:53.059 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:53.059 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:24:53.059 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:24:53.059 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:53.059 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:53.059 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:24:53.059 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:53.059 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:53.317 malloc1 00:24:53.317 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:53.575 [2024-07-25 18:51:53.963419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:53.575 [2024-07-25 18:51:53.963530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:53.575 [2024-07-25 18:51:53.963587] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000006c80 00:24:53.575 [2024-07-25 18:51:53.963609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:53.575 [2024-07-25 18:51:53.966325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:53.575 [2024-07-25 18:51:53.966374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:53.576 pt1 00:24:53.576 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:24:53.576 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:53.576 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:24:53.576 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:24:53.576 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:53.576 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:53.576 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:24:53.576 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:53.576 18:51:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:53.835 malloc2 00:24:53.835 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:53.835 [2024-07-25 18:51:54.375352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:53.835 [2024-07-25 18:51:54.375498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:53.835 [2024-07-25 18:51:54.375536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:24:53.835 [2024-07-25 18:51:54.375557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:53.835 [2024-07-25 18:51:54.378192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:53.835 [2024-07-25 18:51:54.378255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:53.835 pt2 00:24:53.835 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:24:53.835 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:53.835 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:24:53.835 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:24:53.835 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:53.835 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:53.835 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:24:53.835 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:53.835 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b malloc3 00:24:54.095 malloc3 00:24:54.095 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:54.365 [2024-07-25 18:51:54.769823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:54.365 [2024-07-25 18:51:54.769938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:54.365 [2024-07-25 18:51:54.769972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:54.365 [2024-07-25 18:51:54.770001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:54.365 [2024-07-25 18:51:54.772588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:54.365 [2024-07-25 18:51:54.772641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:54.365 pt3 00:24:54.365 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:24:54.365 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:54.365 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:24:54.365 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:24:54.365 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:24:54.365 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:54.365 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:24:54.365 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:54.365 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:24:54.631 malloc4 00:24:54.631 18:51:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:54.631 [2024-07-25 18:51:55.207332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:54.631 [2024-07-25 18:51:55.207465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:54.890 [2024-07-25 18:51:55.207501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:54.890 [2024-07-25 18:51:55.207534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:54.890 [2024-07-25 18:51:55.210140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:54.890 [2024-07-25 18:51:55.210193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:54.890 pt4 00:24:54.890 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:24:54.890 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:54.890 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:24:54.890 
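The trace up to this point is the full fixture setup for raid_superblock_test: four malloc bdevs (32 MiB, 512-byte blocks, hence the 65536-block passthru devices dumped later) are each wrapped in a passthru bdev with a fixed UUID, and pt1..pt4 are then assembled into a concat raid with a 64 KiB strip and an on-disk superblock (-s). A minimal sketch of the same RPC sequence, assuming the bdev_svc app started earlier in this log is still listening on /var/tmp/spdk-raid.sock; the rpc shell variable and the loop are illustrative shorthand, while the RPC names and arguments are copied from the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        # 32 MiB malloc bdev with 512-byte blocks, as in the trace above
        $rpc bdev_malloc_create 32 512 -b "malloc$i"
        # wrap it in a passthru bdev with a deterministic UUID
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    # concat level, 64 KiB strip, superblock enabled (-s)
    $rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

The debug output that follows confirms each pt bdev being claimed and the raid bdev coming online before verify_raid_bdev_state checks the reported state.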
[2024-07-25 18:51:55.391433] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:54.890 [2024-07-25 18:51:55.393713] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:54.890 [2024-07-25 18:51:55.393802] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:54.890 [2024-07-25 18:51:55.393876] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:54.890 [2024-07-25 18:51:55.394049] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:24:54.890 [2024-07-25 18:51:55.394074] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:54.890 [2024-07-25 18:51:55.394261] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:54.890 [2024-07-25 18:51:55.394616] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:24:54.890 [2024-07-25 18:51:55.394636] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:24:54.890 [2024-07-25 18:51:55.394806] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:54.890 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:54.890 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:54.890 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:54.890 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:54.890 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:54.890 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:54.890 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:54.890 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:54.890 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:54.890 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:54.890 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.890 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.148 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:55.148 "name": "raid_bdev1", 00:24:55.148 "uuid": "0db38e49-373a-4341-a565-675a3e26a23a", 00:24:55.148 "strip_size_kb": 64, 00:24:55.148 "state": "online", 00:24:55.148 "raid_level": "concat", 00:24:55.148 "superblock": true, 00:24:55.148 "num_base_bdevs": 4, 00:24:55.148 "num_base_bdevs_discovered": 4, 00:24:55.148 "num_base_bdevs_operational": 4, 00:24:55.148 "base_bdevs_list": [ 00:24:55.148 { 00:24:55.148 "name": "pt1", 00:24:55.148 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:55.148 "is_configured": true, 00:24:55.148 "data_offset": 2048, 00:24:55.148 "data_size": 63488 00:24:55.148 }, 00:24:55.148 { 00:24:55.148 "name": "pt2", 00:24:55.148 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:55.148 "is_configured": true, 00:24:55.148 "data_offset": 2048, 00:24:55.148 
"data_size": 63488 00:24:55.148 }, 00:24:55.148 { 00:24:55.148 "name": "pt3", 00:24:55.148 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:55.148 "is_configured": true, 00:24:55.148 "data_offset": 2048, 00:24:55.148 "data_size": 63488 00:24:55.148 }, 00:24:55.148 { 00:24:55.148 "name": "pt4", 00:24:55.148 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:55.148 "is_configured": true, 00:24:55.148 "data_offset": 2048, 00:24:55.148 "data_size": 63488 00:24:55.148 } 00:24:55.148 ] 00:24:55.148 }' 00:24:55.148 18:51:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:55.148 18:51:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.716 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:24:55.716 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:55.716 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:55.716 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:55.716 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:55.716 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:55.716 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:55.716 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:55.975 [2024-07-25 18:51:56.315748] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:55.975 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:55.975 "name": "raid_bdev1", 00:24:55.975 "aliases": [ 00:24:55.975 "0db38e49-373a-4341-a565-675a3e26a23a" 00:24:55.975 ], 00:24:55.975 "product_name": "Raid Volume", 00:24:55.975 "block_size": 512, 00:24:55.975 "num_blocks": 253952, 00:24:55.975 "uuid": "0db38e49-373a-4341-a565-675a3e26a23a", 00:24:55.975 "assigned_rate_limits": { 00:24:55.975 "rw_ios_per_sec": 0, 00:24:55.975 "rw_mbytes_per_sec": 0, 00:24:55.975 "r_mbytes_per_sec": 0, 00:24:55.975 "w_mbytes_per_sec": 0 00:24:55.975 }, 00:24:55.975 "claimed": false, 00:24:55.975 "zoned": false, 00:24:55.975 "supported_io_types": { 00:24:55.975 "read": true, 00:24:55.975 "write": true, 00:24:55.975 "unmap": true, 00:24:55.975 "flush": true, 00:24:55.975 "reset": true, 00:24:55.975 "nvme_admin": false, 00:24:55.975 "nvme_io": false, 00:24:55.975 "nvme_io_md": false, 00:24:55.975 "write_zeroes": true, 00:24:55.975 "zcopy": false, 00:24:55.975 "get_zone_info": false, 00:24:55.975 "zone_management": false, 00:24:55.975 "zone_append": false, 00:24:55.975 "compare": false, 00:24:55.975 "compare_and_write": false, 00:24:55.975 "abort": false, 00:24:55.975 "seek_hole": false, 00:24:55.975 "seek_data": false, 00:24:55.975 "copy": false, 00:24:55.975 "nvme_iov_md": false 00:24:55.975 }, 00:24:55.975 "memory_domains": [ 00:24:55.975 { 00:24:55.975 "dma_device_id": "system", 00:24:55.975 "dma_device_type": 1 00:24:55.975 }, 00:24:55.975 { 00:24:55.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:55.975 "dma_device_type": 2 00:24:55.975 }, 00:24:55.975 { 00:24:55.975 "dma_device_id": "system", 00:24:55.975 "dma_device_type": 1 00:24:55.975 }, 00:24:55.975 { 00:24:55.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:55.975 
"dma_device_type": 2 00:24:55.975 }, 00:24:55.975 { 00:24:55.975 "dma_device_id": "system", 00:24:55.975 "dma_device_type": 1 00:24:55.975 }, 00:24:55.975 { 00:24:55.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:55.975 "dma_device_type": 2 00:24:55.975 }, 00:24:55.975 { 00:24:55.975 "dma_device_id": "system", 00:24:55.975 "dma_device_type": 1 00:24:55.975 }, 00:24:55.975 { 00:24:55.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:55.975 "dma_device_type": 2 00:24:55.975 } 00:24:55.975 ], 00:24:55.975 "driver_specific": { 00:24:55.975 "raid": { 00:24:55.975 "uuid": "0db38e49-373a-4341-a565-675a3e26a23a", 00:24:55.975 "strip_size_kb": 64, 00:24:55.975 "state": "online", 00:24:55.975 "raid_level": "concat", 00:24:55.975 "superblock": true, 00:24:55.975 "num_base_bdevs": 4, 00:24:55.975 "num_base_bdevs_discovered": 4, 00:24:55.975 "num_base_bdevs_operational": 4, 00:24:55.975 "base_bdevs_list": [ 00:24:55.975 { 00:24:55.975 "name": "pt1", 00:24:55.975 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:55.975 "is_configured": true, 00:24:55.975 "data_offset": 2048, 00:24:55.975 "data_size": 63488 00:24:55.975 }, 00:24:55.975 { 00:24:55.975 "name": "pt2", 00:24:55.975 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:55.975 "is_configured": true, 00:24:55.975 "data_offset": 2048, 00:24:55.975 "data_size": 63488 00:24:55.975 }, 00:24:55.975 { 00:24:55.975 "name": "pt3", 00:24:55.975 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:55.975 "is_configured": true, 00:24:55.975 "data_offset": 2048, 00:24:55.975 "data_size": 63488 00:24:55.975 }, 00:24:55.975 { 00:24:55.975 "name": "pt4", 00:24:55.975 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:55.975 "is_configured": true, 00:24:55.975 "data_offset": 2048, 00:24:55.975 "data_size": 63488 00:24:55.975 } 00:24:55.975 ] 00:24:55.975 } 00:24:55.975 } 00:24:55.975 }' 00:24:55.975 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:55.975 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:55.975 pt2 00:24:55.975 pt3 00:24:55.975 pt4' 00:24:55.975 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:55.975 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:55.975 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:55.975 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:55.975 "name": "pt1", 00:24:55.975 "aliases": [ 00:24:55.975 "00000000-0000-0000-0000-000000000001" 00:24:55.975 ], 00:24:55.975 "product_name": "passthru", 00:24:55.975 "block_size": 512, 00:24:55.975 "num_blocks": 65536, 00:24:55.975 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:55.975 "assigned_rate_limits": { 00:24:55.975 "rw_ios_per_sec": 0, 00:24:55.975 "rw_mbytes_per_sec": 0, 00:24:55.975 "r_mbytes_per_sec": 0, 00:24:55.975 "w_mbytes_per_sec": 0 00:24:55.975 }, 00:24:55.975 "claimed": true, 00:24:55.975 "claim_type": "exclusive_write", 00:24:55.975 "zoned": false, 00:24:55.975 "supported_io_types": { 00:24:55.975 "read": true, 00:24:55.975 "write": true, 00:24:55.975 "unmap": true, 00:24:55.975 "flush": true, 00:24:55.975 "reset": true, 00:24:55.975 "nvme_admin": false, 00:24:55.975 "nvme_io": false, 00:24:55.975 "nvme_io_md": false, 00:24:55.975 
"write_zeroes": true, 00:24:55.975 "zcopy": true, 00:24:55.975 "get_zone_info": false, 00:24:55.975 "zone_management": false, 00:24:55.975 "zone_append": false, 00:24:55.975 "compare": false, 00:24:55.975 "compare_and_write": false, 00:24:55.975 "abort": true, 00:24:55.975 "seek_hole": false, 00:24:55.975 "seek_data": false, 00:24:55.975 "copy": true, 00:24:55.975 "nvme_iov_md": false 00:24:55.975 }, 00:24:55.975 "memory_domains": [ 00:24:55.975 { 00:24:55.975 "dma_device_id": "system", 00:24:55.975 "dma_device_type": 1 00:24:55.975 }, 00:24:55.975 { 00:24:55.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:55.975 "dma_device_type": 2 00:24:55.975 } 00:24:55.975 ], 00:24:55.975 "driver_specific": { 00:24:55.975 "passthru": { 00:24:55.975 "name": "pt1", 00:24:55.975 "base_bdev_name": "malloc1" 00:24:55.975 } 00:24:55.975 } 00:24:55.975 }' 00:24:55.975 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:56.233 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:56.233 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:56.233 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:56.233 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:56.233 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:56.233 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:56.233 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:56.233 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:56.233 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:56.492 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:56.492 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:56.492 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:56.492 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:56.492 18:51:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:56.750 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:56.750 "name": "pt2", 00:24:56.750 "aliases": [ 00:24:56.750 "00000000-0000-0000-0000-000000000002" 00:24:56.750 ], 00:24:56.750 "product_name": "passthru", 00:24:56.750 "block_size": 512, 00:24:56.750 "num_blocks": 65536, 00:24:56.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:56.750 "assigned_rate_limits": { 00:24:56.750 "rw_ios_per_sec": 0, 00:24:56.750 "rw_mbytes_per_sec": 0, 00:24:56.750 "r_mbytes_per_sec": 0, 00:24:56.750 "w_mbytes_per_sec": 0 00:24:56.750 }, 00:24:56.750 "claimed": true, 00:24:56.750 "claim_type": "exclusive_write", 00:24:56.750 "zoned": false, 00:24:56.750 "supported_io_types": { 00:24:56.750 "read": true, 00:24:56.750 "write": true, 00:24:56.750 "unmap": true, 00:24:56.750 "flush": true, 00:24:56.750 "reset": true, 00:24:56.750 "nvme_admin": false, 00:24:56.750 "nvme_io": false, 00:24:56.750 "nvme_io_md": false, 00:24:56.750 "write_zeroes": true, 00:24:56.750 "zcopy": true, 00:24:56.750 "get_zone_info": false, 00:24:56.750 "zone_management": false, 00:24:56.750 "zone_append": 
false, 00:24:56.750 "compare": false, 00:24:56.750 "compare_and_write": false, 00:24:56.750 "abort": true, 00:24:56.750 "seek_hole": false, 00:24:56.750 "seek_data": false, 00:24:56.750 "copy": true, 00:24:56.750 "nvme_iov_md": false 00:24:56.750 }, 00:24:56.750 "memory_domains": [ 00:24:56.750 { 00:24:56.750 "dma_device_id": "system", 00:24:56.750 "dma_device_type": 1 00:24:56.750 }, 00:24:56.750 { 00:24:56.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:56.750 "dma_device_type": 2 00:24:56.750 } 00:24:56.750 ], 00:24:56.750 "driver_specific": { 00:24:56.750 "passthru": { 00:24:56.750 "name": "pt2", 00:24:56.750 "base_bdev_name": "malloc2" 00:24:56.750 } 00:24:56.750 } 00:24:56.750 }' 00:24:56.750 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:56.750 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:56.750 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:56.750 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:56.751 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:56.751 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:56.751 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:57.009 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:57.009 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:57.009 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:57.009 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:57.009 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:57.009 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:57.009 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:57.009 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:57.286 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:57.286 "name": "pt3", 00:24:57.286 "aliases": [ 00:24:57.286 "00000000-0000-0000-0000-000000000003" 00:24:57.286 ], 00:24:57.286 "product_name": "passthru", 00:24:57.286 "block_size": 512, 00:24:57.286 "num_blocks": 65536, 00:24:57.286 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:57.286 "assigned_rate_limits": { 00:24:57.286 "rw_ios_per_sec": 0, 00:24:57.286 "rw_mbytes_per_sec": 0, 00:24:57.286 "r_mbytes_per_sec": 0, 00:24:57.286 "w_mbytes_per_sec": 0 00:24:57.286 }, 00:24:57.286 "claimed": true, 00:24:57.286 "claim_type": "exclusive_write", 00:24:57.286 "zoned": false, 00:24:57.286 "supported_io_types": { 00:24:57.286 "read": true, 00:24:57.286 "write": true, 00:24:57.286 "unmap": true, 00:24:57.286 "flush": true, 00:24:57.286 "reset": true, 00:24:57.286 "nvme_admin": false, 00:24:57.286 "nvme_io": false, 00:24:57.286 "nvme_io_md": false, 00:24:57.286 "write_zeroes": true, 00:24:57.286 "zcopy": true, 00:24:57.286 "get_zone_info": false, 00:24:57.286 "zone_management": false, 00:24:57.286 "zone_append": false, 00:24:57.286 "compare": false, 00:24:57.286 "compare_and_write": false, 00:24:57.286 "abort": true, 00:24:57.286 "seek_hole": false, 00:24:57.286 
"seek_data": false, 00:24:57.286 "copy": true, 00:24:57.286 "nvme_iov_md": false 00:24:57.286 }, 00:24:57.286 "memory_domains": [ 00:24:57.286 { 00:24:57.286 "dma_device_id": "system", 00:24:57.286 "dma_device_type": 1 00:24:57.286 }, 00:24:57.286 { 00:24:57.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:57.286 "dma_device_type": 2 00:24:57.286 } 00:24:57.286 ], 00:24:57.286 "driver_specific": { 00:24:57.286 "passthru": { 00:24:57.286 "name": "pt3", 00:24:57.286 "base_bdev_name": "malloc3" 00:24:57.286 } 00:24:57.286 } 00:24:57.286 }' 00:24:57.286 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:57.286 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:57.286 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:57.286 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:57.287 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:57.287 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:57.287 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:57.545 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:57.545 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:57.545 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:57.545 18:51:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:57.545 18:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:57.545 18:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:57.545 18:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:57.545 18:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:57.804 18:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:57.804 "name": "pt4", 00:24:57.804 "aliases": [ 00:24:57.804 "00000000-0000-0000-0000-000000000004" 00:24:57.804 ], 00:24:57.804 "product_name": "passthru", 00:24:57.804 "block_size": 512, 00:24:57.804 "num_blocks": 65536, 00:24:57.804 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:57.804 "assigned_rate_limits": { 00:24:57.804 "rw_ios_per_sec": 0, 00:24:57.804 "rw_mbytes_per_sec": 0, 00:24:57.804 "r_mbytes_per_sec": 0, 00:24:57.804 "w_mbytes_per_sec": 0 00:24:57.804 }, 00:24:57.804 "claimed": true, 00:24:57.804 "claim_type": "exclusive_write", 00:24:57.804 "zoned": false, 00:24:57.804 "supported_io_types": { 00:24:57.804 "read": true, 00:24:57.804 "write": true, 00:24:57.804 "unmap": true, 00:24:57.804 "flush": true, 00:24:57.804 "reset": true, 00:24:57.804 "nvme_admin": false, 00:24:57.804 "nvme_io": false, 00:24:57.804 "nvme_io_md": false, 00:24:57.804 "write_zeroes": true, 00:24:57.804 "zcopy": true, 00:24:57.804 "get_zone_info": false, 00:24:57.804 "zone_management": false, 00:24:57.804 "zone_append": false, 00:24:57.804 "compare": false, 00:24:57.804 "compare_and_write": false, 00:24:57.804 "abort": true, 00:24:57.804 "seek_hole": false, 00:24:57.804 "seek_data": false, 00:24:57.804 "copy": true, 00:24:57.804 "nvme_iov_md": false 00:24:57.804 }, 00:24:57.804 "memory_domains": [ 00:24:57.804 { 
00:24:57.804 "dma_device_id": "system", 00:24:57.804 "dma_device_type": 1 00:24:57.804 }, 00:24:57.804 { 00:24:57.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:57.804 "dma_device_type": 2 00:24:57.804 } 00:24:57.804 ], 00:24:57.804 "driver_specific": { 00:24:57.804 "passthru": { 00:24:57.804 "name": "pt4", 00:24:57.804 "base_bdev_name": "malloc4" 00:24:57.804 } 00:24:57.804 } 00:24:57.804 }' 00:24:57.804 18:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:57.804 18:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:57.804 18:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:57.804 18:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:57.804 18:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:57.804 18:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:57.804 18:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:58.064 18:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:58.064 18:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:58.064 18:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:58.064 18:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:58.064 18:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:58.064 18:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:58.064 18:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:24:58.323 [2024-07-25 18:51:58.760201] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:58.323 18:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=0db38e49-373a-4341-a565-675a3e26a23a 00:24:58.323 18:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 0db38e49-373a-4341-a565-675a3e26a23a ']' 00:24:58.323 18:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:58.582 [2024-07-25 18:51:59.024022] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:58.582 [2024-07-25 18:51:59.024064] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:58.582 [2024-07-25 18:51:59.024174] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:58.582 [2024-07-25 18:51:59.024258] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:58.582 [2024-07-25 18:51:59.024268] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:24:58.582 18:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.582 18:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:24:58.841 18:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:24:58.841 18:51:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:24:58.841 18:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:24:58.841 18:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:59.100 18:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:24:59.100 18:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:59.360 18:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:24:59.360 18:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:59.360 18:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:24:59.360 18:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:59.619 18:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:59.619 18:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:59.878 18:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:24:59.878 18:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:59.878 18:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:24:59.878 18:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:59.878 18:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:59.878 18:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:59.878 18:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:59.878 18:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:59.878 18:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:59.878 18:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:59.878 18:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:59.878 18:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:59.878 18:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 
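The bdev_raid_create invocation traced here is the negative case: the raid and its passthru bdevs were deleted above, but the malloc bdevs underneath still carry the superblock written for raid_bdev1, so creating a new concat raid directly on malloc1..malloc4 is expected to be rejected. The NOT helper from autotest_common.sh (the es/valid_exec_arg trace that follows) succeeds only when the wrapped command fails. A rough bash equivalent of the same check, assuming the rpc variable from the setup sketch; the if/echo wrapper is illustrative only, the actual test relies on NOT:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # expected to fail: the malloc bdevs still hold raid_bdev1's superblock
    if $rpc bdev_raid_create -z 64 -r concat \
            -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
        echo "bdev_raid_create unexpectedly succeeded" >&2
        exit 1
    fi

The JSON-RPC response traced below shows the expected rejection: code -17, "Failed to create RAID bdev raid_bdev1: File exists".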
00:25:00.137 [2024-07-25 18:52:00.536026] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:00.137 [2024-07-25 18:52:00.538356] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:00.137 [2024-07-25 18:52:00.538431] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:00.137 [2024-07-25 18:52:00.538463] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:25:00.137 [2024-07-25 18:52:00.538513] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:00.137 [2024-07-25 18:52:00.538613] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:00.137 [2024-07-25 18:52:00.538664] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:25:00.137 [2024-07-25 18:52:00.538698] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:25:00.137 [2024-07-25 18:52:00.538723] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:00.137 [2024-07-25 18:52:00.538732] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state configuring 00:25:00.137 request: 00:25:00.137 { 00:25:00.137 "name": "raid_bdev1", 00:25:00.137 "raid_level": "concat", 00:25:00.137 "base_bdevs": [ 00:25:00.137 "malloc1", 00:25:00.137 "malloc2", 00:25:00.137 "malloc3", 00:25:00.137 "malloc4" 00:25:00.137 ], 00:25:00.137 "strip_size_kb": 64, 00:25:00.137 "superblock": false, 00:25:00.137 "method": "bdev_raid_create", 00:25:00.137 "req_id": 1 00:25:00.137 } 00:25:00.137 Got JSON-RPC error response 00:25:00.137 response: 00:25:00.137 { 00:25:00.137 "code": -17, 00:25:00.137 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:00.137 } 00:25:00.137 18:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:25:00.137 18:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:00.137 18:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:00.137 18:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:00.137 18:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.137 18:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:25:00.397 18:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:25:00.397 18:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:25:00.397 18:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:00.397 [2024-07-25 18:52:00.892026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:00.397 [2024-07-25 18:52:00.892108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:00.397 [2024-07-25 18:52:00.892160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:25:00.397 [2024-07-25 
18:52:00.892207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:00.397 [2024-07-25 18:52:00.894893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:00.397 [2024-07-25 18:52:00.894942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:00.397 [2024-07-25 18:52:00.895064] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:00.397 [2024-07-25 18:52:00.895117] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:00.397 pt1 00:25:00.397 18:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:25:00.397 18:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:00.397 18:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:00.397 18:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:00.397 18:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:00.397 18:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:00.397 18:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:00.397 18:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:00.397 18:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:00.397 18:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:00.397 18:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.397 18:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.657 18:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:00.657 "name": "raid_bdev1", 00:25:00.657 "uuid": "0db38e49-373a-4341-a565-675a3e26a23a", 00:25:00.657 "strip_size_kb": 64, 00:25:00.657 "state": "configuring", 00:25:00.657 "raid_level": "concat", 00:25:00.657 "superblock": true, 00:25:00.657 "num_base_bdevs": 4, 00:25:00.657 "num_base_bdevs_discovered": 1, 00:25:00.657 "num_base_bdevs_operational": 4, 00:25:00.657 "base_bdevs_list": [ 00:25:00.657 { 00:25:00.657 "name": "pt1", 00:25:00.657 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:00.657 "is_configured": true, 00:25:00.657 "data_offset": 2048, 00:25:00.657 "data_size": 63488 00:25:00.657 }, 00:25:00.657 { 00:25:00.657 "name": null, 00:25:00.657 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:00.657 "is_configured": false, 00:25:00.657 "data_offset": 2048, 00:25:00.657 "data_size": 63488 00:25:00.657 }, 00:25:00.657 { 00:25:00.657 "name": null, 00:25:00.657 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:00.657 "is_configured": false, 00:25:00.657 "data_offset": 2048, 00:25:00.657 "data_size": 63488 00:25:00.657 }, 00:25:00.657 { 00:25:00.657 "name": null, 00:25:00.657 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:00.657 "is_configured": false, 00:25:00.657 "data_offset": 2048, 00:25:00.657 "data_size": 63488 00:25:00.657 } 00:25:00.657 ] 00:25:00.657 }' 00:25:00.657 18:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:00.657 18:52:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.224 18:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:25:01.224 18:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:01.482 [2024-07-25 18:52:01.866266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:01.482 [2024-07-25 18:52:01.866388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:01.482 [2024-07-25 18:52:01.866440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:01.482 [2024-07-25 18:52:01.866482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:01.482 [2024-07-25 18:52:01.867045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:01.482 [2024-07-25 18:52:01.867084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:01.482 [2024-07-25 18:52:01.867214] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:01.482 [2024-07-25 18:52:01.867239] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:01.482 pt2 00:25:01.482 18:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:01.482 [2024-07-25 18:52:02.046309] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:01.741 18:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:25:01.741 18:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:01.741 18:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:01.741 18:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:01.741 18:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:01.741 18:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:01.741 18:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:01.741 18:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:01.741 18:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:01.741 18:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:01.741 18:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.741 18:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.999 18:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:01.999 "name": "raid_bdev1", 00:25:01.999 "uuid": "0db38e49-373a-4341-a565-675a3e26a23a", 00:25:01.999 "strip_size_kb": 64, 00:25:01.999 "state": "configuring", 00:25:01.999 "raid_level": "concat", 00:25:01.999 "superblock": true, 00:25:01.999 "num_base_bdevs": 4, 00:25:01.999 "num_base_bdevs_discovered": 1, 00:25:01.999 "num_base_bdevs_operational": 
4, 00:25:01.999 "base_bdevs_list": [ 00:25:01.999 { 00:25:01.999 "name": "pt1", 00:25:01.999 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:01.999 "is_configured": true, 00:25:01.999 "data_offset": 2048, 00:25:01.999 "data_size": 63488 00:25:01.999 }, 00:25:01.999 { 00:25:01.999 "name": null, 00:25:01.999 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:01.999 "is_configured": false, 00:25:01.999 "data_offset": 2048, 00:25:01.999 "data_size": 63488 00:25:01.999 }, 00:25:01.999 { 00:25:01.999 "name": null, 00:25:01.999 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:01.999 "is_configured": false, 00:25:01.999 "data_offset": 2048, 00:25:01.999 "data_size": 63488 00:25:01.999 }, 00:25:01.999 { 00:25:01.999 "name": null, 00:25:01.999 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:01.999 "is_configured": false, 00:25:01.999 "data_offset": 2048, 00:25:01.999 "data_size": 63488 00:25:01.999 } 00:25:01.999 ] 00:25:01.999 }' 00:25:01.999 18:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:01.999 18:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.567 18:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:25:02.567 18:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:25:02.567 18:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:02.567 [2024-07-25 18:52:03.114460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:02.567 [2024-07-25 18:52:03.114569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:02.567 [2024-07-25 18:52:03.114609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:25:02.567 [2024-07-25 18:52:03.114658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:02.567 [2024-07-25 18:52:03.115194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:02.567 [2024-07-25 18:52:03.115238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:02.567 [2024-07-25 18:52:03.115353] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:02.567 [2024-07-25 18:52:03.115376] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:02.567 pt2 00:25:02.567 18:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:25:02.567 18:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:25:02.567 18:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:02.835 [2024-07-25 18:52:03.294516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:02.835 [2024-07-25 18:52:03.294607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:02.835 [2024-07-25 18:52:03.294635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:25:02.835 [2024-07-25 18:52:03.294689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:02.835 [2024-07-25 18:52:03.295177] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:02.835 [2024-07-25 18:52:03.295220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:02.835 [2024-07-25 18:52:03.295332] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:02.835 [2024-07-25 18:52:03.295353] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:02.835 pt3 00:25:02.835 18:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:25:02.835 18:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:25:02.835 18:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:03.096 [2024-07-25 18:52:03.470520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:03.096 [2024-07-25 18:52:03.470592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:03.096 [2024-07-25 18:52:03.470640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:25:03.096 [2024-07-25 18:52:03.470688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:03.096 [2024-07-25 18:52:03.471173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:03.096 [2024-07-25 18:52:03.471217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:03.096 [2024-07-25 18:52:03.471321] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:25:03.096 [2024-07-25 18:52:03.471349] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:03.096 [2024-07-25 18:52:03.471481] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:25:03.096 [2024-07-25 18:52:03.471491] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:03.096 [2024-07-25 18:52:03.471575] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:03.096 [2024-07-25 18:52:03.471905] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:25:03.096 [2024-07-25 18:52:03.471926] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:25:03.096 [2024-07-25 18:52:03.472050] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:03.096 pt4 00:25:03.096 18:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:25:03.096 18:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:25:03.096 18:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:03.096 18:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:03.096 18:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:03.096 18:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:03.096 18:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:03.096 18:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:03.096 18:52:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:03.096 18:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:03.096 18:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:03.096 18:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:03.096 18:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.096 18:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.354 18:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:03.354 "name": "raid_bdev1", 00:25:03.354 "uuid": "0db38e49-373a-4341-a565-675a3e26a23a", 00:25:03.354 "strip_size_kb": 64, 00:25:03.354 "state": "online", 00:25:03.354 "raid_level": "concat", 00:25:03.354 "superblock": true, 00:25:03.354 "num_base_bdevs": 4, 00:25:03.354 "num_base_bdevs_discovered": 4, 00:25:03.354 "num_base_bdevs_operational": 4, 00:25:03.354 "base_bdevs_list": [ 00:25:03.354 { 00:25:03.354 "name": "pt1", 00:25:03.354 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:03.354 "is_configured": true, 00:25:03.354 "data_offset": 2048, 00:25:03.354 "data_size": 63488 00:25:03.354 }, 00:25:03.354 { 00:25:03.354 "name": "pt2", 00:25:03.354 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:03.354 "is_configured": true, 00:25:03.354 "data_offset": 2048, 00:25:03.354 "data_size": 63488 00:25:03.354 }, 00:25:03.354 { 00:25:03.354 "name": "pt3", 00:25:03.354 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:03.354 "is_configured": true, 00:25:03.354 "data_offset": 2048, 00:25:03.354 "data_size": 63488 00:25:03.354 }, 00:25:03.354 { 00:25:03.354 "name": "pt4", 00:25:03.354 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:03.354 "is_configured": true, 00:25:03.354 "data_offset": 2048, 00:25:03.354 "data_size": 63488 00:25:03.354 } 00:25:03.354 ] 00:25:03.354 }' 00:25:03.354 18:52:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:03.354 18:52:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.921 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:25:03.921 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:03.921 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:03.921 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:03.921 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:03.921 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:03.921 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:03.921 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:03.921 [2024-07-25 18:52:04.402962] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:03.921 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:03.921 "name": "raid_bdev1", 00:25:03.921 "aliases": [ 00:25:03.921 
"0db38e49-373a-4341-a565-675a3e26a23a" 00:25:03.921 ], 00:25:03.921 "product_name": "Raid Volume", 00:25:03.921 "block_size": 512, 00:25:03.921 "num_blocks": 253952, 00:25:03.921 "uuid": "0db38e49-373a-4341-a565-675a3e26a23a", 00:25:03.921 "assigned_rate_limits": { 00:25:03.921 "rw_ios_per_sec": 0, 00:25:03.921 "rw_mbytes_per_sec": 0, 00:25:03.921 "r_mbytes_per_sec": 0, 00:25:03.921 "w_mbytes_per_sec": 0 00:25:03.921 }, 00:25:03.921 "claimed": false, 00:25:03.921 "zoned": false, 00:25:03.921 "supported_io_types": { 00:25:03.921 "read": true, 00:25:03.921 "write": true, 00:25:03.921 "unmap": true, 00:25:03.921 "flush": true, 00:25:03.921 "reset": true, 00:25:03.921 "nvme_admin": false, 00:25:03.921 "nvme_io": false, 00:25:03.921 "nvme_io_md": false, 00:25:03.921 "write_zeroes": true, 00:25:03.921 "zcopy": false, 00:25:03.921 "get_zone_info": false, 00:25:03.921 "zone_management": false, 00:25:03.921 "zone_append": false, 00:25:03.921 "compare": false, 00:25:03.922 "compare_and_write": false, 00:25:03.922 "abort": false, 00:25:03.922 "seek_hole": false, 00:25:03.922 "seek_data": false, 00:25:03.922 "copy": false, 00:25:03.922 "nvme_iov_md": false 00:25:03.922 }, 00:25:03.922 "memory_domains": [ 00:25:03.922 { 00:25:03.922 "dma_device_id": "system", 00:25:03.922 "dma_device_type": 1 00:25:03.922 }, 00:25:03.922 { 00:25:03.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.922 "dma_device_type": 2 00:25:03.922 }, 00:25:03.922 { 00:25:03.922 "dma_device_id": "system", 00:25:03.922 "dma_device_type": 1 00:25:03.922 }, 00:25:03.922 { 00:25:03.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.922 "dma_device_type": 2 00:25:03.922 }, 00:25:03.922 { 00:25:03.922 "dma_device_id": "system", 00:25:03.922 "dma_device_type": 1 00:25:03.922 }, 00:25:03.922 { 00:25:03.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.922 "dma_device_type": 2 00:25:03.922 }, 00:25:03.922 { 00:25:03.922 "dma_device_id": "system", 00:25:03.922 "dma_device_type": 1 00:25:03.922 }, 00:25:03.922 { 00:25:03.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.922 "dma_device_type": 2 00:25:03.922 } 00:25:03.922 ], 00:25:03.922 "driver_specific": { 00:25:03.922 "raid": { 00:25:03.922 "uuid": "0db38e49-373a-4341-a565-675a3e26a23a", 00:25:03.922 "strip_size_kb": 64, 00:25:03.922 "state": "online", 00:25:03.922 "raid_level": "concat", 00:25:03.922 "superblock": true, 00:25:03.922 "num_base_bdevs": 4, 00:25:03.922 "num_base_bdevs_discovered": 4, 00:25:03.922 "num_base_bdevs_operational": 4, 00:25:03.922 "base_bdevs_list": [ 00:25:03.922 { 00:25:03.922 "name": "pt1", 00:25:03.922 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:03.922 "is_configured": true, 00:25:03.922 "data_offset": 2048, 00:25:03.922 "data_size": 63488 00:25:03.922 }, 00:25:03.922 { 00:25:03.922 "name": "pt2", 00:25:03.922 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:03.922 "is_configured": true, 00:25:03.922 "data_offset": 2048, 00:25:03.922 "data_size": 63488 00:25:03.922 }, 00:25:03.922 { 00:25:03.922 "name": "pt3", 00:25:03.922 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:03.922 "is_configured": true, 00:25:03.922 "data_offset": 2048, 00:25:03.922 "data_size": 63488 00:25:03.922 }, 00:25:03.922 { 00:25:03.922 "name": "pt4", 00:25:03.922 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:03.922 "is_configured": true, 00:25:03.922 "data_offset": 2048, 00:25:03.922 "data_size": 63488 00:25:03.922 } 00:25:03.922 ] 00:25:03.922 } 00:25:03.922 } 00:25:03.922 }' 00:25:03.922 18:52:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:03.922 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:03.922 pt2 00:25:03.922 pt3 00:25:03.922 pt4' 00:25:03.922 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:03.922 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:03.922 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:04.180 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:04.180 "name": "pt1", 00:25:04.180 "aliases": [ 00:25:04.180 "00000000-0000-0000-0000-000000000001" 00:25:04.180 ], 00:25:04.180 "product_name": "passthru", 00:25:04.180 "block_size": 512, 00:25:04.180 "num_blocks": 65536, 00:25:04.180 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:04.180 "assigned_rate_limits": { 00:25:04.180 "rw_ios_per_sec": 0, 00:25:04.180 "rw_mbytes_per_sec": 0, 00:25:04.180 "r_mbytes_per_sec": 0, 00:25:04.180 "w_mbytes_per_sec": 0 00:25:04.180 }, 00:25:04.180 "claimed": true, 00:25:04.180 "claim_type": "exclusive_write", 00:25:04.180 "zoned": false, 00:25:04.180 "supported_io_types": { 00:25:04.180 "read": true, 00:25:04.180 "write": true, 00:25:04.180 "unmap": true, 00:25:04.180 "flush": true, 00:25:04.180 "reset": true, 00:25:04.180 "nvme_admin": false, 00:25:04.180 "nvme_io": false, 00:25:04.180 "nvme_io_md": false, 00:25:04.180 "write_zeroes": true, 00:25:04.180 "zcopy": true, 00:25:04.180 "get_zone_info": false, 00:25:04.180 "zone_management": false, 00:25:04.180 "zone_append": false, 00:25:04.180 "compare": false, 00:25:04.180 "compare_and_write": false, 00:25:04.180 "abort": true, 00:25:04.180 "seek_hole": false, 00:25:04.180 "seek_data": false, 00:25:04.180 "copy": true, 00:25:04.180 "nvme_iov_md": false 00:25:04.180 }, 00:25:04.180 "memory_domains": [ 00:25:04.180 { 00:25:04.180 "dma_device_id": "system", 00:25:04.180 "dma_device_type": 1 00:25:04.180 }, 00:25:04.180 { 00:25:04.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:04.180 "dma_device_type": 2 00:25:04.180 } 00:25:04.180 ], 00:25:04.180 "driver_specific": { 00:25:04.180 "passthru": { 00:25:04.180 "name": "pt1", 00:25:04.180 "base_bdev_name": "malloc1" 00:25:04.180 } 00:25:04.180 } 00:25:04.180 }' 00:25:04.180 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:04.180 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:04.180 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:04.180 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:04.180 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:04.439 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:04.439 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:04.439 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:04.439 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:04.439 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:04.439 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # 
jq .dif_type 00:25:04.439 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:04.439 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:04.439 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:04.439 18:52:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:04.697 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:04.697 "name": "pt2", 00:25:04.697 "aliases": [ 00:25:04.697 "00000000-0000-0000-0000-000000000002" 00:25:04.697 ], 00:25:04.697 "product_name": "passthru", 00:25:04.697 "block_size": 512, 00:25:04.697 "num_blocks": 65536, 00:25:04.697 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:04.697 "assigned_rate_limits": { 00:25:04.697 "rw_ios_per_sec": 0, 00:25:04.697 "rw_mbytes_per_sec": 0, 00:25:04.697 "r_mbytes_per_sec": 0, 00:25:04.697 "w_mbytes_per_sec": 0 00:25:04.697 }, 00:25:04.697 "claimed": true, 00:25:04.697 "claim_type": "exclusive_write", 00:25:04.697 "zoned": false, 00:25:04.697 "supported_io_types": { 00:25:04.697 "read": true, 00:25:04.697 "write": true, 00:25:04.697 "unmap": true, 00:25:04.697 "flush": true, 00:25:04.697 "reset": true, 00:25:04.697 "nvme_admin": false, 00:25:04.697 "nvme_io": false, 00:25:04.697 "nvme_io_md": false, 00:25:04.697 "write_zeroes": true, 00:25:04.697 "zcopy": true, 00:25:04.697 "get_zone_info": false, 00:25:04.697 "zone_management": false, 00:25:04.697 "zone_append": false, 00:25:04.697 "compare": false, 00:25:04.697 "compare_and_write": false, 00:25:04.697 "abort": true, 00:25:04.697 "seek_hole": false, 00:25:04.697 "seek_data": false, 00:25:04.697 "copy": true, 00:25:04.697 "nvme_iov_md": false 00:25:04.697 }, 00:25:04.697 "memory_domains": [ 00:25:04.697 { 00:25:04.697 "dma_device_id": "system", 00:25:04.697 "dma_device_type": 1 00:25:04.697 }, 00:25:04.697 { 00:25:04.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:04.697 "dma_device_type": 2 00:25:04.697 } 00:25:04.697 ], 00:25:04.697 "driver_specific": { 00:25:04.697 "passthru": { 00:25:04.697 "name": "pt2", 00:25:04.697 "base_bdev_name": "malloc2" 00:25:04.697 } 00:25:04.697 } 00:25:04.697 }' 00:25:04.697 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:04.697 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:04.697 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:04.697 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:04.697 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:04.697 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:04.697 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:04.956 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:04.956 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:04.956 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:04.956 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:04.956 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:04.956 18:52:05 
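
The block_size/md_size/md_interleave/dif_type checks traced above and below implement verify_raid_bdev_properties: the doubled jq calls suggest each configured base bdev's geometry is compared against the raid volume's. A rough bash equivalent, with the jq filters and bdev names taken from this trace and the $rpc/$raid/$base variable names assumed:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  raid=$($rpc bdev_get_bdevs -b raid_bdev1 | jq '.[]')
  names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<<"$raid")
  for name in $names; do                                    # pt1 pt2 pt3 pt4 in this run
      base=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
      for field in .block_size .md_size .md_interleave .dif_type; do
          [[ $(jq "$field" <<<"$raid") == "$(jq "$field" <<<"$base")" ]] || exit 1
      done
  done
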
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:04.956 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:04.956 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:25:05.214 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:05.214 "name": "pt3", 00:25:05.214 "aliases": [ 00:25:05.214 "00000000-0000-0000-0000-000000000003" 00:25:05.214 ], 00:25:05.214 "product_name": "passthru", 00:25:05.214 "block_size": 512, 00:25:05.214 "num_blocks": 65536, 00:25:05.214 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:05.214 "assigned_rate_limits": { 00:25:05.214 "rw_ios_per_sec": 0, 00:25:05.214 "rw_mbytes_per_sec": 0, 00:25:05.214 "r_mbytes_per_sec": 0, 00:25:05.214 "w_mbytes_per_sec": 0 00:25:05.214 }, 00:25:05.214 "claimed": true, 00:25:05.214 "claim_type": "exclusive_write", 00:25:05.214 "zoned": false, 00:25:05.214 "supported_io_types": { 00:25:05.214 "read": true, 00:25:05.214 "write": true, 00:25:05.214 "unmap": true, 00:25:05.214 "flush": true, 00:25:05.214 "reset": true, 00:25:05.214 "nvme_admin": false, 00:25:05.214 "nvme_io": false, 00:25:05.214 "nvme_io_md": false, 00:25:05.214 "write_zeroes": true, 00:25:05.214 "zcopy": true, 00:25:05.214 "get_zone_info": false, 00:25:05.214 "zone_management": false, 00:25:05.214 "zone_append": false, 00:25:05.214 "compare": false, 00:25:05.214 "compare_and_write": false, 00:25:05.214 "abort": true, 00:25:05.214 "seek_hole": false, 00:25:05.214 "seek_data": false, 00:25:05.214 "copy": true, 00:25:05.214 "nvme_iov_md": false 00:25:05.214 }, 00:25:05.214 "memory_domains": [ 00:25:05.214 { 00:25:05.214 "dma_device_id": "system", 00:25:05.214 "dma_device_type": 1 00:25:05.214 }, 00:25:05.214 { 00:25:05.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.214 "dma_device_type": 2 00:25:05.214 } 00:25:05.214 ], 00:25:05.214 "driver_specific": { 00:25:05.214 "passthru": { 00:25:05.214 "name": "pt3", 00:25:05.214 "base_bdev_name": "malloc3" 00:25:05.214 } 00:25:05.214 } 00:25:05.214 }' 00:25:05.214 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:05.214 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:05.214 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:05.214 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:05.472 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:05.472 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:05.472 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:05.472 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:05.472 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:05.472 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:05.472 18:52:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:05.472 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:05.472 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:05.472 18:52:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:25:05.472 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:05.730 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:05.730 "name": "pt4", 00:25:05.730 "aliases": [ 00:25:05.730 "00000000-0000-0000-0000-000000000004" 00:25:05.730 ], 00:25:05.730 "product_name": "passthru", 00:25:05.730 "block_size": 512, 00:25:05.730 "num_blocks": 65536, 00:25:05.730 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:05.730 "assigned_rate_limits": { 00:25:05.730 "rw_ios_per_sec": 0, 00:25:05.730 "rw_mbytes_per_sec": 0, 00:25:05.730 "r_mbytes_per_sec": 0, 00:25:05.730 "w_mbytes_per_sec": 0 00:25:05.730 }, 00:25:05.730 "claimed": true, 00:25:05.730 "claim_type": "exclusive_write", 00:25:05.730 "zoned": false, 00:25:05.730 "supported_io_types": { 00:25:05.730 "read": true, 00:25:05.730 "write": true, 00:25:05.730 "unmap": true, 00:25:05.730 "flush": true, 00:25:05.730 "reset": true, 00:25:05.730 "nvme_admin": false, 00:25:05.730 "nvme_io": false, 00:25:05.730 "nvme_io_md": false, 00:25:05.730 "write_zeroes": true, 00:25:05.730 "zcopy": true, 00:25:05.730 "get_zone_info": false, 00:25:05.730 "zone_management": false, 00:25:05.730 "zone_append": false, 00:25:05.730 "compare": false, 00:25:05.730 "compare_and_write": false, 00:25:05.730 "abort": true, 00:25:05.730 "seek_hole": false, 00:25:05.730 "seek_data": false, 00:25:05.730 "copy": true, 00:25:05.730 "nvme_iov_md": false 00:25:05.730 }, 00:25:05.730 "memory_domains": [ 00:25:05.730 { 00:25:05.730 "dma_device_id": "system", 00:25:05.730 "dma_device_type": 1 00:25:05.730 }, 00:25:05.730 { 00:25:05.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.730 "dma_device_type": 2 00:25:05.730 } 00:25:05.730 ], 00:25:05.730 "driver_specific": { 00:25:05.730 "passthru": { 00:25:05.730 "name": "pt4", 00:25:05.730 "base_bdev_name": "malloc4" 00:25:05.730 } 00:25:05.730 } 00:25:05.730 }' 00:25:05.730 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:05.730 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:05.730 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:05.730 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:05.988 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:05.988 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:05.988 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:05.988 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:05.988 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:05.988 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:05.988 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:05.988 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:05.988 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:05.988 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:25:06.246 [2024-07-25 
18:52:06.775386] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:06.246 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 0db38e49-373a-4341-a565-675a3e26a23a '!=' 0db38e49-373a-4341-a565-675a3e26a23a ']' 00:25:06.246 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:25:06.246 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:06.246 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:06.246 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 138997 00:25:06.246 18:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 138997 ']' 00:25:06.246 18:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 138997 00:25:06.246 18:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:25:06.246 18:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:06.246 18:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 138997 00:25:06.246 18:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:06.246 18:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:06.504 killing process with pid 138997 00:25:06.504 18:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 138997' 00:25:06.504 18:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 138997 00:25:06.505 [2024-07-25 18:52:06.823489] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:06.505 [2024-07-25 18:52:06.823564] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:06.505 [2024-07-25 18:52:06.823638] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:06.505 [2024-07-25 18:52:06.823647] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:25:06.505 18:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 138997 00:25:06.763 [2024-07-25 18:52:07.167567] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:08.141 ************************************ 00:25:08.141 END TEST raid_superblock_test 00:25:08.141 ************************************ 00:25:08.141 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:25:08.141 00:25:08.141 real 0m15.905s 00:25:08.141 user 0m27.505s 00:25:08.141 sys 0m2.764s 00:25:08.141 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:08.141 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.141 18:52:08 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:25:08.141 18:52:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:08.141 18:52:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:08.141 18:52:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:08.141 ************************************ 00:25:08.141 START TEST raid_read_error_test 00:25:08.141 ************************************ 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test 
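
raid_read_error_test drives its I/O through bdevperf rather than a plain SPDK app: bdevperf is started with -z so the workload stays queued until perform_tests is issued over RPC (as happens later in this trace), the bdev stack is configured over the same socket, and results land in a mktemp log under /raidtest. The command line and log path below are verbatim from the trace; the mktemp naming, output redirection and the polling loop standing in for waitforlisten are assumptions:

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  bdevperf_log=$(mktemp -p /raidtest)                       # /raidtest/tmp.Y9basOrcAo in this run
  $bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
      -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" 2>&1 &
  raid_pid=$!
  # waitforlisten equivalent: block until the UNIX-domain RPC socket answers
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        rpc_get_methods &> /dev/null; do
      sleep 0.1
  done
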
-- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev1 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev2 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev3 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev4 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.Y9basOrcAo 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=139532 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 139532 /var/tmp/spdk-raid.sock 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 139532 ']' 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:08.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:08.141 18:52:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.141 [2024-07-25 18:52:08.515350] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:08.141 [2024-07-25 18:52:08.515582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139532 ] 00:25:08.141 [2024-07-25 18:52:08.702683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.709 [2024-07-25 18:52:09.008078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.709 [2024-07-25 18:52:09.270170] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:08.967 18:52:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:08.967 18:52:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:25:08.967 18:52:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:25:08.967 18:52:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:09.237 BaseBdev1_malloc 00:25:09.237 18:52:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:25:09.527 true 00:25:09.527 18:52:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:09.527 [2024-07-25 18:52:10.087431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:09.527 [2024-07-25 18:52:10.087567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:09.527 [2024-07-25 18:52:10.087610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:25:09.527 [2024-07-25 18:52:10.087639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:09.527 [2024-07-25 18:52:10.090435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:09.527 [2024-07-25 18:52:10.090514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:09.527 BaseBdev1 00:25:09.786 18:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:25:09.786 18:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:09.786 BaseBdev2_malloc 00:25:09.786 18:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:25:10.044 true 00:25:10.044 18:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:10.303 [2024-07-25 18:52:10.678976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:10.303 [2024-07-25 18:52:10.679135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:10.303 [2024-07-25 18:52:10.679180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:10.303 [2024-07-25 18:52:10.679202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:10.303 [2024-07-25 18:52:10.681889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:10.303 [2024-07-25 18:52:10.681951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:10.303 BaseBdev2 00:25:10.303 18:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:25:10.303 18:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:10.562 BaseBdev3_malloc 00:25:10.562 18:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:25:10.821 true 00:25:10.821 18:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:25:10.821 [2024-07-25 18:52:11.334877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:25:10.821 [2024-07-25 18:52:11.335016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:10.821 [2024-07-25 18:52:11.335083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:25:10.821 [2024-07-25 18:52:11.335110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:10.821 [2024-07-25 18:52:11.337758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:10.821 [2024-07-25 18:52:11.337854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:10.821 BaseBdev3 00:25:10.821 18:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:25:10.821 18:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:11.079 BaseBdev4_malloc 00:25:11.337 18:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:25:11.337 true 00:25:11.337 18:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:25:11.595 [2024-07-25 18:52:11.990548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:25:11.595 [2024-07-25 18:52:11.990664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:11.595 [2024-07-25 18:52:11.990731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:25:11.595 [2024-07-25 18:52:11.990759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:11.595 [2024-07-25 18:52:11.993406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:11.595 [2024-07-25 18:52:11.993458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:11.595 BaseBdev4 00:25:11.595 18:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:25:11.595 [2024-07-25 18:52:12.166642] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:11.595 [2024-07-25 18:52:12.169008] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:11.595 [2024-07-25 18:52:12.169096] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:11.595 [2024-07-25 18:52:12.169149] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:11.595 [2024-07-25 18:52:12.169393] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013800 00:25:11.595 [2024-07-25 18:52:12.169405] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:11.595 [2024-07-25 18:52:12.169528] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:11.595 [2024-07-25 18:52:12.169954] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013800 00:25:11.595 [2024-07-25 18:52:12.169965] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013800 00:25:11.595 [2024-07-25 18:52:12.170130] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:11.853 18:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:11.853 18:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:11.853 18:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:11.853 18:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:11.853 18:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:11.853 18:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:11.853 18:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:11.853 18:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:11.853 18:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:11.853 18:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:11.853 18:52:12 bdev_raid.raid_read_error_test -- 
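
Each of the four base bdevs assembled above is a three-layer stack, so errors can be injected underneath the raid module: a malloc backing store, an error bdev on top of it (exposed as EE_<name>), and a passthru bdev that gives the layer the BaseBdevN name the raid expects. Condensed, with every RPC call taken from this trace and only the loop form and $rpc shorthand assumed:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
      $rpc bdev_malloc_create 32 512 -b "${bdev}_malloc"    # backing store
      $rpc bdev_error_create "${bdev}_malloc"               # exposes EE_${bdev}_malloc
      $rpc bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev"
  done
  $rpc bdev_raid_create -z 64 -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s   # -s: with superblock
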
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.853 18:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:11.853 18:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:11.853 "name": "raid_bdev1", 00:25:11.854 "uuid": "15fcd646-eb49-4cbe-aea4-81ee24d8b130", 00:25:11.854 "strip_size_kb": 64, 00:25:11.854 "state": "online", 00:25:11.854 "raid_level": "concat", 00:25:11.854 "superblock": true, 00:25:11.854 "num_base_bdevs": 4, 00:25:11.854 "num_base_bdevs_discovered": 4, 00:25:11.854 "num_base_bdevs_operational": 4, 00:25:11.854 "base_bdevs_list": [ 00:25:11.854 { 00:25:11.854 "name": "BaseBdev1", 00:25:11.854 "uuid": "986e3746-19ca-557a-9020-59e04a5699e7", 00:25:11.854 "is_configured": true, 00:25:11.854 "data_offset": 2048, 00:25:11.854 "data_size": 63488 00:25:11.854 }, 00:25:11.854 { 00:25:11.854 "name": "BaseBdev2", 00:25:11.854 "uuid": "7e161f38-fdce-58eb-a3e7-7127fb01519b", 00:25:11.854 "is_configured": true, 00:25:11.854 "data_offset": 2048, 00:25:11.854 "data_size": 63488 00:25:11.854 }, 00:25:11.854 { 00:25:11.854 "name": "BaseBdev3", 00:25:11.854 "uuid": "a1cdeec5-ed54-5a28-a27f-fe880a4dff9f", 00:25:11.854 "is_configured": true, 00:25:11.854 "data_offset": 2048, 00:25:11.854 "data_size": 63488 00:25:11.854 }, 00:25:11.854 { 00:25:11.854 "name": "BaseBdev4", 00:25:11.854 "uuid": "aa74002f-ec0b-523a-ad25-a5dd4866a3ae", 00:25:11.854 "is_configured": true, 00:25:11.854 "data_offset": 2048, 00:25:11.854 "data_size": 63488 00:25:11.854 } 00:25:11.854 ] 00:25:11.854 }' 00:25:11.854 18:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:11.854 18:52:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:12.419 18:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:25:12.419 18:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:12.677 [2024-07-25 18:52:13.036521] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:13.613 18:52:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:25:13.613 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:25:13.613 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:25:13.613 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:25:13.613 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:13.613 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:13.613 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:13.613 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:13.613 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:13.613 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:13.613 18:52:14 
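
With the concat array online, the queued bdevperf workload is released via perform_tests and a read-type error is then armed on the first base bdev's error layer; once the raid is deleted and bdevperf stopped, the failure rate is read back out of the bdevperf log (the grep/awk pipeline appears further down in this trace, where it yields 0.45 failures/s). A sketch, reusing $bdevperf_log from the setup above, with the backgrounding of perform_tests assumed from the trace order:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/spdk-raid.sock perform_tests &            # release the queued workload
  sleep 1
  $rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure
  # ...after teardown:
  fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
  [[ $fail_per_s != "0.00" ]]                               # injected read errors must surface as failed I/O
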
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:13.613 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:13.613 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:13.613 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:13.613 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.613 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.871 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:13.871 "name": "raid_bdev1", 00:25:13.871 "uuid": "15fcd646-eb49-4cbe-aea4-81ee24d8b130", 00:25:13.871 "strip_size_kb": 64, 00:25:13.871 "state": "online", 00:25:13.871 "raid_level": "concat", 00:25:13.871 "superblock": true, 00:25:13.871 "num_base_bdevs": 4, 00:25:13.871 "num_base_bdevs_discovered": 4, 00:25:13.871 "num_base_bdevs_operational": 4, 00:25:13.871 "base_bdevs_list": [ 00:25:13.871 { 00:25:13.871 "name": "BaseBdev1", 00:25:13.871 "uuid": "986e3746-19ca-557a-9020-59e04a5699e7", 00:25:13.871 "is_configured": true, 00:25:13.871 "data_offset": 2048, 00:25:13.871 "data_size": 63488 00:25:13.871 }, 00:25:13.871 { 00:25:13.871 "name": "BaseBdev2", 00:25:13.871 "uuid": "7e161f38-fdce-58eb-a3e7-7127fb01519b", 00:25:13.871 "is_configured": true, 00:25:13.871 "data_offset": 2048, 00:25:13.871 "data_size": 63488 00:25:13.871 }, 00:25:13.871 { 00:25:13.871 "name": "BaseBdev3", 00:25:13.871 "uuid": "a1cdeec5-ed54-5a28-a27f-fe880a4dff9f", 00:25:13.871 "is_configured": true, 00:25:13.871 "data_offset": 2048, 00:25:13.871 "data_size": 63488 00:25:13.871 }, 00:25:13.871 { 00:25:13.871 "name": "BaseBdev4", 00:25:13.871 "uuid": "aa74002f-ec0b-523a-ad25-a5dd4866a3ae", 00:25:13.871 "is_configured": true, 00:25:13.871 "data_offset": 2048, 00:25:13.871 "data_size": 63488 00:25:13.871 } 00:25:13.871 ] 00:25:13.871 }' 00:25:13.871 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:13.871 18:52:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.437 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:14.696 [2024-07-25 18:52:15.245106] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:14.696 [2024-07-25 18:52:15.245158] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:14.696 [2024-07-25 18:52:15.247742] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:14.696 [2024-07-25 18:52:15.247799] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:14.696 [2024-07-25 18:52:15.247845] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:14.696 [2024-07-25 18:52:15.247853] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013800 name raid_bdev1, state offline 00:25:14.696 0 00:25:14.696 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 139532 00:25:14.696 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 139532 ']' 00:25:14.696 18:52:15 
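
verify_raid_bdev_state is re-run here after the injection: because concat is not raid1, expected_num_base_bdevs stays at 4 and the array is simply expected to keep reporting online with all four base bdevs operational. The check reduces to a bdev_raid_get_bdevs query plus jq field asserts, roughly:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r .state                      <<<"$info") == online ]]
  [[ $(jq -r .raid_level                 <<<"$info") == concat ]]
  [[ $(jq -r .strip_size_kb              <<<"$info") == 64 ]]
  [[ $(jq -r .num_base_bdevs_operational <<<"$info") == 4 ]]
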
bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 139532 00:25:14.696 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:25:14.953 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:14.953 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 139532 00:25:14.953 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:14.953 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:14.953 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 139532' 00:25:14.953 killing process with pid 139532 00:25:14.953 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 139532 00:25:14.953 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 139532 00:25:14.953 [2024-07-25 18:52:15.290654] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:15.211 [2024-07-25 18:52:15.646727] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:17.111 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:25:17.111 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.Y9basOrcAo 00:25:17.111 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:25:17.111 ************************************ 00:25:17.111 END TEST raid_read_error_test 00:25:17.111 ************************************ 00:25:17.111 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.45 00:25:17.111 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:25:17.111 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:17.111 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:17.111 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.45 != \0\.\0\0 ]] 00:25:17.111 00:25:17.111 real 0m8.768s 00:25:17.111 user 0m12.606s 00:25:17.111 sys 0m1.365s 00:25:17.111 18:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:17.111 18:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.111 18:52:17 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:25:17.111 18:52:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:17.111 18:52:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:17.111 18:52:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:17.111 ************************************ 00:25:17.111 START TEST raid_write_error_test 00:25:17.111 ************************************ 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( 
i = 1 )) 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev1 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev2 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev3 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev4 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.wqkkcvTudb 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=139755 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 139755 /var/tmp/spdk-raid.sock 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 139755 ']' 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:17.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:17.111 18:52:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.111 [2024-07-25 18:52:17.367322] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:17.111 [2024-07-25 18:52:17.368501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139755 ] 00:25:17.111 [2024-07-25 18:52:17.554208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.369 [2024-07-25 18:52:17.793429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.626 [2024-07-25 18:52:18.067406] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:17.884 18:52:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:17.884 18:52:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:25:17.884 18:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:25:17.884 18:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:18.141 BaseBdev1_malloc 00:25:18.141 18:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:25:18.141 true 00:25:18.399 18:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:18.399 [2024-07-25 18:52:18.951133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:18.399 [2024-07-25 18:52:18.951429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:18.399 [2024-07-25 18:52:18.951519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:25:18.399 [2024-07-25 18:52:18.951618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:18.399 [2024-07-25 18:52:18.956956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:18.399 [2024-07-25 18:52:18.957131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:18.399 BaseBdev1 00:25:18.399 18:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:25:18.399 18:52:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:18.656 BaseBdev2_malloc 00:25:18.656 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:25:18.914 true 00:25:18.914 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:19.172 [2024-07-25 18:52:19.611287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:19.172 [2024-07-25 18:52:19.611609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:19.172 [2024-07-25 18:52:19.611689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:19.172 [2024-07-25 18:52:19.611795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:19.172 [2024-07-25 18:52:19.614474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:19.172 [2024-07-25 18:52:19.614641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:19.172 BaseBdev2 00:25:19.172 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:25:19.172 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:19.431 BaseBdev3_malloc 00:25:19.431 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:25:19.689 true 00:25:19.689 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:25:19.689 [2024-07-25 18:52:20.216396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:25:19.689 [2024-07-25 18:52:20.216711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:19.689 [2024-07-25 18:52:20.216787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:25:19.689 [2024-07-25 18:52:20.216897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:19.689 [2024-07-25 18:52:20.219589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:19.689 [2024-07-25 18:52:20.219755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:19.689 BaseBdev3 00:25:19.689 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:25:19.689 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:19.947 BaseBdev4_malloc 00:25:19.947 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:25:20.205 true 00:25:20.205 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:25:20.464 [2024-07-25 18:52:20.801343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:25:20.464 [2024-07-25 18:52:20.801651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:20.464 [2024-07-25 18:52:20.801749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000009c80 00:25:20.464 [2024-07-25 18:52:20.801879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:20.464 [2024-07-25 18:52:20.804592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:20.464 [2024-07-25 18:52:20.804754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:20.464 BaseBdev4 00:25:20.464 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:25:20.464 [2024-07-25 18:52:20.989564] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:20.464 [2024-07-25 18:52:20.992088] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:20.464 [2024-07-25 18:52:20.992321] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:20.464 [2024-07-25 18:52:20.992412] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:20.464 [2024-07-25 18:52:20.992717] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013800 00:25:20.464 [2024-07-25 18:52:20.992813] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:20.464 [2024-07-25 18:52:20.992989] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:20.464 [2024-07-25 18:52:20.993425] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013800 00:25:20.464 [2024-07-25 18:52:20.993539] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013800 00:25:20.464 [2024-07-25 18:52:20.993834] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:20.464 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:20.464 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:20.464 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:20.464 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:20.464 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:20.464 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:20.464 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:20.464 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:20.464 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:20.464 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:20.464 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.464 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.723 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:20.723 "name": "raid_bdev1", 00:25:20.723 "uuid": "945d74cb-067a-4dd9-81d4-a9f8bffdee24", 
00:25:20.723 "strip_size_kb": 64, 00:25:20.723 "state": "online", 00:25:20.723 "raid_level": "concat", 00:25:20.723 "superblock": true, 00:25:20.723 "num_base_bdevs": 4, 00:25:20.723 "num_base_bdevs_discovered": 4, 00:25:20.723 "num_base_bdevs_operational": 4, 00:25:20.723 "base_bdevs_list": [ 00:25:20.723 { 00:25:20.723 "name": "BaseBdev1", 00:25:20.723 "uuid": "06d68d5a-8deb-5242-b456-da49ff480065", 00:25:20.723 "is_configured": true, 00:25:20.723 "data_offset": 2048, 00:25:20.723 "data_size": 63488 00:25:20.723 }, 00:25:20.723 { 00:25:20.723 "name": "BaseBdev2", 00:25:20.723 "uuid": "e0e063e6-d2bc-5e6f-a4f9-4e8875b35be0", 00:25:20.723 "is_configured": true, 00:25:20.723 "data_offset": 2048, 00:25:20.723 "data_size": 63488 00:25:20.723 }, 00:25:20.723 { 00:25:20.723 "name": "BaseBdev3", 00:25:20.723 "uuid": "6bf373c6-a16b-5481-9606-4bdd8b00eb98", 00:25:20.723 "is_configured": true, 00:25:20.723 "data_offset": 2048, 00:25:20.723 "data_size": 63488 00:25:20.723 }, 00:25:20.723 { 00:25:20.723 "name": "BaseBdev4", 00:25:20.723 "uuid": "6dd223fd-5a90-5527-8ede-ed72f1f7a164", 00:25:20.723 "is_configured": true, 00:25:20.723 "data_offset": 2048, 00:25:20.723 "data_size": 63488 00:25:20.723 } 00:25:20.723 ] 00:25:20.723 }' 00:25:20.723 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:20.723 18:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.290 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:25:21.290 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:21.290 [2024-07-25 18:52:21.827621] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:22.224 18:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:25:22.483 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:25:22.483 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:25:22.483 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:25:22.483 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:22.483 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:22.483 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:22.483 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:22.483 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:22.483 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:22.483 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:22.483 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:22.483 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:22.483 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:22.483 18:52:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.483 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.741 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:22.741 "name": "raid_bdev1", 00:25:22.741 "uuid": "945d74cb-067a-4dd9-81d4-a9f8bffdee24", 00:25:22.741 "strip_size_kb": 64, 00:25:22.741 "state": "online", 00:25:22.741 "raid_level": "concat", 00:25:22.741 "superblock": true, 00:25:22.741 "num_base_bdevs": 4, 00:25:22.741 "num_base_bdevs_discovered": 4, 00:25:22.741 "num_base_bdevs_operational": 4, 00:25:22.741 "base_bdevs_list": [ 00:25:22.741 { 00:25:22.741 "name": "BaseBdev1", 00:25:22.741 "uuid": "06d68d5a-8deb-5242-b456-da49ff480065", 00:25:22.741 "is_configured": true, 00:25:22.741 "data_offset": 2048, 00:25:22.741 "data_size": 63488 00:25:22.741 }, 00:25:22.741 { 00:25:22.741 "name": "BaseBdev2", 00:25:22.741 "uuid": "e0e063e6-d2bc-5e6f-a4f9-4e8875b35be0", 00:25:22.741 "is_configured": true, 00:25:22.741 "data_offset": 2048, 00:25:22.741 "data_size": 63488 00:25:22.741 }, 00:25:22.741 { 00:25:22.741 "name": "BaseBdev3", 00:25:22.741 "uuid": "6bf373c6-a16b-5481-9606-4bdd8b00eb98", 00:25:22.741 "is_configured": true, 00:25:22.741 "data_offset": 2048, 00:25:22.741 "data_size": 63488 00:25:22.741 }, 00:25:22.741 { 00:25:22.741 "name": "BaseBdev4", 00:25:22.741 "uuid": "6dd223fd-5a90-5527-8ede-ed72f1f7a164", 00:25:22.741 "is_configured": true, 00:25:22.741 "data_offset": 2048, 00:25:22.741 "data_size": 63488 00:25:22.741 } 00:25:22.741 ] 00:25:22.741 }' 00:25:22.741 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:22.741 18:52:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.305 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:23.563 [2024-07-25 18:52:24.024783] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:23.563 [2024-07-25 18:52:24.025100] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:23.563 [2024-07-25 18:52:24.027808] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:23.563 [2024-07-25 18:52:24.027991] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:23.563 [2024-07-25 18:52:24.028072] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:23.563 [2024-07-25 18:52:24.028149] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013800 name raid_bdev1, state offline 00:25:23.563 0 00:25:23.563 18:52:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 139755 00:25:23.563 18:52:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 139755 ']' 00:25:23.563 18:52:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 139755 00:25:23.563 18:52:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:25:23.563 18:52:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:23.563 18:52:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
139755 00:25:23.563 18:52:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:23.563 18:52:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:23.563 18:52:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 139755' 00:25:23.563 killing process with pid 139755 00:25:23.563 18:52:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 139755 00:25:23.563 [2024-07-25 18:52:24.074224] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:23.563 18:52:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 139755 00:25:24.130 [2024-07-25 18:52:24.430915] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:25.505 18:52:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.wqkkcvTudb 00:25:25.505 18:52:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:25:25.505 18:52:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:25:25.505 ************************************ 00:25:25.505 END TEST raid_write_error_test 00:25:25.505 ************************************ 00:25:25.505 18:52:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.46 00:25:25.505 18:52:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:25:25.505 18:52:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:25.505 18:52:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:25.505 18:52:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.46 != \0\.\0\0 ]] 00:25:25.505 00:25:25.505 real 0m8.719s 00:25:25.505 user 0m12.420s 00:25:25.505 sys 0m1.431s 00:25:25.505 18:52:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:25.505 18:52:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.505 18:52:26 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:25:25.505 18:52:26 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:25:25.505 18:52:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:25.505 18:52:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:25.505 18:52:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:25.505 ************************************ 00:25:25.505 START TEST raid_state_function_test 00:25:25.505 ************************************ 00:25:25.505 18:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:25:25.505 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:25:25.505 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:25:25.505 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:25:25.505 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:25.505 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:25.505 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:25.505 18:52:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:25:25.505 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=139963 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 139963' 00:25:25.506 Process raid pid: 139963 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 139963 /var/tmp/spdk-raid.sock 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 139963 ']' 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
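A minimal sketch of the setup sequence the xtrace above performs: bdev_svc is launched against the raid test RPC socket and the harness then blocks until that socket answers. The paths and flags are copied from the log; the polling loop itself is an assumption for illustration and is not the suite's waitforlisten() helper.

# Sketch only: reproduces the startup/wait pattern outside the harness.
SPDK_DIR=/home/vagrant/spdk_repo/spdk            # repo path as seen in the log
SOCK=/var/tmp/spdk-raid.sock                     # RPC socket used by these tests

# Same bdev_svc invocation the log shows
"$SPDK_DIR/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
svc_pid=$!

# Assumed stand-in for waitforlisten: poll until the socket exists and an
# RPC that appears in the log (bdev_raid_get_bdevs) gets an answer.
for _ in $(seq 1 100); do
    if [ -S "$SOCK" ] && "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_raid_get_bdevs all >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done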
00:25:25.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:25.506 18:52:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.764 [2024-07-25 18:52:26.148185] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:25.764 [2024-07-25 18:52:26.148602] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:25.764 [2024-07-25 18:52:26.334779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.023 [2024-07-25 18:52:26.544188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.281 [2024-07-25 18:52:26.734853] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:26.539 18:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:26.539 18:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:25:26.539 18:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:26.798 [2024-07-25 18:52:27.244728] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:26.798 [2024-07-25 18:52:27.245013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:26.798 [2024-07-25 18:52:27.245107] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:26.798 [2024-07-25 18:52:27.245165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:26.798 [2024-07-25 18:52:27.245229] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:26.798 [2024-07-25 18:52:27.245275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:26.798 [2024-07-25 18:52:27.245301] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:26.798 [2024-07-25 18:52:27.245390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:26.798 18:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:26.798 18:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:26.798 18:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:26.798 18:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:26.798 18:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:26.798 18:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:26.798 18:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:26.798 18:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:26.798 18:52:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:26.798 18:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:26.798 18:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.798 18:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:27.056 18:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:27.056 "name": "Existed_Raid", 00:25:27.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:27.056 "strip_size_kb": 0, 00:25:27.056 "state": "configuring", 00:25:27.056 "raid_level": "raid1", 00:25:27.056 "superblock": false, 00:25:27.056 "num_base_bdevs": 4, 00:25:27.056 "num_base_bdevs_discovered": 0, 00:25:27.056 "num_base_bdevs_operational": 4, 00:25:27.056 "base_bdevs_list": [ 00:25:27.056 { 00:25:27.056 "name": "BaseBdev1", 00:25:27.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:27.056 "is_configured": false, 00:25:27.056 "data_offset": 0, 00:25:27.056 "data_size": 0 00:25:27.056 }, 00:25:27.056 { 00:25:27.056 "name": "BaseBdev2", 00:25:27.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:27.056 "is_configured": false, 00:25:27.056 "data_offset": 0, 00:25:27.056 "data_size": 0 00:25:27.056 }, 00:25:27.056 { 00:25:27.056 "name": "BaseBdev3", 00:25:27.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:27.056 "is_configured": false, 00:25:27.056 "data_offset": 0, 00:25:27.056 "data_size": 0 00:25:27.056 }, 00:25:27.056 { 00:25:27.057 "name": "BaseBdev4", 00:25:27.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:27.057 "is_configured": false, 00:25:27.057 "data_offset": 0, 00:25:27.057 "data_size": 0 00:25:27.057 } 00:25:27.057 ] 00:25:27.057 }' 00:25:27.057 18:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:27.057 18:52:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.623 18:52:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:27.882 [2024-07-25 18:52:28.240761] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:27.882 [2024-07-25 18:52:28.241030] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:25:27.882 18:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:27.882 [2024-07-25 18:52:28.420841] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:27.882 [2024-07-25 18:52:28.421121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:27.882 [2024-07-25 18:52:28.421204] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:27.882 [2024-07-25 18:52:28.421286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:27.882 [2024-07-25 18:52:28.421355] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:27.882 [2024-07-25 18:52:28.421422] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:27.882 [2024-07-25 18:52:28.421449] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:27.882 [2024-07-25 18:52:28.421535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:27.882 18:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:28.141 [2024-07-25 18:52:28.640572] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:28.141 BaseBdev1 00:25:28.141 18:52:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:28.141 18:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:28.141 18:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:28.141 18:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:28.141 18:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:28.141 18:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:28.141 18:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:28.399 18:52:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:28.657 [ 00:25:28.658 { 00:25:28.658 "name": "BaseBdev1", 00:25:28.658 "aliases": [ 00:25:28.658 "fa9be75c-5d7f-4bdb-90d2-a31727abb591" 00:25:28.658 ], 00:25:28.658 "product_name": "Malloc disk", 00:25:28.658 "block_size": 512, 00:25:28.658 "num_blocks": 65536, 00:25:28.658 "uuid": "fa9be75c-5d7f-4bdb-90d2-a31727abb591", 00:25:28.658 "assigned_rate_limits": { 00:25:28.658 "rw_ios_per_sec": 0, 00:25:28.658 "rw_mbytes_per_sec": 0, 00:25:28.658 "r_mbytes_per_sec": 0, 00:25:28.658 "w_mbytes_per_sec": 0 00:25:28.658 }, 00:25:28.658 "claimed": true, 00:25:28.658 "claim_type": "exclusive_write", 00:25:28.658 "zoned": false, 00:25:28.658 "supported_io_types": { 00:25:28.658 "read": true, 00:25:28.658 "write": true, 00:25:28.658 "unmap": true, 00:25:28.658 "flush": true, 00:25:28.658 "reset": true, 00:25:28.658 "nvme_admin": false, 00:25:28.658 "nvme_io": false, 00:25:28.658 "nvme_io_md": false, 00:25:28.658 "write_zeroes": true, 00:25:28.658 "zcopy": true, 00:25:28.658 "get_zone_info": false, 00:25:28.658 "zone_management": false, 00:25:28.658 "zone_append": false, 00:25:28.658 "compare": false, 00:25:28.658 "compare_and_write": false, 00:25:28.658 "abort": true, 00:25:28.658 "seek_hole": false, 00:25:28.658 "seek_data": false, 00:25:28.658 "copy": true, 00:25:28.658 "nvme_iov_md": false 00:25:28.658 }, 00:25:28.658 "memory_domains": [ 00:25:28.658 { 00:25:28.658 "dma_device_id": "system", 00:25:28.658 "dma_device_type": 1 00:25:28.658 }, 00:25:28.658 { 00:25:28.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:28.658 "dma_device_type": 2 00:25:28.658 } 00:25:28.658 ], 00:25:28.658 "driver_specific": {} 00:25:28.658 } 00:25:28.658 ] 00:25:28.658 18:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:28.658 18:52:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:28.658 18:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:28.658 18:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:28.658 18:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:28.658 18:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:28.658 18:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:28.658 18:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:28.658 18:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:28.658 18:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:28.658 18:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:28.658 18:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.658 18:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:28.658 18:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:28.658 "name": "Existed_Raid", 00:25:28.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.658 "strip_size_kb": 0, 00:25:28.658 "state": "configuring", 00:25:28.658 "raid_level": "raid1", 00:25:28.658 "superblock": false, 00:25:28.658 "num_base_bdevs": 4, 00:25:28.658 "num_base_bdevs_discovered": 1, 00:25:28.658 "num_base_bdevs_operational": 4, 00:25:28.658 "base_bdevs_list": [ 00:25:28.658 { 00:25:28.658 "name": "BaseBdev1", 00:25:28.658 "uuid": "fa9be75c-5d7f-4bdb-90d2-a31727abb591", 00:25:28.658 "is_configured": true, 00:25:28.658 "data_offset": 0, 00:25:28.658 "data_size": 65536 00:25:28.658 }, 00:25:28.658 { 00:25:28.658 "name": "BaseBdev2", 00:25:28.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.658 "is_configured": false, 00:25:28.658 "data_offset": 0, 00:25:28.658 "data_size": 0 00:25:28.658 }, 00:25:28.658 { 00:25:28.658 "name": "BaseBdev3", 00:25:28.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.658 "is_configured": false, 00:25:28.658 "data_offset": 0, 00:25:28.658 "data_size": 0 00:25:28.658 }, 00:25:28.658 { 00:25:28.658 "name": "BaseBdev4", 00:25:28.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.658 "is_configured": false, 00:25:28.658 "data_offset": 0, 00:25:28.658 "data_size": 0 00:25:28.658 } 00:25:28.658 ] 00:25:28.658 }' 00:25:28.658 18:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:28.658 18:52:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.224 18:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:29.499 [2024-07-25 18:52:29.972877] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:29.499 [2024-07-25 18:52:29.973082] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:25:29.499 
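A condensed, out-of-harness sketch of the configuring-to-online progression the surrounding log walks through: a raid1 volume is declared while its members are still missing, each malloc bdev is claimed by the raid module as it is created, and the state reported by bdev_raid_get_bdevs flips to online once all four members are discovered. The RPC names, arguments and jq filter are taken verbatim from the log; the loop around them is an assumption for illustration.

# Sketch only; not the test's own control flow.
RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'

# Declaring the raid before its members exist leaves it in state "configuring"
$RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    # 32 MiB malloc bdev with 512-byte blocks, matching the log's bdev_malloc_create calls
    $RPC bdev_malloc_create 32 512 -b "$b"
    # The raid module claims the new bdev; re-inspect the volume after each one
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
done

# Tear down, as bdev_raid_delete does at the end of each pass
$RPC bdev_raid_delete Existed_Raid

The test pass in the log additionally deletes and re-creates Existed_Raid between steps; the sketch collapses that into a single create for brevity.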
18:52:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:29.769 [2024-07-25 18:52:30.228971] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:29.769 [2024-07-25 18:52:30.231400] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:29.769 [2024-07-25 18:52:30.231594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:29.769 [2024-07-25 18:52:30.231666] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:29.769 [2024-07-25 18:52:30.231724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:29.769 [2024-07-25 18:52:30.231752] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:29.769 [2024-07-25 18:52:30.231827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:29.769 18:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:25:29.769 18:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:29.769 18:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:29.769 18:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:29.769 18:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:29.769 18:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:29.769 18:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:29.769 18:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:29.769 18:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:29.769 18:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:29.769 18:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:29.769 18:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:29.769 18:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.769 18:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:30.028 18:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:30.028 "name": "Existed_Raid", 00:25:30.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.028 "strip_size_kb": 0, 00:25:30.028 "state": "configuring", 00:25:30.028 "raid_level": "raid1", 00:25:30.028 "superblock": false, 00:25:30.028 "num_base_bdevs": 4, 00:25:30.028 "num_base_bdevs_discovered": 1, 00:25:30.028 "num_base_bdevs_operational": 4, 00:25:30.028 "base_bdevs_list": [ 00:25:30.028 { 00:25:30.028 "name": "BaseBdev1", 00:25:30.028 "uuid": "fa9be75c-5d7f-4bdb-90d2-a31727abb591", 00:25:30.028 "is_configured": true, 00:25:30.028 "data_offset": 0, 00:25:30.028 "data_size": 65536 00:25:30.028 }, 
00:25:30.028 { 00:25:30.028 "name": "BaseBdev2", 00:25:30.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.028 "is_configured": false, 00:25:30.028 "data_offset": 0, 00:25:30.028 "data_size": 0 00:25:30.028 }, 00:25:30.028 { 00:25:30.028 "name": "BaseBdev3", 00:25:30.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.028 "is_configured": false, 00:25:30.028 "data_offset": 0, 00:25:30.028 "data_size": 0 00:25:30.028 }, 00:25:30.028 { 00:25:30.028 "name": "BaseBdev4", 00:25:30.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.028 "is_configured": false, 00:25:30.028 "data_offset": 0, 00:25:30.028 "data_size": 0 00:25:30.028 } 00:25:30.028 ] 00:25:30.028 }' 00:25:30.028 18:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:30.028 18:52:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.594 18:52:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:30.852 [2024-07-25 18:52:31.184171] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:30.852 BaseBdev2 00:25:30.852 18:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:30.852 18:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:30.852 18:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:30.852 18:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:30.852 18:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:30.852 18:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:30.852 18:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:31.111 18:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:31.111 [ 00:25:31.111 { 00:25:31.111 "name": "BaseBdev2", 00:25:31.111 "aliases": [ 00:25:31.111 "4df99c4e-42d2-42c3-9bc7-01ec6182be4e" 00:25:31.111 ], 00:25:31.111 "product_name": "Malloc disk", 00:25:31.111 "block_size": 512, 00:25:31.111 "num_blocks": 65536, 00:25:31.111 "uuid": "4df99c4e-42d2-42c3-9bc7-01ec6182be4e", 00:25:31.111 "assigned_rate_limits": { 00:25:31.111 "rw_ios_per_sec": 0, 00:25:31.111 "rw_mbytes_per_sec": 0, 00:25:31.111 "r_mbytes_per_sec": 0, 00:25:31.111 "w_mbytes_per_sec": 0 00:25:31.111 }, 00:25:31.111 "claimed": true, 00:25:31.111 "claim_type": "exclusive_write", 00:25:31.111 "zoned": false, 00:25:31.111 "supported_io_types": { 00:25:31.111 "read": true, 00:25:31.111 "write": true, 00:25:31.111 "unmap": true, 00:25:31.111 "flush": true, 00:25:31.111 "reset": true, 00:25:31.111 "nvme_admin": false, 00:25:31.111 "nvme_io": false, 00:25:31.111 "nvme_io_md": false, 00:25:31.111 "write_zeroes": true, 00:25:31.111 "zcopy": true, 00:25:31.111 "get_zone_info": false, 00:25:31.111 "zone_management": false, 00:25:31.111 "zone_append": false, 00:25:31.111 "compare": false, 00:25:31.111 "compare_and_write": false, 00:25:31.111 "abort": true, 00:25:31.111 "seek_hole": false, 00:25:31.111 "seek_data": false, 
00:25:31.111 "copy": true, 00:25:31.111 "nvme_iov_md": false 00:25:31.111 }, 00:25:31.111 "memory_domains": [ 00:25:31.111 { 00:25:31.111 "dma_device_id": "system", 00:25:31.111 "dma_device_type": 1 00:25:31.111 }, 00:25:31.111 { 00:25:31.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.111 "dma_device_type": 2 00:25:31.111 } 00:25:31.111 ], 00:25:31.111 "driver_specific": {} 00:25:31.111 } 00:25:31.111 ] 00:25:31.111 18:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:31.111 18:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:31.111 18:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:31.111 18:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:31.111 18:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:31.111 18:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:31.111 18:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:31.111 18:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:31.111 18:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:31.111 18:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:31.111 18:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:31.111 18:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:31.111 18:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:31.111 18:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:31.111 18:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:31.369 18:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:31.369 "name": "Existed_Raid", 00:25:31.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.369 "strip_size_kb": 0, 00:25:31.369 "state": "configuring", 00:25:31.369 "raid_level": "raid1", 00:25:31.369 "superblock": false, 00:25:31.369 "num_base_bdevs": 4, 00:25:31.369 "num_base_bdevs_discovered": 2, 00:25:31.369 "num_base_bdevs_operational": 4, 00:25:31.369 "base_bdevs_list": [ 00:25:31.369 { 00:25:31.369 "name": "BaseBdev1", 00:25:31.369 "uuid": "fa9be75c-5d7f-4bdb-90d2-a31727abb591", 00:25:31.369 "is_configured": true, 00:25:31.369 "data_offset": 0, 00:25:31.369 "data_size": 65536 00:25:31.369 }, 00:25:31.369 { 00:25:31.369 "name": "BaseBdev2", 00:25:31.369 "uuid": "4df99c4e-42d2-42c3-9bc7-01ec6182be4e", 00:25:31.369 "is_configured": true, 00:25:31.369 "data_offset": 0, 00:25:31.369 "data_size": 65536 00:25:31.369 }, 00:25:31.369 { 00:25:31.369 "name": "BaseBdev3", 00:25:31.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.369 "is_configured": false, 00:25:31.369 "data_offset": 0, 00:25:31.369 "data_size": 0 00:25:31.369 }, 00:25:31.369 { 00:25:31.369 "name": "BaseBdev4", 00:25:31.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.369 "is_configured": false, 00:25:31.369 "data_offset": 0, 
00:25:31.369 "data_size": 0 00:25:31.369 } 00:25:31.369 ] 00:25:31.369 }' 00:25:31.370 18:52:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:31.370 18:52:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.936 18:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:32.194 [2024-07-25 18:52:32.628871] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:32.194 BaseBdev3 00:25:32.194 18:52:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:25:32.194 18:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:25:32.194 18:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:32.194 18:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:32.194 18:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:32.194 18:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:32.194 18:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:32.453 18:52:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:32.711 [ 00:25:32.711 { 00:25:32.711 "name": "BaseBdev3", 00:25:32.711 "aliases": [ 00:25:32.711 "7b85c6b6-7d15-4feb-af3f-ea22ea07a237" 00:25:32.711 ], 00:25:32.711 "product_name": "Malloc disk", 00:25:32.711 "block_size": 512, 00:25:32.711 "num_blocks": 65536, 00:25:32.711 "uuid": "7b85c6b6-7d15-4feb-af3f-ea22ea07a237", 00:25:32.711 "assigned_rate_limits": { 00:25:32.712 "rw_ios_per_sec": 0, 00:25:32.712 "rw_mbytes_per_sec": 0, 00:25:32.712 "r_mbytes_per_sec": 0, 00:25:32.712 "w_mbytes_per_sec": 0 00:25:32.712 }, 00:25:32.712 "claimed": true, 00:25:32.712 "claim_type": "exclusive_write", 00:25:32.712 "zoned": false, 00:25:32.712 "supported_io_types": { 00:25:32.712 "read": true, 00:25:32.712 "write": true, 00:25:32.712 "unmap": true, 00:25:32.712 "flush": true, 00:25:32.712 "reset": true, 00:25:32.712 "nvme_admin": false, 00:25:32.712 "nvme_io": false, 00:25:32.712 "nvme_io_md": false, 00:25:32.712 "write_zeroes": true, 00:25:32.712 "zcopy": true, 00:25:32.712 "get_zone_info": false, 00:25:32.712 "zone_management": false, 00:25:32.712 "zone_append": false, 00:25:32.712 "compare": false, 00:25:32.712 "compare_and_write": false, 00:25:32.712 "abort": true, 00:25:32.712 "seek_hole": false, 00:25:32.712 "seek_data": false, 00:25:32.712 "copy": true, 00:25:32.712 "nvme_iov_md": false 00:25:32.712 }, 00:25:32.712 "memory_domains": [ 00:25:32.712 { 00:25:32.712 "dma_device_id": "system", 00:25:32.712 "dma_device_type": 1 00:25:32.712 }, 00:25:32.712 { 00:25:32.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.712 "dma_device_type": 2 00:25:32.712 } 00:25:32.712 ], 00:25:32.712 "driver_specific": {} 00:25:32.712 } 00:25:32.712 ] 00:25:32.712 18:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:32.712 18:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:32.712 
18:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:32.712 18:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:32.712 18:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:32.712 18:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:32.712 18:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:32.712 18:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:32.712 18:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:32.712 18:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:32.712 18:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:32.712 18:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:32.712 18:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:32.712 18:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.712 18:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:32.712 18:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:32.712 "name": "Existed_Raid", 00:25:32.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.712 "strip_size_kb": 0, 00:25:32.712 "state": "configuring", 00:25:32.712 "raid_level": "raid1", 00:25:32.712 "superblock": false, 00:25:32.712 "num_base_bdevs": 4, 00:25:32.712 "num_base_bdevs_discovered": 3, 00:25:32.712 "num_base_bdevs_operational": 4, 00:25:32.712 "base_bdevs_list": [ 00:25:32.712 { 00:25:32.712 "name": "BaseBdev1", 00:25:32.712 "uuid": "fa9be75c-5d7f-4bdb-90d2-a31727abb591", 00:25:32.712 "is_configured": true, 00:25:32.712 "data_offset": 0, 00:25:32.712 "data_size": 65536 00:25:32.712 }, 00:25:32.712 { 00:25:32.712 "name": "BaseBdev2", 00:25:32.712 "uuid": "4df99c4e-42d2-42c3-9bc7-01ec6182be4e", 00:25:32.712 "is_configured": true, 00:25:32.712 "data_offset": 0, 00:25:32.712 "data_size": 65536 00:25:32.712 }, 00:25:32.712 { 00:25:32.712 "name": "BaseBdev3", 00:25:32.712 "uuid": "7b85c6b6-7d15-4feb-af3f-ea22ea07a237", 00:25:32.712 "is_configured": true, 00:25:32.712 "data_offset": 0, 00:25:32.712 "data_size": 65536 00:25:32.712 }, 00:25:32.712 { 00:25:32.712 "name": "BaseBdev4", 00:25:32.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.712 "is_configured": false, 00:25:32.712 "data_offset": 0, 00:25:32.712 "data_size": 0 00:25:32.712 } 00:25:32.712 ] 00:25:32.712 }' 00:25:32.712 18:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:32.712 18:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.276 18:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:33.534 [2024-07-25 18:52:33.929428] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:33.535 [2024-07-25 
18:52:33.929750] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:25:33.535 [2024-07-25 18:52:33.929813] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:33.535 [2024-07-25 18:52:33.930049] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:25:33.535 [2024-07-25 18:52:33.930538] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:25:33.535 [2024-07-25 18:52:33.930648] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:25:33.535 [2024-07-25 18:52:33.931042] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:33.535 BaseBdev4 00:25:33.535 18:52:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:25:33.535 18:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:25:33.535 18:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:33.535 18:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:33.535 18:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:33.535 18:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:33.535 18:52:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:33.793 18:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:33.793 [ 00:25:33.793 { 00:25:33.793 "name": "BaseBdev4", 00:25:33.793 "aliases": [ 00:25:33.793 "7b71ee97-1b3b-45de-97af-fd987ee8bb08" 00:25:33.793 ], 00:25:33.793 "product_name": "Malloc disk", 00:25:33.793 "block_size": 512, 00:25:33.793 "num_blocks": 65536, 00:25:33.793 "uuid": "7b71ee97-1b3b-45de-97af-fd987ee8bb08", 00:25:33.793 "assigned_rate_limits": { 00:25:33.793 "rw_ios_per_sec": 0, 00:25:33.793 "rw_mbytes_per_sec": 0, 00:25:33.793 "r_mbytes_per_sec": 0, 00:25:33.793 "w_mbytes_per_sec": 0 00:25:33.793 }, 00:25:33.793 "claimed": true, 00:25:33.793 "claim_type": "exclusive_write", 00:25:33.793 "zoned": false, 00:25:33.793 "supported_io_types": { 00:25:33.793 "read": true, 00:25:33.793 "write": true, 00:25:33.793 "unmap": true, 00:25:33.793 "flush": true, 00:25:33.793 "reset": true, 00:25:33.793 "nvme_admin": false, 00:25:33.793 "nvme_io": false, 00:25:33.793 "nvme_io_md": false, 00:25:33.793 "write_zeroes": true, 00:25:33.793 "zcopy": true, 00:25:33.793 "get_zone_info": false, 00:25:33.793 "zone_management": false, 00:25:33.793 "zone_append": false, 00:25:33.793 "compare": false, 00:25:33.793 "compare_and_write": false, 00:25:33.793 "abort": true, 00:25:33.793 "seek_hole": false, 00:25:33.793 "seek_data": false, 00:25:33.793 "copy": true, 00:25:33.793 "nvme_iov_md": false 00:25:33.793 }, 00:25:33.793 "memory_domains": [ 00:25:33.793 { 00:25:33.793 "dma_device_id": "system", 00:25:33.793 "dma_device_type": 1 00:25:33.793 }, 00:25:33.793 { 00:25:33.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.793 "dma_device_type": 2 00:25:33.793 } 00:25:33.793 ], 00:25:33.793 "driver_specific": {} 00:25:33.793 } 00:25:33.793 ] 00:25:33.793 18:52:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:25:33.793 18:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:33.793 18:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:33.793 18:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:33.793 18:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:33.793 18:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:33.793 18:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:33.793 18:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:33.793 18:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:33.793 18:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:33.793 18:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:33.793 18:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:33.793 18:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:33.793 18:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:33.793 18:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.051 18:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:34.051 "name": "Existed_Raid", 00:25:34.051 "uuid": "46be3ba9-391f-47e3-9600-16ca236b4f9d", 00:25:34.051 "strip_size_kb": 0, 00:25:34.051 "state": "online", 00:25:34.051 "raid_level": "raid1", 00:25:34.051 "superblock": false, 00:25:34.051 "num_base_bdevs": 4, 00:25:34.051 "num_base_bdevs_discovered": 4, 00:25:34.051 "num_base_bdevs_operational": 4, 00:25:34.051 "base_bdevs_list": [ 00:25:34.051 { 00:25:34.051 "name": "BaseBdev1", 00:25:34.051 "uuid": "fa9be75c-5d7f-4bdb-90d2-a31727abb591", 00:25:34.051 "is_configured": true, 00:25:34.051 "data_offset": 0, 00:25:34.051 "data_size": 65536 00:25:34.051 }, 00:25:34.051 { 00:25:34.051 "name": "BaseBdev2", 00:25:34.051 "uuid": "4df99c4e-42d2-42c3-9bc7-01ec6182be4e", 00:25:34.051 "is_configured": true, 00:25:34.051 "data_offset": 0, 00:25:34.051 "data_size": 65536 00:25:34.051 }, 00:25:34.051 { 00:25:34.051 "name": "BaseBdev3", 00:25:34.051 "uuid": "7b85c6b6-7d15-4feb-af3f-ea22ea07a237", 00:25:34.051 "is_configured": true, 00:25:34.051 "data_offset": 0, 00:25:34.051 "data_size": 65536 00:25:34.051 }, 00:25:34.051 { 00:25:34.051 "name": "BaseBdev4", 00:25:34.051 "uuid": "7b71ee97-1b3b-45de-97af-fd987ee8bb08", 00:25:34.051 "is_configured": true, 00:25:34.051 "data_offset": 0, 00:25:34.051 "data_size": 65536 00:25:34.051 } 00:25:34.051 ] 00:25:34.051 }' 00:25:34.051 18:52:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:34.051 18:52:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.618 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:34.618 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 
-- # local raid_bdev_name=Existed_Raid 00:25:34.618 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:34.618 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:34.618 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:34.618 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:34.618 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:34.618 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:34.876 [2024-07-25 18:52:35.277993] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:34.877 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:34.877 "name": "Existed_Raid", 00:25:34.877 "aliases": [ 00:25:34.877 "46be3ba9-391f-47e3-9600-16ca236b4f9d" 00:25:34.877 ], 00:25:34.877 "product_name": "Raid Volume", 00:25:34.877 "block_size": 512, 00:25:34.877 "num_blocks": 65536, 00:25:34.877 "uuid": "46be3ba9-391f-47e3-9600-16ca236b4f9d", 00:25:34.877 "assigned_rate_limits": { 00:25:34.877 "rw_ios_per_sec": 0, 00:25:34.877 "rw_mbytes_per_sec": 0, 00:25:34.877 "r_mbytes_per_sec": 0, 00:25:34.877 "w_mbytes_per_sec": 0 00:25:34.877 }, 00:25:34.877 "claimed": false, 00:25:34.877 "zoned": false, 00:25:34.877 "supported_io_types": { 00:25:34.877 "read": true, 00:25:34.877 "write": true, 00:25:34.877 "unmap": false, 00:25:34.877 "flush": false, 00:25:34.877 "reset": true, 00:25:34.877 "nvme_admin": false, 00:25:34.877 "nvme_io": false, 00:25:34.877 "nvme_io_md": false, 00:25:34.877 "write_zeroes": true, 00:25:34.877 "zcopy": false, 00:25:34.877 "get_zone_info": false, 00:25:34.877 "zone_management": false, 00:25:34.877 "zone_append": false, 00:25:34.877 "compare": false, 00:25:34.877 "compare_and_write": false, 00:25:34.877 "abort": false, 00:25:34.877 "seek_hole": false, 00:25:34.877 "seek_data": false, 00:25:34.877 "copy": false, 00:25:34.877 "nvme_iov_md": false 00:25:34.877 }, 00:25:34.877 "memory_domains": [ 00:25:34.877 { 00:25:34.877 "dma_device_id": "system", 00:25:34.877 "dma_device_type": 1 00:25:34.877 }, 00:25:34.877 { 00:25:34.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.877 "dma_device_type": 2 00:25:34.877 }, 00:25:34.877 { 00:25:34.877 "dma_device_id": "system", 00:25:34.877 "dma_device_type": 1 00:25:34.877 }, 00:25:34.877 { 00:25:34.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.877 "dma_device_type": 2 00:25:34.877 }, 00:25:34.877 { 00:25:34.877 "dma_device_id": "system", 00:25:34.877 "dma_device_type": 1 00:25:34.877 }, 00:25:34.877 { 00:25:34.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.877 "dma_device_type": 2 00:25:34.877 }, 00:25:34.877 { 00:25:34.877 "dma_device_id": "system", 00:25:34.877 "dma_device_type": 1 00:25:34.877 }, 00:25:34.877 { 00:25:34.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.877 "dma_device_type": 2 00:25:34.877 } 00:25:34.877 ], 00:25:34.877 "driver_specific": { 00:25:34.877 "raid": { 00:25:34.877 "uuid": "46be3ba9-391f-47e3-9600-16ca236b4f9d", 00:25:34.877 "strip_size_kb": 0, 00:25:34.877 "state": "online", 00:25:34.877 "raid_level": "raid1", 00:25:34.877 "superblock": false, 00:25:34.877 "num_base_bdevs": 4, 00:25:34.877 "num_base_bdevs_discovered": 4, 00:25:34.877 "num_base_bdevs_operational": 4, 
00:25:34.877 "base_bdevs_list": [ 00:25:34.877 { 00:25:34.877 "name": "BaseBdev1", 00:25:34.877 "uuid": "fa9be75c-5d7f-4bdb-90d2-a31727abb591", 00:25:34.877 "is_configured": true, 00:25:34.877 "data_offset": 0, 00:25:34.877 "data_size": 65536 00:25:34.877 }, 00:25:34.877 { 00:25:34.877 "name": "BaseBdev2", 00:25:34.877 "uuid": "4df99c4e-42d2-42c3-9bc7-01ec6182be4e", 00:25:34.877 "is_configured": true, 00:25:34.877 "data_offset": 0, 00:25:34.877 "data_size": 65536 00:25:34.877 }, 00:25:34.877 { 00:25:34.877 "name": "BaseBdev3", 00:25:34.877 "uuid": "7b85c6b6-7d15-4feb-af3f-ea22ea07a237", 00:25:34.877 "is_configured": true, 00:25:34.877 "data_offset": 0, 00:25:34.877 "data_size": 65536 00:25:34.877 }, 00:25:34.877 { 00:25:34.877 "name": "BaseBdev4", 00:25:34.877 "uuid": "7b71ee97-1b3b-45de-97af-fd987ee8bb08", 00:25:34.877 "is_configured": true, 00:25:34.877 "data_offset": 0, 00:25:34.877 "data_size": 65536 00:25:34.877 } 00:25:34.877 ] 00:25:34.877 } 00:25:34.877 } 00:25:34.877 }' 00:25:34.877 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:34.877 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:34.877 BaseBdev2 00:25:34.877 BaseBdev3 00:25:34.877 BaseBdev4' 00:25:34.877 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:34.877 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:34.877 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:35.136 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:35.136 "name": "BaseBdev1", 00:25:35.136 "aliases": [ 00:25:35.136 "fa9be75c-5d7f-4bdb-90d2-a31727abb591" 00:25:35.136 ], 00:25:35.136 "product_name": "Malloc disk", 00:25:35.136 "block_size": 512, 00:25:35.136 "num_blocks": 65536, 00:25:35.136 "uuid": "fa9be75c-5d7f-4bdb-90d2-a31727abb591", 00:25:35.136 "assigned_rate_limits": { 00:25:35.136 "rw_ios_per_sec": 0, 00:25:35.136 "rw_mbytes_per_sec": 0, 00:25:35.136 "r_mbytes_per_sec": 0, 00:25:35.136 "w_mbytes_per_sec": 0 00:25:35.136 }, 00:25:35.136 "claimed": true, 00:25:35.136 "claim_type": "exclusive_write", 00:25:35.136 "zoned": false, 00:25:35.136 "supported_io_types": { 00:25:35.136 "read": true, 00:25:35.136 "write": true, 00:25:35.136 "unmap": true, 00:25:35.136 "flush": true, 00:25:35.136 "reset": true, 00:25:35.136 "nvme_admin": false, 00:25:35.136 "nvme_io": false, 00:25:35.136 "nvme_io_md": false, 00:25:35.136 "write_zeroes": true, 00:25:35.136 "zcopy": true, 00:25:35.136 "get_zone_info": false, 00:25:35.136 "zone_management": false, 00:25:35.136 "zone_append": false, 00:25:35.136 "compare": false, 00:25:35.136 "compare_and_write": false, 00:25:35.136 "abort": true, 00:25:35.136 "seek_hole": false, 00:25:35.136 "seek_data": false, 00:25:35.136 "copy": true, 00:25:35.136 "nvme_iov_md": false 00:25:35.136 }, 00:25:35.136 "memory_domains": [ 00:25:35.136 { 00:25:35.136 "dma_device_id": "system", 00:25:35.136 "dma_device_type": 1 00:25:35.136 }, 00:25:35.136 { 00:25:35.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:35.136 "dma_device_type": 2 00:25:35.136 } 00:25:35.136 ], 00:25:35.136 "driver_specific": {} 00:25:35.136 }' 00:25:35.136 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 
-- # jq .block_size 00:25:35.136 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:35.136 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:35.136 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:35.136 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:35.394 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:35.394 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:35.394 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:35.394 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:35.394 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:35.394 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:35.394 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:35.394 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:35.394 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:35.394 18:52:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:35.652 18:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:35.652 "name": "BaseBdev2", 00:25:35.652 "aliases": [ 00:25:35.652 "4df99c4e-42d2-42c3-9bc7-01ec6182be4e" 00:25:35.652 ], 00:25:35.652 "product_name": "Malloc disk", 00:25:35.652 "block_size": 512, 00:25:35.652 "num_blocks": 65536, 00:25:35.652 "uuid": "4df99c4e-42d2-42c3-9bc7-01ec6182be4e", 00:25:35.652 "assigned_rate_limits": { 00:25:35.652 "rw_ios_per_sec": 0, 00:25:35.652 "rw_mbytes_per_sec": 0, 00:25:35.652 "r_mbytes_per_sec": 0, 00:25:35.652 "w_mbytes_per_sec": 0 00:25:35.652 }, 00:25:35.652 "claimed": true, 00:25:35.652 "claim_type": "exclusive_write", 00:25:35.652 "zoned": false, 00:25:35.652 "supported_io_types": { 00:25:35.652 "read": true, 00:25:35.652 "write": true, 00:25:35.652 "unmap": true, 00:25:35.652 "flush": true, 00:25:35.652 "reset": true, 00:25:35.652 "nvme_admin": false, 00:25:35.652 "nvme_io": false, 00:25:35.652 "nvme_io_md": false, 00:25:35.652 "write_zeroes": true, 00:25:35.652 "zcopy": true, 00:25:35.652 "get_zone_info": false, 00:25:35.652 "zone_management": false, 00:25:35.652 "zone_append": false, 00:25:35.652 "compare": false, 00:25:35.652 "compare_and_write": false, 00:25:35.652 "abort": true, 00:25:35.652 "seek_hole": false, 00:25:35.652 "seek_data": false, 00:25:35.652 "copy": true, 00:25:35.652 "nvme_iov_md": false 00:25:35.652 }, 00:25:35.652 "memory_domains": [ 00:25:35.652 { 00:25:35.652 "dma_device_id": "system", 00:25:35.652 "dma_device_type": 1 00:25:35.652 }, 00:25:35.652 { 00:25:35.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:35.652 "dma_device_type": 2 00:25:35.652 } 00:25:35.652 ], 00:25:35.652 "driver_specific": {} 00:25:35.652 }' 00:25:35.652 18:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:35.910 18:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:35.910 18:52:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:35.910 18:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:35.910 18:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:35.910 18:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:35.910 18:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:35.910 18:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:35.910 18:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:35.910 18:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:36.168 18:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:36.168 18:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:36.168 18:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:36.168 18:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:36.168 18:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:36.426 18:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:36.426 "name": "BaseBdev3", 00:25:36.426 "aliases": [ 00:25:36.426 "7b85c6b6-7d15-4feb-af3f-ea22ea07a237" 00:25:36.426 ], 00:25:36.426 "product_name": "Malloc disk", 00:25:36.426 "block_size": 512, 00:25:36.426 "num_blocks": 65536, 00:25:36.426 "uuid": "7b85c6b6-7d15-4feb-af3f-ea22ea07a237", 00:25:36.426 "assigned_rate_limits": { 00:25:36.426 "rw_ios_per_sec": 0, 00:25:36.426 "rw_mbytes_per_sec": 0, 00:25:36.427 "r_mbytes_per_sec": 0, 00:25:36.427 "w_mbytes_per_sec": 0 00:25:36.427 }, 00:25:36.427 "claimed": true, 00:25:36.427 "claim_type": "exclusive_write", 00:25:36.427 "zoned": false, 00:25:36.427 "supported_io_types": { 00:25:36.427 "read": true, 00:25:36.427 "write": true, 00:25:36.427 "unmap": true, 00:25:36.427 "flush": true, 00:25:36.427 "reset": true, 00:25:36.427 "nvme_admin": false, 00:25:36.427 "nvme_io": false, 00:25:36.427 "nvme_io_md": false, 00:25:36.427 "write_zeroes": true, 00:25:36.427 "zcopy": true, 00:25:36.427 "get_zone_info": false, 00:25:36.427 "zone_management": false, 00:25:36.427 "zone_append": false, 00:25:36.427 "compare": false, 00:25:36.427 "compare_and_write": false, 00:25:36.427 "abort": true, 00:25:36.427 "seek_hole": false, 00:25:36.427 "seek_data": false, 00:25:36.427 "copy": true, 00:25:36.427 "nvme_iov_md": false 00:25:36.427 }, 00:25:36.427 "memory_domains": [ 00:25:36.427 { 00:25:36.427 "dma_device_id": "system", 00:25:36.427 "dma_device_type": 1 00:25:36.427 }, 00:25:36.427 { 00:25:36.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:36.427 "dma_device_type": 2 00:25:36.427 } 00:25:36.427 ], 00:25:36.427 "driver_specific": {} 00:25:36.427 }' 00:25:36.427 18:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:36.427 18:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:36.427 18:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:36.427 18:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:36.427 18:52:36 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:36.427 18:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:36.427 18:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:36.427 18:52:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:36.685 18:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:36.685 18:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:36.685 18:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:36.685 18:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:36.685 18:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:36.685 18:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:36.685 18:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:36.944 18:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:36.944 "name": "BaseBdev4", 00:25:36.944 "aliases": [ 00:25:36.944 "7b71ee97-1b3b-45de-97af-fd987ee8bb08" 00:25:36.944 ], 00:25:36.944 "product_name": "Malloc disk", 00:25:36.944 "block_size": 512, 00:25:36.944 "num_blocks": 65536, 00:25:36.944 "uuid": "7b71ee97-1b3b-45de-97af-fd987ee8bb08", 00:25:36.944 "assigned_rate_limits": { 00:25:36.944 "rw_ios_per_sec": 0, 00:25:36.944 "rw_mbytes_per_sec": 0, 00:25:36.944 "r_mbytes_per_sec": 0, 00:25:36.944 "w_mbytes_per_sec": 0 00:25:36.944 }, 00:25:36.944 "claimed": true, 00:25:36.944 "claim_type": "exclusive_write", 00:25:36.944 "zoned": false, 00:25:36.944 "supported_io_types": { 00:25:36.944 "read": true, 00:25:36.944 "write": true, 00:25:36.944 "unmap": true, 00:25:36.944 "flush": true, 00:25:36.944 "reset": true, 00:25:36.944 "nvme_admin": false, 00:25:36.944 "nvme_io": false, 00:25:36.944 "nvme_io_md": false, 00:25:36.944 "write_zeroes": true, 00:25:36.944 "zcopy": true, 00:25:36.944 "get_zone_info": false, 00:25:36.944 "zone_management": false, 00:25:36.944 "zone_append": false, 00:25:36.944 "compare": false, 00:25:36.944 "compare_and_write": false, 00:25:36.944 "abort": true, 00:25:36.944 "seek_hole": false, 00:25:36.944 "seek_data": false, 00:25:36.944 "copy": true, 00:25:36.944 "nvme_iov_md": false 00:25:36.944 }, 00:25:36.944 "memory_domains": [ 00:25:36.944 { 00:25:36.944 "dma_device_id": "system", 00:25:36.944 "dma_device_type": 1 00:25:36.944 }, 00:25:36.944 { 00:25:36.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:36.944 "dma_device_type": 2 00:25:36.944 } 00:25:36.944 ], 00:25:36.944 "driver_specific": {} 00:25:36.944 }' 00:25:36.944 18:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:36.944 18:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:36.944 18:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:36.944 18:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:36.944 18:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:37.202 18:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:37.202 18:52:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:37.202 18:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:37.202 18:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:37.202 18:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:37.202 18:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:37.202 18:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:37.202 18:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:37.459 [2024-07-25 18:52:37.998208] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:37.718 18:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:37.718 18:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:25:37.718 18:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:37.718 18:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:25:37.718 18:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:25:37.718 18:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:25:37.718 18:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:37.718 18:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:37.718 18:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:37.718 18:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:37.718 18:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:37.718 18:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:37.718 18:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:37.718 18:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:37.718 18:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:37.718 18:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.718 18:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:37.977 18:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:37.977 "name": "Existed_Raid", 00:25:37.977 "uuid": "46be3ba9-391f-47e3-9600-16ca236b4f9d", 00:25:37.977 "strip_size_kb": 0, 00:25:37.977 "state": "online", 00:25:37.977 "raid_level": "raid1", 00:25:37.977 "superblock": false, 00:25:37.977 "num_base_bdevs": 4, 00:25:37.977 "num_base_bdevs_discovered": 3, 00:25:37.977 "num_base_bdevs_operational": 3, 00:25:37.977 "base_bdevs_list": [ 00:25:37.977 { 00:25:37.977 "name": null, 00:25:37.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.977 "is_configured": false, 00:25:37.977 "data_offset": 0, 00:25:37.977 
"data_size": 65536 00:25:37.977 }, 00:25:37.977 { 00:25:37.977 "name": "BaseBdev2", 00:25:37.977 "uuid": "4df99c4e-42d2-42c3-9bc7-01ec6182be4e", 00:25:37.977 "is_configured": true, 00:25:37.977 "data_offset": 0, 00:25:37.977 "data_size": 65536 00:25:37.977 }, 00:25:37.977 { 00:25:37.977 "name": "BaseBdev3", 00:25:37.977 "uuid": "7b85c6b6-7d15-4feb-af3f-ea22ea07a237", 00:25:37.977 "is_configured": true, 00:25:37.977 "data_offset": 0, 00:25:37.977 "data_size": 65536 00:25:37.977 }, 00:25:37.977 { 00:25:37.977 "name": "BaseBdev4", 00:25:37.977 "uuid": "7b71ee97-1b3b-45de-97af-fd987ee8bb08", 00:25:37.977 "is_configured": true, 00:25:37.977 "data_offset": 0, 00:25:37.977 "data_size": 65536 00:25:37.977 } 00:25:37.977 ] 00:25:37.977 }' 00:25:37.977 18:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:37.977 18:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.543 18:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:38.543 18:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:38.543 18:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.543 18:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:38.801 18:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:38.801 18:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:38.801 18:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:38.801 [2024-07-25 18:52:39.298462] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:39.062 18:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:39.063 18:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:39.063 18:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.063 18:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:39.063 18:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:39.063 18:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:39.063 18:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:39.325 [2024-07-25 18:52:39.764981] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:39.325 18:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:39.325 18:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:39.325 18:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.325 18:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:39.890 18:52:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:39.890 18:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:39.890 18:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:39.890 [2024-07-25 18:52:40.389849] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:39.890 [2024-07-25 18:52:40.390181] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:40.148 [2024-07-25 18:52:40.475744] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:40.148 [2024-07-25 18:52:40.476041] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:40.148 [2024-07-25 18:52:40.476188] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:25:40.148 18:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:40.148 18:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:40.148 18:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.148 18:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:40.148 18:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:40.148 18:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:40.148 18:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:25:40.148 18:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:25:40.148 18:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:40.148 18:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:40.405 BaseBdev2 00:25:40.405 18:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:25:40.405 18:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:40.405 18:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:40.405 18:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:40.405 18:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:40.405 18:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:40.405 18:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:40.662 18:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:40.920 [ 00:25:40.920 { 00:25:40.920 "name": "BaseBdev2", 00:25:40.920 "aliases": [ 00:25:40.920 "6e024e59-d07a-423a-837d-df3e9d4dc144" 00:25:40.920 ], 00:25:40.920 
"product_name": "Malloc disk", 00:25:40.920 "block_size": 512, 00:25:40.920 "num_blocks": 65536, 00:25:40.920 "uuid": "6e024e59-d07a-423a-837d-df3e9d4dc144", 00:25:40.920 "assigned_rate_limits": { 00:25:40.920 "rw_ios_per_sec": 0, 00:25:40.920 "rw_mbytes_per_sec": 0, 00:25:40.920 "r_mbytes_per_sec": 0, 00:25:40.920 "w_mbytes_per_sec": 0 00:25:40.920 }, 00:25:40.920 "claimed": false, 00:25:40.920 "zoned": false, 00:25:40.920 "supported_io_types": { 00:25:40.920 "read": true, 00:25:40.920 "write": true, 00:25:40.920 "unmap": true, 00:25:40.920 "flush": true, 00:25:40.920 "reset": true, 00:25:40.920 "nvme_admin": false, 00:25:40.920 "nvme_io": false, 00:25:40.920 "nvme_io_md": false, 00:25:40.920 "write_zeroes": true, 00:25:40.920 "zcopy": true, 00:25:40.920 "get_zone_info": false, 00:25:40.920 "zone_management": false, 00:25:40.920 "zone_append": false, 00:25:40.920 "compare": false, 00:25:40.920 "compare_and_write": false, 00:25:40.920 "abort": true, 00:25:40.920 "seek_hole": false, 00:25:40.920 "seek_data": false, 00:25:40.920 "copy": true, 00:25:40.920 "nvme_iov_md": false 00:25:40.920 }, 00:25:40.920 "memory_domains": [ 00:25:40.920 { 00:25:40.920 "dma_device_id": "system", 00:25:40.920 "dma_device_type": 1 00:25:40.920 }, 00:25:40.920 { 00:25:40.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.920 "dma_device_type": 2 00:25:40.920 } 00:25:40.920 ], 00:25:40.920 "driver_specific": {} 00:25:40.920 } 00:25:40.920 ] 00:25:40.920 18:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:40.920 18:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:40.920 18:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:40.920 18:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:41.178 BaseBdev3 00:25:41.178 18:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:25:41.178 18:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:25:41.178 18:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:41.178 18:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:41.178 18:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:41.178 18:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:41.178 18:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:41.435 18:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:41.435 [ 00:25:41.435 { 00:25:41.435 "name": "BaseBdev3", 00:25:41.435 "aliases": [ 00:25:41.435 "9b9d432f-60f4-42f5-b2e7-b9a1f33fbed2" 00:25:41.435 ], 00:25:41.435 "product_name": "Malloc disk", 00:25:41.435 "block_size": 512, 00:25:41.435 "num_blocks": 65536, 00:25:41.435 "uuid": "9b9d432f-60f4-42f5-b2e7-b9a1f33fbed2", 00:25:41.436 "assigned_rate_limits": { 00:25:41.436 "rw_ios_per_sec": 0, 00:25:41.436 "rw_mbytes_per_sec": 0, 00:25:41.436 "r_mbytes_per_sec": 0, 00:25:41.436 "w_mbytes_per_sec": 0 
00:25:41.436 }, 00:25:41.436 "claimed": false, 00:25:41.436 "zoned": false, 00:25:41.436 "supported_io_types": { 00:25:41.436 "read": true, 00:25:41.436 "write": true, 00:25:41.436 "unmap": true, 00:25:41.436 "flush": true, 00:25:41.436 "reset": true, 00:25:41.436 "nvme_admin": false, 00:25:41.436 "nvme_io": false, 00:25:41.436 "nvme_io_md": false, 00:25:41.436 "write_zeroes": true, 00:25:41.436 "zcopy": true, 00:25:41.436 "get_zone_info": false, 00:25:41.436 "zone_management": false, 00:25:41.436 "zone_append": false, 00:25:41.436 "compare": false, 00:25:41.436 "compare_and_write": false, 00:25:41.436 "abort": true, 00:25:41.436 "seek_hole": false, 00:25:41.436 "seek_data": false, 00:25:41.436 "copy": true, 00:25:41.436 "nvme_iov_md": false 00:25:41.436 }, 00:25:41.436 "memory_domains": [ 00:25:41.436 { 00:25:41.436 "dma_device_id": "system", 00:25:41.436 "dma_device_type": 1 00:25:41.436 }, 00:25:41.436 { 00:25:41.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.436 "dma_device_type": 2 00:25:41.436 } 00:25:41.436 ], 00:25:41.436 "driver_specific": {} 00:25:41.436 } 00:25:41.436 ] 00:25:41.436 18:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:41.436 18:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:41.436 18:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:41.436 18:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:41.693 BaseBdev4 00:25:41.693 18:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:25:41.693 18:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:25:41.693 18:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:41.693 18:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:41.693 18:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:41.693 18:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:41.693 18:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:41.951 18:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:42.209 [ 00:25:42.209 { 00:25:42.209 "name": "BaseBdev4", 00:25:42.209 "aliases": [ 00:25:42.209 "040b9b65-7f47-4dc8-9c6d-7df6d8613246" 00:25:42.209 ], 00:25:42.209 "product_name": "Malloc disk", 00:25:42.209 "block_size": 512, 00:25:42.209 "num_blocks": 65536, 00:25:42.209 "uuid": "040b9b65-7f47-4dc8-9c6d-7df6d8613246", 00:25:42.209 "assigned_rate_limits": { 00:25:42.209 "rw_ios_per_sec": 0, 00:25:42.209 "rw_mbytes_per_sec": 0, 00:25:42.209 "r_mbytes_per_sec": 0, 00:25:42.209 "w_mbytes_per_sec": 0 00:25:42.209 }, 00:25:42.209 "claimed": false, 00:25:42.209 "zoned": false, 00:25:42.209 "supported_io_types": { 00:25:42.209 "read": true, 00:25:42.209 "write": true, 00:25:42.209 "unmap": true, 00:25:42.209 "flush": true, 00:25:42.209 "reset": true, 00:25:42.209 "nvme_admin": false, 00:25:42.209 "nvme_io": false, 00:25:42.209 "nvme_io_md": 
false, 00:25:42.209 "write_zeroes": true, 00:25:42.209 "zcopy": true, 00:25:42.209 "get_zone_info": false, 00:25:42.209 "zone_management": false, 00:25:42.209 "zone_append": false, 00:25:42.209 "compare": false, 00:25:42.209 "compare_and_write": false, 00:25:42.209 "abort": true, 00:25:42.209 "seek_hole": false, 00:25:42.209 "seek_data": false, 00:25:42.209 "copy": true, 00:25:42.209 "nvme_iov_md": false 00:25:42.209 }, 00:25:42.209 "memory_domains": [ 00:25:42.209 { 00:25:42.209 "dma_device_id": "system", 00:25:42.209 "dma_device_type": 1 00:25:42.209 }, 00:25:42.209 { 00:25:42.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:42.209 "dma_device_type": 2 00:25:42.209 } 00:25:42.209 ], 00:25:42.209 "driver_specific": {} 00:25:42.209 } 00:25:42.209 ] 00:25:42.209 18:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:42.209 18:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:42.209 18:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:42.209 18:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:42.209 [2024-07-25 18:52:42.744576] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:42.209 [2024-07-25 18:52:42.744847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:42.209 [2024-07-25 18:52:42.744958] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:42.209 [2024-07-25 18:52:42.747101] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:42.209 [2024-07-25 18:52:42.747275] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:42.209 18:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:42.209 18:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:42.209 18:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:42.209 18:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:42.209 18:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:42.209 18:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:42.209 18:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:42.209 18:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:42.209 18:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:42.209 18:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:42.209 18:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.209 18:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:42.467 18:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:42.467 "name": 
"Existed_Raid", 00:25:42.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:42.467 "strip_size_kb": 0, 00:25:42.467 "state": "configuring", 00:25:42.467 "raid_level": "raid1", 00:25:42.467 "superblock": false, 00:25:42.467 "num_base_bdevs": 4, 00:25:42.467 "num_base_bdevs_discovered": 3, 00:25:42.467 "num_base_bdevs_operational": 4, 00:25:42.467 "base_bdevs_list": [ 00:25:42.467 { 00:25:42.467 "name": "BaseBdev1", 00:25:42.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:42.467 "is_configured": false, 00:25:42.467 "data_offset": 0, 00:25:42.467 "data_size": 0 00:25:42.467 }, 00:25:42.467 { 00:25:42.467 "name": "BaseBdev2", 00:25:42.467 "uuid": "6e024e59-d07a-423a-837d-df3e9d4dc144", 00:25:42.467 "is_configured": true, 00:25:42.467 "data_offset": 0, 00:25:42.467 "data_size": 65536 00:25:42.467 }, 00:25:42.467 { 00:25:42.467 "name": "BaseBdev3", 00:25:42.467 "uuid": "9b9d432f-60f4-42f5-b2e7-b9a1f33fbed2", 00:25:42.467 "is_configured": true, 00:25:42.467 "data_offset": 0, 00:25:42.467 "data_size": 65536 00:25:42.467 }, 00:25:42.467 { 00:25:42.467 "name": "BaseBdev4", 00:25:42.467 "uuid": "040b9b65-7f47-4dc8-9c6d-7df6d8613246", 00:25:42.467 "is_configured": true, 00:25:42.467 "data_offset": 0, 00:25:42.467 "data_size": 65536 00:25:42.467 } 00:25:42.467 ] 00:25:42.467 }' 00:25:42.467 18:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:42.467 18:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.040 18:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:43.300 [2024-07-25 18:52:43.768735] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:43.300 18:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:43.300 18:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:43.300 18:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:43.300 18:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:43.300 18:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:43.300 18:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:43.300 18:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:43.300 18:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:43.300 18:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:43.300 18:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:43.300 18:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.300 18:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:43.558 18:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:43.558 "name": "Existed_Raid", 00:25:43.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:43.558 "strip_size_kb": 0, 00:25:43.558 "state": "configuring", 
00:25:43.558 "raid_level": "raid1", 00:25:43.558 "superblock": false, 00:25:43.558 "num_base_bdevs": 4, 00:25:43.558 "num_base_bdevs_discovered": 2, 00:25:43.558 "num_base_bdevs_operational": 4, 00:25:43.558 "base_bdevs_list": [ 00:25:43.558 { 00:25:43.558 "name": "BaseBdev1", 00:25:43.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:43.558 "is_configured": false, 00:25:43.558 "data_offset": 0, 00:25:43.558 "data_size": 0 00:25:43.558 }, 00:25:43.558 { 00:25:43.558 "name": null, 00:25:43.558 "uuid": "6e024e59-d07a-423a-837d-df3e9d4dc144", 00:25:43.558 "is_configured": false, 00:25:43.558 "data_offset": 0, 00:25:43.558 "data_size": 65536 00:25:43.558 }, 00:25:43.558 { 00:25:43.558 "name": "BaseBdev3", 00:25:43.558 "uuid": "9b9d432f-60f4-42f5-b2e7-b9a1f33fbed2", 00:25:43.558 "is_configured": true, 00:25:43.558 "data_offset": 0, 00:25:43.558 "data_size": 65536 00:25:43.558 }, 00:25:43.558 { 00:25:43.558 "name": "BaseBdev4", 00:25:43.558 "uuid": "040b9b65-7f47-4dc8-9c6d-7df6d8613246", 00:25:43.558 "is_configured": true, 00:25:43.558 "data_offset": 0, 00:25:43.558 "data_size": 65536 00:25:43.558 } 00:25:43.558 ] 00:25:43.558 }' 00:25:43.558 18:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:43.558 18:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.124 18:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.124 18:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:44.382 18:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:25:44.382 18:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:44.640 [2024-07-25 18:52:45.054093] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:44.640 BaseBdev1 00:25:44.640 18:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:25:44.640 18:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:44.640 18:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:44.640 18:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:44.640 18:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:44.640 18:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:44.640 18:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:44.898 18:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:45.157 [ 00:25:45.157 { 00:25:45.157 "name": "BaseBdev1", 00:25:45.157 "aliases": [ 00:25:45.157 "20842975-bcce-4b52-b7b7-d05091a77669" 00:25:45.157 ], 00:25:45.157 "product_name": "Malloc disk", 00:25:45.157 "block_size": 512, 00:25:45.157 "num_blocks": 65536, 00:25:45.157 "uuid": "20842975-bcce-4b52-b7b7-d05091a77669", 00:25:45.157 "assigned_rate_limits": { 
00:25:45.157 "rw_ios_per_sec": 0, 00:25:45.157 "rw_mbytes_per_sec": 0, 00:25:45.157 "r_mbytes_per_sec": 0, 00:25:45.157 "w_mbytes_per_sec": 0 00:25:45.157 }, 00:25:45.157 "claimed": true, 00:25:45.157 "claim_type": "exclusive_write", 00:25:45.157 "zoned": false, 00:25:45.157 "supported_io_types": { 00:25:45.157 "read": true, 00:25:45.157 "write": true, 00:25:45.157 "unmap": true, 00:25:45.157 "flush": true, 00:25:45.157 "reset": true, 00:25:45.157 "nvme_admin": false, 00:25:45.157 "nvme_io": false, 00:25:45.157 "nvme_io_md": false, 00:25:45.157 "write_zeroes": true, 00:25:45.157 "zcopy": true, 00:25:45.157 "get_zone_info": false, 00:25:45.157 "zone_management": false, 00:25:45.157 "zone_append": false, 00:25:45.157 "compare": false, 00:25:45.157 "compare_and_write": false, 00:25:45.157 "abort": true, 00:25:45.157 "seek_hole": false, 00:25:45.157 "seek_data": false, 00:25:45.157 "copy": true, 00:25:45.157 "nvme_iov_md": false 00:25:45.157 }, 00:25:45.157 "memory_domains": [ 00:25:45.157 { 00:25:45.157 "dma_device_id": "system", 00:25:45.157 "dma_device_type": 1 00:25:45.157 }, 00:25:45.157 { 00:25:45.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:45.157 "dma_device_type": 2 00:25:45.157 } 00:25:45.157 ], 00:25:45.157 "driver_specific": {} 00:25:45.157 } 00:25:45.157 ] 00:25:45.157 18:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:45.157 18:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:45.157 18:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:45.157 18:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:45.157 18:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:45.157 18:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:45.157 18:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:45.157 18:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:45.157 18:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:45.157 18:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:45.157 18:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:45.157 18:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:45.157 18:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.415 18:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:45.415 "name": "Existed_Raid", 00:25:45.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.415 "strip_size_kb": 0, 00:25:45.415 "state": "configuring", 00:25:45.415 "raid_level": "raid1", 00:25:45.415 "superblock": false, 00:25:45.415 "num_base_bdevs": 4, 00:25:45.415 "num_base_bdevs_discovered": 3, 00:25:45.415 "num_base_bdevs_operational": 4, 00:25:45.415 "base_bdevs_list": [ 00:25:45.416 { 00:25:45.416 "name": "BaseBdev1", 00:25:45.416 "uuid": "20842975-bcce-4b52-b7b7-d05091a77669", 00:25:45.416 "is_configured": true, 00:25:45.416 "data_offset": 0, 
00:25:45.416 "data_size": 65536 00:25:45.416 }, 00:25:45.416 { 00:25:45.416 "name": null, 00:25:45.416 "uuid": "6e024e59-d07a-423a-837d-df3e9d4dc144", 00:25:45.416 "is_configured": false, 00:25:45.416 "data_offset": 0, 00:25:45.416 "data_size": 65536 00:25:45.416 }, 00:25:45.416 { 00:25:45.416 "name": "BaseBdev3", 00:25:45.416 "uuid": "9b9d432f-60f4-42f5-b2e7-b9a1f33fbed2", 00:25:45.416 "is_configured": true, 00:25:45.416 "data_offset": 0, 00:25:45.416 "data_size": 65536 00:25:45.416 }, 00:25:45.416 { 00:25:45.416 "name": "BaseBdev4", 00:25:45.416 "uuid": "040b9b65-7f47-4dc8-9c6d-7df6d8613246", 00:25:45.416 "is_configured": true, 00:25:45.416 "data_offset": 0, 00:25:45.416 "data_size": 65536 00:25:45.416 } 00:25:45.416 ] 00:25:45.416 }' 00:25:45.416 18:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:45.416 18:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.983 18:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.983 18:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:45.983 18:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:25:45.983 18:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:25:46.241 [2024-07-25 18:52:46.763120] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:46.241 18:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:46.241 18:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:46.241 18:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:46.241 18:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:46.241 18:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:46.241 18:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:46.241 18:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:46.241 18:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:46.241 18:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:46.241 18:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:46.241 18:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.241 18:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:46.499 18:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:46.499 "name": "Existed_Raid", 00:25:46.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:46.499 "strip_size_kb": 0, 00:25:46.499 "state": "configuring", 00:25:46.499 "raid_level": "raid1", 00:25:46.499 "superblock": false, 00:25:46.499 "num_base_bdevs": 4, 00:25:46.499 
"num_base_bdevs_discovered": 2, 00:25:46.499 "num_base_bdevs_operational": 4, 00:25:46.499 "base_bdevs_list": [ 00:25:46.499 { 00:25:46.499 "name": "BaseBdev1", 00:25:46.499 "uuid": "20842975-bcce-4b52-b7b7-d05091a77669", 00:25:46.499 "is_configured": true, 00:25:46.499 "data_offset": 0, 00:25:46.499 "data_size": 65536 00:25:46.499 }, 00:25:46.499 { 00:25:46.499 "name": null, 00:25:46.499 "uuid": "6e024e59-d07a-423a-837d-df3e9d4dc144", 00:25:46.499 "is_configured": false, 00:25:46.499 "data_offset": 0, 00:25:46.499 "data_size": 65536 00:25:46.499 }, 00:25:46.499 { 00:25:46.499 "name": null, 00:25:46.499 "uuid": "9b9d432f-60f4-42f5-b2e7-b9a1f33fbed2", 00:25:46.499 "is_configured": false, 00:25:46.499 "data_offset": 0, 00:25:46.499 "data_size": 65536 00:25:46.499 }, 00:25:46.499 { 00:25:46.500 "name": "BaseBdev4", 00:25:46.500 "uuid": "040b9b65-7f47-4dc8-9c6d-7df6d8613246", 00:25:46.500 "is_configured": true, 00:25:46.500 "data_offset": 0, 00:25:46.500 "data_size": 65536 00:25:46.500 } 00:25:46.500 ] 00:25:46.500 }' 00:25:46.500 18:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:46.500 18:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:47.066 18:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.066 18:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:47.324 18:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:25:47.324 18:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:47.583 [2024-07-25 18:52:47.959388] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:47.583 18:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:47.583 18:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:47.583 18:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:47.583 18:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:47.583 18:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:47.583 18:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:47.583 18:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:47.583 18:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:47.583 18:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:47.583 18:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:47.583 18:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.583 18:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:47.841 18:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:25:47.841 "name": "Existed_Raid", 00:25:47.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:47.841 "strip_size_kb": 0, 00:25:47.841 "state": "configuring", 00:25:47.841 "raid_level": "raid1", 00:25:47.841 "superblock": false, 00:25:47.841 "num_base_bdevs": 4, 00:25:47.841 "num_base_bdevs_discovered": 3, 00:25:47.841 "num_base_bdevs_operational": 4, 00:25:47.841 "base_bdevs_list": [ 00:25:47.841 { 00:25:47.841 "name": "BaseBdev1", 00:25:47.841 "uuid": "20842975-bcce-4b52-b7b7-d05091a77669", 00:25:47.841 "is_configured": true, 00:25:47.841 "data_offset": 0, 00:25:47.841 "data_size": 65536 00:25:47.841 }, 00:25:47.841 { 00:25:47.841 "name": null, 00:25:47.841 "uuid": "6e024e59-d07a-423a-837d-df3e9d4dc144", 00:25:47.841 "is_configured": false, 00:25:47.841 "data_offset": 0, 00:25:47.841 "data_size": 65536 00:25:47.841 }, 00:25:47.841 { 00:25:47.841 "name": "BaseBdev3", 00:25:47.841 "uuid": "9b9d432f-60f4-42f5-b2e7-b9a1f33fbed2", 00:25:47.841 "is_configured": true, 00:25:47.841 "data_offset": 0, 00:25:47.841 "data_size": 65536 00:25:47.841 }, 00:25:47.841 { 00:25:47.841 "name": "BaseBdev4", 00:25:47.841 "uuid": "040b9b65-7f47-4dc8-9c6d-7df6d8613246", 00:25:47.841 "is_configured": true, 00:25:47.841 "data_offset": 0, 00:25:47.841 "data_size": 65536 00:25:47.841 } 00:25:47.841 ] 00:25:47.841 }' 00:25:47.841 18:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:47.841 18:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:48.408 18:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.408 18:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:48.666 18:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:25:48.666 18:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:48.924 [2024-07-25 18:52:49.298169] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:48.924 18:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:48.924 18:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:48.924 18:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:48.924 18:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:48.924 18:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:48.924 18:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:48.924 18:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:48.924 18:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:48.924 18:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:48.924 18:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:48.924 18:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.924 18:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:49.183 18:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:49.183 "name": "Existed_Raid", 00:25:49.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.183 "strip_size_kb": 0, 00:25:49.183 "state": "configuring", 00:25:49.183 "raid_level": "raid1", 00:25:49.183 "superblock": false, 00:25:49.183 "num_base_bdevs": 4, 00:25:49.183 "num_base_bdevs_discovered": 2, 00:25:49.183 "num_base_bdevs_operational": 4, 00:25:49.183 "base_bdevs_list": [ 00:25:49.183 { 00:25:49.183 "name": null, 00:25:49.183 "uuid": "20842975-bcce-4b52-b7b7-d05091a77669", 00:25:49.183 "is_configured": false, 00:25:49.183 "data_offset": 0, 00:25:49.183 "data_size": 65536 00:25:49.183 }, 00:25:49.183 { 00:25:49.183 "name": null, 00:25:49.183 "uuid": "6e024e59-d07a-423a-837d-df3e9d4dc144", 00:25:49.183 "is_configured": false, 00:25:49.183 "data_offset": 0, 00:25:49.183 "data_size": 65536 00:25:49.183 }, 00:25:49.183 { 00:25:49.183 "name": "BaseBdev3", 00:25:49.183 "uuid": "9b9d432f-60f4-42f5-b2e7-b9a1f33fbed2", 00:25:49.183 "is_configured": true, 00:25:49.183 "data_offset": 0, 00:25:49.183 "data_size": 65536 00:25:49.183 }, 00:25:49.183 { 00:25:49.183 "name": "BaseBdev4", 00:25:49.183 "uuid": "040b9b65-7f47-4dc8-9c6d-7df6d8613246", 00:25:49.183 "is_configured": true, 00:25:49.183 "data_offset": 0, 00:25:49.183 "data_size": 65536 00:25:49.183 } 00:25:49.183 ] 00:25:49.183 }' 00:25:49.183 18:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:49.183 18:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.779 18:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.779 18:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:50.052 18:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:25:50.052 18:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:50.320 [2024-07-25 18:52:50.759566] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:50.320 18:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:50.320 18:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:50.320 18:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:50.320 18:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:50.320 18:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:50.320 18:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:50.320 18:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:50.320 18:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:50.320 18:52:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:50.320 18:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:50.320 18:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.320 18:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:50.578 18:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:50.578 "name": "Existed_Raid", 00:25:50.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.578 "strip_size_kb": 0, 00:25:50.578 "state": "configuring", 00:25:50.578 "raid_level": "raid1", 00:25:50.578 "superblock": false, 00:25:50.578 "num_base_bdevs": 4, 00:25:50.578 "num_base_bdevs_discovered": 3, 00:25:50.578 "num_base_bdevs_operational": 4, 00:25:50.578 "base_bdevs_list": [ 00:25:50.578 { 00:25:50.578 "name": null, 00:25:50.578 "uuid": "20842975-bcce-4b52-b7b7-d05091a77669", 00:25:50.578 "is_configured": false, 00:25:50.578 "data_offset": 0, 00:25:50.578 "data_size": 65536 00:25:50.578 }, 00:25:50.578 { 00:25:50.578 "name": "BaseBdev2", 00:25:50.578 "uuid": "6e024e59-d07a-423a-837d-df3e9d4dc144", 00:25:50.578 "is_configured": true, 00:25:50.578 "data_offset": 0, 00:25:50.578 "data_size": 65536 00:25:50.578 }, 00:25:50.578 { 00:25:50.578 "name": "BaseBdev3", 00:25:50.578 "uuid": "9b9d432f-60f4-42f5-b2e7-b9a1f33fbed2", 00:25:50.578 "is_configured": true, 00:25:50.578 "data_offset": 0, 00:25:50.578 "data_size": 65536 00:25:50.578 }, 00:25:50.578 { 00:25:50.578 "name": "BaseBdev4", 00:25:50.578 "uuid": "040b9b65-7f47-4dc8-9c6d-7df6d8613246", 00:25:50.578 "is_configured": true, 00:25:50.578 "data_offset": 0, 00:25:50.578 "data_size": 65536 00:25:50.578 } 00:25:50.578 ] 00:25:50.578 }' 00:25:50.578 18:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:50.578 18:52:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.145 18:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.145 18:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:51.403 18:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:25:51.403 18:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.403 18:52:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:51.661 18:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 20842975-bcce-4b52-b7b7-d05091a77669 00:25:51.919 [2024-07-25 18:52:52.309658] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:51.919 [2024-07-25 18:52:52.309930] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:25:51.919 [2024-07-25 18:52:52.309974] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:51.920 [2024-07-25 18:52:52.310175] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:51.920 [2024-07-25 18:52:52.310610] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:25:51.920 [2024-07-25 18:52:52.310724] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:25:51.920 [2024-07-25 18:52:52.311041] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:51.920 NewBaseBdev 00:25:51.920 18:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:25:51.920 18:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:25:51.920 18:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:51.920 18:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:51.920 18:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:51.920 18:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:51.920 18:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:52.178 18:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:52.178 [ 00:25:52.178 { 00:25:52.178 "name": "NewBaseBdev", 00:25:52.178 "aliases": [ 00:25:52.178 "20842975-bcce-4b52-b7b7-d05091a77669" 00:25:52.178 ], 00:25:52.178 "product_name": "Malloc disk", 00:25:52.178 "block_size": 512, 00:25:52.178 "num_blocks": 65536, 00:25:52.178 "uuid": "20842975-bcce-4b52-b7b7-d05091a77669", 00:25:52.178 "assigned_rate_limits": { 00:25:52.178 "rw_ios_per_sec": 0, 00:25:52.178 "rw_mbytes_per_sec": 0, 00:25:52.178 "r_mbytes_per_sec": 0, 00:25:52.178 "w_mbytes_per_sec": 0 00:25:52.178 }, 00:25:52.178 "claimed": true, 00:25:52.178 "claim_type": "exclusive_write", 00:25:52.178 "zoned": false, 00:25:52.178 "supported_io_types": { 00:25:52.178 "read": true, 00:25:52.178 "write": true, 00:25:52.178 "unmap": true, 00:25:52.178 "flush": true, 00:25:52.178 "reset": true, 00:25:52.178 "nvme_admin": false, 00:25:52.178 "nvme_io": false, 00:25:52.178 "nvme_io_md": false, 00:25:52.178 "write_zeroes": true, 00:25:52.178 "zcopy": true, 00:25:52.178 "get_zone_info": false, 00:25:52.178 "zone_management": false, 00:25:52.178 "zone_append": false, 00:25:52.178 "compare": false, 00:25:52.178 "compare_and_write": false, 00:25:52.178 "abort": true, 00:25:52.178 "seek_hole": false, 00:25:52.178 "seek_data": false, 00:25:52.178 "copy": true, 00:25:52.178 "nvme_iov_md": false 00:25:52.178 }, 00:25:52.178 "memory_domains": [ 00:25:52.178 { 00:25:52.178 "dma_device_id": "system", 00:25:52.178 "dma_device_type": 1 00:25:52.178 }, 00:25:52.178 { 00:25:52.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:52.178 "dma_device_type": 2 00:25:52.178 } 00:25:52.178 ], 00:25:52.178 "driver_specific": {} 00:25:52.178 } 00:25:52.178 ] 00:25:52.178 18:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:52.178 18:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:52.178 18:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=Existed_Raid 00:25:52.178 18:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:52.178 18:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:52.178 18:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:52.178 18:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:52.178 18:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:52.178 18:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:52.178 18:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:52.178 18:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:52.178 18:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:52.178 18:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.437 18:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:52.437 "name": "Existed_Raid", 00:25:52.437 "uuid": "e79ce595-8447-4e96-841b-996068f772f9", 00:25:52.437 "strip_size_kb": 0, 00:25:52.437 "state": "online", 00:25:52.437 "raid_level": "raid1", 00:25:52.437 "superblock": false, 00:25:52.437 "num_base_bdevs": 4, 00:25:52.437 "num_base_bdevs_discovered": 4, 00:25:52.437 "num_base_bdevs_operational": 4, 00:25:52.437 "base_bdevs_list": [ 00:25:52.437 { 00:25:52.437 "name": "NewBaseBdev", 00:25:52.437 "uuid": "20842975-bcce-4b52-b7b7-d05091a77669", 00:25:52.437 "is_configured": true, 00:25:52.437 "data_offset": 0, 00:25:52.437 "data_size": 65536 00:25:52.437 }, 00:25:52.437 { 00:25:52.437 "name": "BaseBdev2", 00:25:52.437 "uuid": "6e024e59-d07a-423a-837d-df3e9d4dc144", 00:25:52.437 "is_configured": true, 00:25:52.437 "data_offset": 0, 00:25:52.437 "data_size": 65536 00:25:52.437 }, 00:25:52.437 { 00:25:52.437 "name": "BaseBdev3", 00:25:52.437 "uuid": "9b9d432f-60f4-42f5-b2e7-b9a1f33fbed2", 00:25:52.437 "is_configured": true, 00:25:52.437 "data_offset": 0, 00:25:52.437 "data_size": 65536 00:25:52.437 }, 00:25:52.437 { 00:25:52.437 "name": "BaseBdev4", 00:25:52.437 "uuid": "040b9b65-7f47-4dc8-9c6d-7df6d8613246", 00:25:52.437 "is_configured": true, 00:25:52.437 "data_offset": 0, 00:25:52.437 "data_size": 65536 00:25:52.437 } 00:25:52.437 ] 00:25:52.437 }' 00:25:52.437 18:52:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:52.437 18:52:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:53.004 18:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:25:53.004 18:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:53.004 18:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:53.004 18:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:53.004 18:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:53.005 18:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:53.005 
18:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:53.005 18:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:53.267 [2024-07-25 18:52:53.650276] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:53.267 18:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:53.267 "name": "Existed_Raid", 00:25:53.267 "aliases": [ 00:25:53.267 "e79ce595-8447-4e96-841b-996068f772f9" 00:25:53.267 ], 00:25:53.267 "product_name": "Raid Volume", 00:25:53.267 "block_size": 512, 00:25:53.267 "num_blocks": 65536, 00:25:53.267 "uuid": "e79ce595-8447-4e96-841b-996068f772f9", 00:25:53.267 "assigned_rate_limits": { 00:25:53.267 "rw_ios_per_sec": 0, 00:25:53.267 "rw_mbytes_per_sec": 0, 00:25:53.267 "r_mbytes_per_sec": 0, 00:25:53.267 "w_mbytes_per_sec": 0 00:25:53.267 }, 00:25:53.267 "claimed": false, 00:25:53.267 "zoned": false, 00:25:53.267 "supported_io_types": { 00:25:53.267 "read": true, 00:25:53.267 "write": true, 00:25:53.267 "unmap": false, 00:25:53.267 "flush": false, 00:25:53.267 "reset": true, 00:25:53.267 "nvme_admin": false, 00:25:53.267 "nvme_io": false, 00:25:53.267 "nvme_io_md": false, 00:25:53.267 "write_zeroes": true, 00:25:53.267 "zcopy": false, 00:25:53.267 "get_zone_info": false, 00:25:53.267 "zone_management": false, 00:25:53.267 "zone_append": false, 00:25:53.267 "compare": false, 00:25:53.267 "compare_and_write": false, 00:25:53.267 "abort": false, 00:25:53.267 "seek_hole": false, 00:25:53.267 "seek_data": false, 00:25:53.267 "copy": false, 00:25:53.267 "nvme_iov_md": false 00:25:53.267 }, 00:25:53.267 "memory_domains": [ 00:25:53.267 { 00:25:53.267 "dma_device_id": "system", 00:25:53.267 "dma_device_type": 1 00:25:53.267 }, 00:25:53.267 { 00:25:53.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:53.267 "dma_device_type": 2 00:25:53.267 }, 00:25:53.267 { 00:25:53.267 "dma_device_id": "system", 00:25:53.267 "dma_device_type": 1 00:25:53.267 }, 00:25:53.267 { 00:25:53.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:53.267 "dma_device_type": 2 00:25:53.267 }, 00:25:53.267 { 00:25:53.267 "dma_device_id": "system", 00:25:53.267 "dma_device_type": 1 00:25:53.267 }, 00:25:53.267 { 00:25:53.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:53.267 "dma_device_type": 2 00:25:53.267 }, 00:25:53.267 { 00:25:53.267 "dma_device_id": "system", 00:25:53.267 "dma_device_type": 1 00:25:53.267 }, 00:25:53.267 { 00:25:53.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:53.267 "dma_device_type": 2 00:25:53.267 } 00:25:53.267 ], 00:25:53.267 "driver_specific": { 00:25:53.267 "raid": { 00:25:53.267 "uuid": "e79ce595-8447-4e96-841b-996068f772f9", 00:25:53.267 "strip_size_kb": 0, 00:25:53.267 "state": "online", 00:25:53.267 "raid_level": "raid1", 00:25:53.267 "superblock": false, 00:25:53.267 "num_base_bdevs": 4, 00:25:53.267 "num_base_bdevs_discovered": 4, 00:25:53.267 "num_base_bdevs_operational": 4, 00:25:53.267 "base_bdevs_list": [ 00:25:53.267 { 00:25:53.267 "name": "NewBaseBdev", 00:25:53.267 "uuid": "20842975-bcce-4b52-b7b7-d05091a77669", 00:25:53.267 "is_configured": true, 00:25:53.267 "data_offset": 0, 00:25:53.267 "data_size": 65536 00:25:53.267 }, 00:25:53.267 { 00:25:53.267 "name": "BaseBdev2", 00:25:53.267 "uuid": "6e024e59-d07a-423a-837d-df3e9d4dc144", 00:25:53.267 "is_configured": true, 00:25:53.267 "data_offset": 0, 00:25:53.267 "data_size": 65536 
00:25:53.267 }, 00:25:53.267 { 00:25:53.267 "name": "BaseBdev3", 00:25:53.267 "uuid": "9b9d432f-60f4-42f5-b2e7-b9a1f33fbed2", 00:25:53.267 "is_configured": true, 00:25:53.267 "data_offset": 0, 00:25:53.267 "data_size": 65536 00:25:53.267 }, 00:25:53.267 { 00:25:53.267 "name": "BaseBdev4", 00:25:53.267 "uuid": "040b9b65-7f47-4dc8-9c6d-7df6d8613246", 00:25:53.267 "is_configured": true, 00:25:53.267 "data_offset": 0, 00:25:53.267 "data_size": 65536 00:25:53.267 } 00:25:53.268 ] 00:25:53.268 } 00:25:53.268 } 00:25:53.268 }' 00:25:53.268 18:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:53.268 18:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:25:53.268 BaseBdev2 00:25:53.268 BaseBdev3 00:25:53.268 BaseBdev4' 00:25:53.268 18:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:53.268 18:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:53.268 18:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:25:53.525 18:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:53.525 "name": "NewBaseBdev", 00:25:53.525 "aliases": [ 00:25:53.525 "20842975-bcce-4b52-b7b7-d05091a77669" 00:25:53.525 ], 00:25:53.525 "product_name": "Malloc disk", 00:25:53.525 "block_size": 512, 00:25:53.525 "num_blocks": 65536, 00:25:53.525 "uuid": "20842975-bcce-4b52-b7b7-d05091a77669", 00:25:53.525 "assigned_rate_limits": { 00:25:53.525 "rw_ios_per_sec": 0, 00:25:53.525 "rw_mbytes_per_sec": 0, 00:25:53.525 "r_mbytes_per_sec": 0, 00:25:53.526 "w_mbytes_per_sec": 0 00:25:53.526 }, 00:25:53.526 "claimed": true, 00:25:53.526 "claim_type": "exclusive_write", 00:25:53.526 "zoned": false, 00:25:53.526 "supported_io_types": { 00:25:53.526 "read": true, 00:25:53.526 "write": true, 00:25:53.526 "unmap": true, 00:25:53.526 "flush": true, 00:25:53.526 "reset": true, 00:25:53.526 "nvme_admin": false, 00:25:53.526 "nvme_io": false, 00:25:53.526 "nvme_io_md": false, 00:25:53.526 "write_zeroes": true, 00:25:53.526 "zcopy": true, 00:25:53.526 "get_zone_info": false, 00:25:53.526 "zone_management": false, 00:25:53.526 "zone_append": false, 00:25:53.526 "compare": false, 00:25:53.526 "compare_and_write": false, 00:25:53.526 "abort": true, 00:25:53.526 "seek_hole": false, 00:25:53.526 "seek_data": false, 00:25:53.526 "copy": true, 00:25:53.526 "nvme_iov_md": false 00:25:53.526 }, 00:25:53.526 "memory_domains": [ 00:25:53.526 { 00:25:53.526 "dma_device_id": "system", 00:25:53.526 "dma_device_type": 1 00:25:53.526 }, 00:25:53.526 { 00:25:53.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:53.526 "dma_device_type": 2 00:25:53.526 } 00:25:53.526 ], 00:25:53.526 "driver_specific": {} 00:25:53.526 }' 00:25:53.526 18:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:53.526 18:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:53.526 18:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:53.526 18:52:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:53.526 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:53.526 18:52:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:53.526 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:53.783 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:53.783 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:53.783 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:53.783 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:53.783 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:53.783 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:53.783 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:53.783 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:54.040 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:54.040 "name": "BaseBdev2", 00:25:54.040 "aliases": [ 00:25:54.040 "6e024e59-d07a-423a-837d-df3e9d4dc144" 00:25:54.040 ], 00:25:54.040 "product_name": "Malloc disk", 00:25:54.040 "block_size": 512, 00:25:54.040 "num_blocks": 65536, 00:25:54.040 "uuid": "6e024e59-d07a-423a-837d-df3e9d4dc144", 00:25:54.040 "assigned_rate_limits": { 00:25:54.040 "rw_ios_per_sec": 0, 00:25:54.040 "rw_mbytes_per_sec": 0, 00:25:54.040 "r_mbytes_per_sec": 0, 00:25:54.040 "w_mbytes_per_sec": 0 00:25:54.040 }, 00:25:54.040 "claimed": true, 00:25:54.040 "claim_type": "exclusive_write", 00:25:54.040 "zoned": false, 00:25:54.040 "supported_io_types": { 00:25:54.040 "read": true, 00:25:54.040 "write": true, 00:25:54.040 "unmap": true, 00:25:54.040 "flush": true, 00:25:54.040 "reset": true, 00:25:54.040 "nvme_admin": false, 00:25:54.040 "nvme_io": false, 00:25:54.040 "nvme_io_md": false, 00:25:54.040 "write_zeroes": true, 00:25:54.041 "zcopy": true, 00:25:54.041 "get_zone_info": false, 00:25:54.041 "zone_management": false, 00:25:54.041 "zone_append": false, 00:25:54.041 "compare": false, 00:25:54.041 "compare_and_write": false, 00:25:54.041 "abort": true, 00:25:54.041 "seek_hole": false, 00:25:54.041 "seek_data": false, 00:25:54.041 "copy": true, 00:25:54.041 "nvme_iov_md": false 00:25:54.041 }, 00:25:54.041 "memory_domains": [ 00:25:54.041 { 00:25:54.041 "dma_device_id": "system", 00:25:54.041 "dma_device_type": 1 00:25:54.041 }, 00:25:54.041 { 00:25:54.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:54.041 "dma_device_type": 2 00:25:54.041 } 00:25:54.041 ], 00:25:54.041 "driver_specific": {} 00:25:54.041 }' 00:25:54.041 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:54.041 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:54.041 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:54.041 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:54.298 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:54.298 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:54.298 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:25:54.298 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:54.298 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:54.298 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:54.298 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:54.556 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:54.556 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:54.556 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:54.556 18:52:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:54.814 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:54.814 "name": "BaseBdev3", 00:25:54.814 "aliases": [ 00:25:54.814 "9b9d432f-60f4-42f5-b2e7-b9a1f33fbed2" 00:25:54.814 ], 00:25:54.814 "product_name": "Malloc disk", 00:25:54.814 "block_size": 512, 00:25:54.814 "num_blocks": 65536, 00:25:54.814 "uuid": "9b9d432f-60f4-42f5-b2e7-b9a1f33fbed2", 00:25:54.814 "assigned_rate_limits": { 00:25:54.814 "rw_ios_per_sec": 0, 00:25:54.814 "rw_mbytes_per_sec": 0, 00:25:54.814 "r_mbytes_per_sec": 0, 00:25:54.814 "w_mbytes_per_sec": 0 00:25:54.814 }, 00:25:54.814 "claimed": true, 00:25:54.814 "claim_type": "exclusive_write", 00:25:54.814 "zoned": false, 00:25:54.814 "supported_io_types": { 00:25:54.814 "read": true, 00:25:54.814 "write": true, 00:25:54.814 "unmap": true, 00:25:54.814 "flush": true, 00:25:54.814 "reset": true, 00:25:54.814 "nvme_admin": false, 00:25:54.814 "nvme_io": false, 00:25:54.814 "nvme_io_md": false, 00:25:54.814 "write_zeroes": true, 00:25:54.814 "zcopy": true, 00:25:54.814 "get_zone_info": false, 00:25:54.814 "zone_management": false, 00:25:54.814 "zone_append": false, 00:25:54.814 "compare": false, 00:25:54.814 "compare_and_write": false, 00:25:54.814 "abort": true, 00:25:54.814 "seek_hole": false, 00:25:54.814 "seek_data": false, 00:25:54.814 "copy": true, 00:25:54.814 "nvme_iov_md": false 00:25:54.814 }, 00:25:54.814 "memory_domains": [ 00:25:54.814 { 00:25:54.814 "dma_device_id": "system", 00:25:54.814 "dma_device_type": 1 00:25:54.814 }, 00:25:54.814 { 00:25:54.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:54.814 "dma_device_type": 2 00:25:54.814 } 00:25:54.814 ], 00:25:54.814 "driver_specific": {} 00:25:54.814 }' 00:25:54.814 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:54.814 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:54.814 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:54.814 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:54.814 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:54.814 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:54.814 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:54.814 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:54.814 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null 
== null ]] 00:25:54.814 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:55.072 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:55.072 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:55.072 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:55.072 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:55.072 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:55.331 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:55.331 "name": "BaseBdev4", 00:25:55.331 "aliases": [ 00:25:55.331 "040b9b65-7f47-4dc8-9c6d-7df6d8613246" 00:25:55.331 ], 00:25:55.331 "product_name": "Malloc disk", 00:25:55.331 "block_size": 512, 00:25:55.331 "num_blocks": 65536, 00:25:55.331 "uuid": "040b9b65-7f47-4dc8-9c6d-7df6d8613246", 00:25:55.331 "assigned_rate_limits": { 00:25:55.331 "rw_ios_per_sec": 0, 00:25:55.331 "rw_mbytes_per_sec": 0, 00:25:55.331 "r_mbytes_per_sec": 0, 00:25:55.331 "w_mbytes_per_sec": 0 00:25:55.331 }, 00:25:55.331 "claimed": true, 00:25:55.331 "claim_type": "exclusive_write", 00:25:55.331 "zoned": false, 00:25:55.331 "supported_io_types": { 00:25:55.331 "read": true, 00:25:55.331 "write": true, 00:25:55.331 "unmap": true, 00:25:55.331 "flush": true, 00:25:55.331 "reset": true, 00:25:55.331 "nvme_admin": false, 00:25:55.331 "nvme_io": false, 00:25:55.331 "nvme_io_md": false, 00:25:55.331 "write_zeroes": true, 00:25:55.331 "zcopy": true, 00:25:55.331 "get_zone_info": false, 00:25:55.331 "zone_management": false, 00:25:55.331 "zone_append": false, 00:25:55.331 "compare": false, 00:25:55.331 "compare_and_write": false, 00:25:55.331 "abort": true, 00:25:55.331 "seek_hole": false, 00:25:55.331 "seek_data": false, 00:25:55.331 "copy": true, 00:25:55.331 "nvme_iov_md": false 00:25:55.331 }, 00:25:55.331 "memory_domains": [ 00:25:55.331 { 00:25:55.331 "dma_device_id": "system", 00:25:55.331 "dma_device_type": 1 00:25:55.331 }, 00:25:55.331 { 00:25:55.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:55.331 "dma_device_type": 2 00:25:55.331 } 00:25:55.331 ], 00:25:55.331 "driver_specific": {} 00:25:55.331 }' 00:25:55.331 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:55.331 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:55.331 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:55.331 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:55.331 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:55.331 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:55.331 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:55.331 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:55.589 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:55.589 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:55.589 18:52:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:25:55.589 18:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:55.589 18:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:55.847 [2024-07-25 18:52:56.282520] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:55.847 [2024-07-25 18:52:56.282715] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:55.847 [2024-07-25 18:52:56.282929] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:55.847 [2024-07-25 18:52:56.283304] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:55.847 [2024-07-25 18:52:56.283410] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:25:55.847 18:52:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 139963 00:25:55.847 18:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 139963 ']' 00:25:55.847 18:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 139963 00:25:55.847 18:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:25:55.847 18:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:55.847 18:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 139963 00:25:55.847 killing process with pid 139963 00:25:55.847 18:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:55.847 18:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:55.847 18:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 139963' 00:25:55.847 18:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 139963 00:25:55.847 [2024-07-25 18:52:56.329876] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:55.847 18:52:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 139963 00:25:56.105 [2024-07-25 18:52:56.670757] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:25:57.478 00:25:57.478 real 0m31.808s 00:25:57.478 user 0m56.836s 00:25:57.478 sys 0m5.450s 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:57.478 ************************************ 00:25:57.478 END TEST raid_state_function_test 00:25:57.478 ************************************ 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.478 18:52:57 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:25:57.478 18:52:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:57.478 18:52:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:57.478 18:52:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:57.478 ************************************ 00:25:57.478 START TEST raid_state_function_test_sb 00:25:57.478 
************************************ 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=141033 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 141033' 00:25:57.478 Process raid pid: 141033 
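[editor's note, not part of the captured trace] The state checks traced in the test above reduce to a short rpc.py + jq pattern; a minimal sketch follows, assuming only the socket path and the bdev_raid_get_bdevs / jq invocations that appear verbatim in the trace. check_raid_state is a hypothetical stand-in for the script's own verify_raid_bdev_state helper, not its actual implementation.

  #!/usr/bin/env bash
  # Sketch: query a raid bdev's state over the test RPC socket, as the trace does.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  check_raid_state() {
      local name=$1 expected=$2
      local info state
      # Dump all raid bdevs and keep the one of interest, mirroring the traced
      # pipeline: bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "...")'
      info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
             | jq -r ".[] | select(.name == \"$name\")")
      state=$(jq -r '.state' <<< "$info")
      [[ $state == "$expected" ]]
  }

  # Example: the preceding test expects Existed_Raid to reach "online" once the
  # fourth base bdev (NewBaseBdev) is configured.
  check_raid_state Existed_Raid online || echo "Existed_Raid not online yet"
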
00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 141033 /var/tmp/spdk-raid.sock 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 141033 ']' 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:57.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:57.478 18:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.478 [2024-07-25 18:52:58.020568] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:57.478 [2024-07-25 18:52:58.020978] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.737 [2024-07-25 18:52:58.193324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.994 [2024-07-25 18:52:58.418480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.252 [2024-07-25 18:52:58.611873] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:58.510 18:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:58.510 18:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:25:58.510 18:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:58.768 [2024-07-25 18:52:59.113592] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:58.768 [2024-07-25 18:52:59.113944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:58.768 [2024-07-25 18:52:59.114037] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:58.768 [2024-07-25 18:52:59.114094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:58.768 [2024-07-25 18:52:59.114185] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:58.768 [2024-07-25 18:52:59.114237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:58.768 [2024-07-25 18:52:59.114309] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:58.768 [2024-07-25 18:52:59.114362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:58.768 18:52:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:58.768 18:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:58.768 18:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:58.768 18:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:58.768 18:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:58.768 18:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:58.768 18:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:58.768 18:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:58.768 18:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:58.768 18:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:58.768 18:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:58.768 18:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.026 18:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:59.026 "name": "Existed_Raid", 00:25:59.026 "uuid": "9651580d-fff0-409b-a01f-4c068282073c", 00:25:59.026 "strip_size_kb": 0, 00:25:59.026 "state": "configuring", 00:25:59.026 "raid_level": "raid1", 00:25:59.026 "superblock": true, 00:25:59.026 "num_base_bdevs": 4, 00:25:59.026 "num_base_bdevs_discovered": 0, 00:25:59.026 "num_base_bdevs_operational": 4, 00:25:59.026 "base_bdevs_list": [ 00:25:59.026 { 00:25:59.026 "name": "BaseBdev1", 00:25:59.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:59.026 "is_configured": false, 00:25:59.026 "data_offset": 0, 00:25:59.026 "data_size": 0 00:25:59.026 }, 00:25:59.026 { 00:25:59.026 "name": "BaseBdev2", 00:25:59.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:59.026 "is_configured": false, 00:25:59.026 "data_offset": 0, 00:25:59.026 "data_size": 0 00:25:59.026 }, 00:25:59.026 { 00:25:59.026 "name": "BaseBdev3", 00:25:59.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:59.026 "is_configured": false, 00:25:59.026 "data_offset": 0, 00:25:59.026 "data_size": 0 00:25:59.026 }, 00:25:59.026 { 00:25:59.026 "name": "BaseBdev4", 00:25:59.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:59.026 "is_configured": false, 00:25:59.026 "data_offset": 0, 00:25:59.026 "data_size": 0 00:25:59.026 } 00:25:59.026 ] 00:25:59.026 }' 00:25:59.026 18:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:59.026 18:52:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:59.592 18:52:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:59.592 [2024-07-25 18:53:00.161678] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:59.592 [2024-07-25 18:53:00.161912] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:25:59.850 
18:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:00.108 [2024-07-25 18:53:00.429746] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:00.108 [2024-07-25 18:53:00.429998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:00.108 [2024-07-25 18:53:00.430077] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:00.108 [2024-07-25 18:53:00.430161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:00.108 [2024-07-25 18:53:00.430299] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:00.108 [2024-07-25 18:53:00.430376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:00.108 [2024-07-25 18:53:00.430480] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:00.108 [2024-07-25 18:53:00.430585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:00.108 18:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:00.108 [2024-07-25 18:53:00.641112] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:00.108 BaseBdev1 00:26:00.108 18:53:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:26:00.108 18:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:26:00.108 18:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:00.108 18:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:00.108 18:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:00.108 18:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:00.108 18:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:00.367 18:53:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:00.625 [ 00:26:00.625 { 00:26:00.625 "name": "BaseBdev1", 00:26:00.625 "aliases": [ 00:26:00.625 "11d1b695-9231-4040-b367-6446189b663e" 00:26:00.625 ], 00:26:00.625 "product_name": "Malloc disk", 00:26:00.625 "block_size": 512, 00:26:00.625 "num_blocks": 65536, 00:26:00.625 "uuid": "11d1b695-9231-4040-b367-6446189b663e", 00:26:00.625 "assigned_rate_limits": { 00:26:00.625 "rw_ios_per_sec": 0, 00:26:00.625 "rw_mbytes_per_sec": 0, 00:26:00.625 "r_mbytes_per_sec": 0, 00:26:00.625 "w_mbytes_per_sec": 0 00:26:00.625 }, 00:26:00.625 "claimed": true, 00:26:00.625 "claim_type": "exclusive_write", 00:26:00.625 "zoned": false, 00:26:00.625 "supported_io_types": { 00:26:00.625 "read": true, 00:26:00.625 "write": true, 00:26:00.625 "unmap": true, 00:26:00.625 "flush": true, 00:26:00.625 "reset": true, 00:26:00.625 "nvme_admin": 
false, 00:26:00.625 "nvme_io": false, 00:26:00.625 "nvme_io_md": false, 00:26:00.625 "write_zeroes": true, 00:26:00.625 "zcopy": true, 00:26:00.625 "get_zone_info": false, 00:26:00.625 "zone_management": false, 00:26:00.625 "zone_append": false, 00:26:00.625 "compare": false, 00:26:00.625 "compare_and_write": false, 00:26:00.625 "abort": true, 00:26:00.625 "seek_hole": false, 00:26:00.625 "seek_data": false, 00:26:00.625 "copy": true, 00:26:00.625 "nvme_iov_md": false 00:26:00.625 }, 00:26:00.625 "memory_domains": [ 00:26:00.625 { 00:26:00.625 "dma_device_id": "system", 00:26:00.625 "dma_device_type": 1 00:26:00.625 }, 00:26:00.625 { 00:26:00.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:00.625 "dma_device_type": 2 00:26:00.625 } 00:26:00.625 ], 00:26:00.625 "driver_specific": {} 00:26:00.625 } 00:26:00.625 ] 00:26:00.625 18:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:00.625 18:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:00.625 18:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:00.625 18:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:00.625 18:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:00.625 18:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:00.625 18:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:00.625 18:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:00.625 18:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:00.625 18:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:00.625 18:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:00.625 18:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:00.625 18:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.883 18:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:00.883 "name": "Existed_Raid", 00:26:00.883 "uuid": "4fc267e5-2343-49e5-a88c-ecd82202eb9f", 00:26:00.883 "strip_size_kb": 0, 00:26:00.883 "state": "configuring", 00:26:00.883 "raid_level": "raid1", 00:26:00.883 "superblock": true, 00:26:00.883 "num_base_bdevs": 4, 00:26:00.883 "num_base_bdevs_discovered": 1, 00:26:00.883 "num_base_bdevs_operational": 4, 00:26:00.883 "base_bdevs_list": [ 00:26:00.883 { 00:26:00.883 "name": "BaseBdev1", 00:26:00.883 "uuid": "11d1b695-9231-4040-b367-6446189b663e", 00:26:00.883 "is_configured": true, 00:26:00.883 "data_offset": 2048, 00:26:00.883 "data_size": 63488 00:26:00.883 }, 00:26:00.883 { 00:26:00.883 "name": "BaseBdev2", 00:26:00.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.883 "is_configured": false, 00:26:00.883 "data_offset": 0, 00:26:00.883 "data_size": 0 00:26:00.883 }, 00:26:00.883 { 00:26:00.883 "name": "BaseBdev3", 00:26:00.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.883 "is_configured": false, 00:26:00.883 
"data_offset": 0, 00:26:00.883 "data_size": 0 00:26:00.883 }, 00:26:00.883 { 00:26:00.883 "name": "BaseBdev4", 00:26:00.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.883 "is_configured": false, 00:26:00.883 "data_offset": 0, 00:26:00.883 "data_size": 0 00:26:00.883 } 00:26:00.883 ] 00:26:00.883 }' 00:26:00.883 18:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:00.883 18:53:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:01.450 18:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:01.450 [2024-07-25 18:53:01.965378] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:01.450 [2024-07-25 18:53:01.965646] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:26:01.450 18:53:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:01.708 [2024-07-25 18:53:02.149474] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:01.708 [2024-07-25 18:53:02.151899] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:01.708 [2024-07-25 18:53:02.152077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:01.708 [2024-07-25 18:53:02.152229] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:01.708 [2024-07-25 18:53:02.152292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:01.708 [2024-07-25 18:53:02.152374] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:01.708 [2024-07-25 18:53:02.152420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:01.708 18:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:26:01.709 18:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:01.709 18:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:01.709 18:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:01.709 18:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:01.709 18:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:01.709 18:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:01.709 18:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:01.709 18:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:01.709 18:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:01.709 18:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:01.709 18:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:26:01.709 18:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.709 18:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:01.967 18:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:01.967 "name": "Existed_Raid", 00:26:01.967 "uuid": "65f54719-2786-4fb3-b5a4-cd45392a40b6", 00:26:01.967 "strip_size_kb": 0, 00:26:01.967 "state": "configuring", 00:26:01.967 "raid_level": "raid1", 00:26:01.967 "superblock": true, 00:26:01.967 "num_base_bdevs": 4, 00:26:01.967 "num_base_bdevs_discovered": 1, 00:26:01.967 "num_base_bdevs_operational": 4, 00:26:01.967 "base_bdevs_list": [ 00:26:01.967 { 00:26:01.967 "name": "BaseBdev1", 00:26:01.967 "uuid": "11d1b695-9231-4040-b367-6446189b663e", 00:26:01.967 "is_configured": true, 00:26:01.967 "data_offset": 2048, 00:26:01.967 "data_size": 63488 00:26:01.967 }, 00:26:01.967 { 00:26:01.967 "name": "BaseBdev2", 00:26:01.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:01.967 "is_configured": false, 00:26:01.967 "data_offset": 0, 00:26:01.967 "data_size": 0 00:26:01.967 }, 00:26:01.967 { 00:26:01.967 "name": "BaseBdev3", 00:26:01.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:01.967 "is_configured": false, 00:26:01.967 "data_offset": 0, 00:26:01.967 "data_size": 0 00:26:01.967 }, 00:26:01.967 { 00:26:01.967 "name": "BaseBdev4", 00:26:01.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:01.967 "is_configured": false, 00:26:01.967 "data_offset": 0, 00:26:01.967 "data_size": 0 00:26:01.967 } 00:26:01.967 ] 00:26:01.967 }' 00:26:01.967 18:53:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:01.967 18:53:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.534 18:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:02.793 [2024-07-25 18:53:03.366453] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:02.793 BaseBdev2 00:26:03.051 18:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:26:03.051 18:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:26:03.051 18:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:03.051 18:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:03.051 18:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:03.051 18:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:03.051 18:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:03.308 18:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:03.308 [ 00:26:03.308 { 00:26:03.308 "name": "BaseBdev2", 00:26:03.308 "aliases": [ 00:26:03.308 "cf91b6bd-6c54-44a5-8783-dea64dc72148" 00:26:03.308 ], 
00:26:03.308 "product_name": "Malloc disk", 00:26:03.308 "block_size": 512, 00:26:03.308 "num_blocks": 65536, 00:26:03.308 "uuid": "cf91b6bd-6c54-44a5-8783-dea64dc72148", 00:26:03.308 "assigned_rate_limits": { 00:26:03.308 "rw_ios_per_sec": 0, 00:26:03.308 "rw_mbytes_per_sec": 0, 00:26:03.308 "r_mbytes_per_sec": 0, 00:26:03.308 "w_mbytes_per_sec": 0 00:26:03.308 }, 00:26:03.308 "claimed": true, 00:26:03.308 "claim_type": "exclusive_write", 00:26:03.308 "zoned": false, 00:26:03.308 "supported_io_types": { 00:26:03.308 "read": true, 00:26:03.308 "write": true, 00:26:03.308 "unmap": true, 00:26:03.308 "flush": true, 00:26:03.308 "reset": true, 00:26:03.308 "nvme_admin": false, 00:26:03.308 "nvme_io": false, 00:26:03.308 "nvme_io_md": false, 00:26:03.308 "write_zeroes": true, 00:26:03.308 "zcopy": true, 00:26:03.308 "get_zone_info": false, 00:26:03.308 "zone_management": false, 00:26:03.308 "zone_append": false, 00:26:03.308 "compare": false, 00:26:03.308 "compare_and_write": false, 00:26:03.308 "abort": true, 00:26:03.308 "seek_hole": false, 00:26:03.308 "seek_data": false, 00:26:03.308 "copy": true, 00:26:03.308 "nvme_iov_md": false 00:26:03.308 }, 00:26:03.308 "memory_domains": [ 00:26:03.308 { 00:26:03.308 "dma_device_id": "system", 00:26:03.308 "dma_device_type": 1 00:26:03.308 }, 00:26:03.308 { 00:26:03.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:03.308 "dma_device_type": 2 00:26:03.308 } 00:26:03.308 ], 00:26:03.308 "driver_specific": {} 00:26:03.308 } 00:26:03.308 ] 00:26:03.308 18:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:03.308 18:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:03.308 18:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:03.308 18:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:03.308 18:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:03.308 18:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:03.308 18:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:03.308 18:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:03.308 18:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:03.308 18:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:03.308 18:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:03.308 18:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:03.309 18:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:03.309 18:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.309 18:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:03.566 18:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:03.566 "name": "Existed_Raid", 00:26:03.566 "uuid": "65f54719-2786-4fb3-b5a4-cd45392a40b6", 00:26:03.566 
"strip_size_kb": 0, 00:26:03.566 "state": "configuring", 00:26:03.566 "raid_level": "raid1", 00:26:03.566 "superblock": true, 00:26:03.566 "num_base_bdevs": 4, 00:26:03.566 "num_base_bdevs_discovered": 2, 00:26:03.566 "num_base_bdevs_operational": 4, 00:26:03.566 "base_bdevs_list": [ 00:26:03.566 { 00:26:03.566 "name": "BaseBdev1", 00:26:03.566 "uuid": "11d1b695-9231-4040-b367-6446189b663e", 00:26:03.566 "is_configured": true, 00:26:03.566 "data_offset": 2048, 00:26:03.566 "data_size": 63488 00:26:03.566 }, 00:26:03.566 { 00:26:03.566 "name": "BaseBdev2", 00:26:03.566 "uuid": "cf91b6bd-6c54-44a5-8783-dea64dc72148", 00:26:03.566 "is_configured": true, 00:26:03.566 "data_offset": 2048, 00:26:03.566 "data_size": 63488 00:26:03.566 }, 00:26:03.566 { 00:26:03.566 "name": "BaseBdev3", 00:26:03.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.566 "is_configured": false, 00:26:03.566 "data_offset": 0, 00:26:03.566 "data_size": 0 00:26:03.566 }, 00:26:03.566 { 00:26:03.566 "name": "BaseBdev4", 00:26:03.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.566 "is_configured": false, 00:26:03.566 "data_offset": 0, 00:26:03.566 "data_size": 0 00:26:03.566 } 00:26:03.566 ] 00:26:03.566 }' 00:26:03.566 18:53:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:03.566 18:53:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.132 18:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:04.391 [2024-07-25 18:53:04.811277] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:04.391 BaseBdev3 00:26:04.391 18:53:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:26:04.391 18:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:26:04.391 18:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:04.391 18:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:04.391 18:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:04.391 18:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:04.391 18:53:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:04.649 18:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:04.908 [ 00:26:04.908 { 00:26:04.908 "name": "BaseBdev3", 00:26:04.908 "aliases": [ 00:26:04.908 "5bd04ac5-4b73-4706-8d52-e22efbbca533" 00:26:04.908 ], 00:26:04.908 "product_name": "Malloc disk", 00:26:04.908 "block_size": 512, 00:26:04.908 "num_blocks": 65536, 00:26:04.908 "uuid": "5bd04ac5-4b73-4706-8d52-e22efbbca533", 00:26:04.908 "assigned_rate_limits": { 00:26:04.908 "rw_ios_per_sec": 0, 00:26:04.908 "rw_mbytes_per_sec": 0, 00:26:04.908 "r_mbytes_per_sec": 0, 00:26:04.908 "w_mbytes_per_sec": 0 00:26:04.908 }, 00:26:04.908 "claimed": true, 00:26:04.908 "claim_type": "exclusive_write", 00:26:04.908 "zoned": false, 00:26:04.908 "supported_io_types": { 00:26:04.908 "read": true, 
00:26:04.908 "write": true, 00:26:04.908 "unmap": true, 00:26:04.908 "flush": true, 00:26:04.908 "reset": true, 00:26:04.908 "nvme_admin": false, 00:26:04.908 "nvme_io": false, 00:26:04.908 "nvme_io_md": false, 00:26:04.908 "write_zeroes": true, 00:26:04.908 "zcopy": true, 00:26:04.908 "get_zone_info": false, 00:26:04.908 "zone_management": false, 00:26:04.908 "zone_append": false, 00:26:04.908 "compare": false, 00:26:04.908 "compare_and_write": false, 00:26:04.908 "abort": true, 00:26:04.908 "seek_hole": false, 00:26:04.908 "seek_data": false, 00:26:04.908 "copy": true, 00:26:04.908 "nvme_iov_md": false 00:26:04.908 }, 00:26:04.908 "memory_domains": [ 00:26:04.908 { 00:26:04.908 "dma_device_id": "system", 00:26:04.908 "dma_device_type": 1 00:26:04.908 }, 00:26:04.908 { 00:26:04.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.908 "dma_device_type": 2 00:26:04.908 } 00:26:04.908 ], 00:26:04.908 "driver_specific": {} 00:26:04.908 } 00:26:04.908 ] 00:26:04.908 18:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:04.908 18:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:04.908 18:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:04.908 18:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:04.908 18:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:04.908 18:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:04.908 18:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:04.908 18:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:04.908 18:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:04.908 18:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:04.908 18:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:04.908 18:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:04.908 18:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:04.908 18:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.908 18:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:04.908 18:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:04.908 "name": "Existed_Raid", 00:26:04.908 "uuid": "65f54719-2786-4fb3-b5a4-cd45392a40b6", 00:26:04.908 "strip_size_kb": 0, 00:26:04.908 "state": "configuring", 00:26:04.908 "raid_level": "raid1", 00:26:04.908 "superblock": true, 00:26:04.908 "num_base_bdevs": 4, 00:26:04.908 "num_base_bdevs_discovered": 3, 00:26:04.908 "num_base_bdevs_operational": 4, 00:26:04.908 "base_bdevs_list": [ 00:26:04.908 { 00:26:04.909 "name": "BaseBdev1", 00:26:04.909 "uuid": "11d1b695-9231-4040-b367-6446189b663e", 00:26:04.909 "is_configured": true, 00:26:04.909 "data_offset": 2048, 00:26:04.909 "data_size": 63488 00:26:04.909 }, 00:26:04.909 { 00:26:04.909 
"name": "BaseBdev2", 00:26:04.909 "uuid": "cf91b6bd-6c54-44a5-8783-dea64dc72148", 00:26:04.909 "is_configured": true, 00:26:04.909 "data_offset": 2048, 00:26:04.909 "data_size": 63488 00:26:04.909 }, 00:26:04.909 { 00:26:04.909 "name": "BaseBdev3", 00:26:04.909 "uuid": "5bd04ac5-4b73-4706-8d52-e22efbbca533", 00:26:04.909 "is_configured": true, 00:26:04.909 "data_offset": 2048, 00:26:04.909 "data_size": 63488 00:26:04.909 }, 00:26:04.909 { 00:26:04.909 "name": "BaseBdev4", 00:26:04.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.909 "is_configured": false, 00:26:04.909 "data_offset": 0, 00:26:04.909 "data_size": 0 00:26:04.909 } 00:26:04.909 ] 00:26:04.909 }' 00:26:04.909 18:53:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:04.909 18:53:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.843 18:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:05.843 [2024-07-25 18:53:06.271876] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:05.843 [2024-07-25 18:53:06.272426] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:26:05.843 [2024-07-25 18:53:06.272538] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:05.843 [2024-07-25 18:53:06.272694] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:26:05.843 [2024-07-25 18:53:06.273223] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:26:05.843 [2024-07-25 18:53:06.273335] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:26:05.843 BaseBdev4 00:26:05.843 [2024-07-25 18:53:06.273592] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:05.843 18:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:26:05.843 18:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:26:05.843 18:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:05.843 18:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:05.843 18:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:05.843 18:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:05.843 18:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:06.102 18:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:06.360 [ 00:26:06.360 { 00:26:06.360 "name": "BaseBdev4", 00:26:06.360 "aliases": [ 00:26:06.360 "f2644509-de54-456f-af94-51939636850e" 00:26:06.360 ], 00:26:06.360 "product_name": "Malloc disk", 00:26:06.360 "block_size": 512, 00:26:06.360 "num_blocks": 65536, 00:26:06.360 "uuid": "f2644509-de54-456f-af94-51939636850e", 00:26:06.360 "assigned_rate_limits": { 00:26:06.360 "rw_ios_per_sec": 0, 00:26:06.360 "rw_mbytes_per_sec": 0, 00:26:06.360 
"r_mbytes_per_sec": 0, 00:26:06.360 "w_mbytes_per_sec": 0 00:26:06.360 }, 00:26:06.360 "claimed": true, 00:26:06.360 "claim_type": "exclusive_write", 00:26:06.360 "zoned": false, 00:26:06.360 "supported_io_types": { 00:26:06.360 "read": true, 00:26:06.360 "write": true, 00:26:06.360 "unmap": true, 00:26:06.360 "flush": true, 00:26:06.360 "reset": true, 00:26:06.360 "nvme_admin": false, 00:26:06.360 "nvme_io": false, 00:26:06.360 "nvme_io_md": false, 00:26:06.360 "write_zeroes": true, 00:26:06.360 "zcopy": true, 00:26:06.360 "get_zone_info": false, 00:26:06.360 "zone_management": false, 00:26:06.360 "zone_append": false, 00:26:06.360 "compare": false, 00:26:06.360 "compare_and_write": false, 00:26:06.360 "abort": true, 00:26:06.360 "seek_hole": false, 00:26:06.360 "seek_data": false, 00:26:06.360 "copy": true, 00:26:06.360 "nvme_iov_md": false 00:26:06.360 }, 00:26:06.360 "memory_domains": [ 00:26:06.360 { 00:26:06.360 "dma_device_id": "system", 00:26:06.360 "dma_device_type": 1 00:26:06.360 }, 00:26:06.360 { 00:26:06.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:06.360 "dma_device_type": 2 00:26:06.360 } 00:26:06.360 ], 00:26:06.360 "driver_specific": {} 00:26:06.360 } 00:26:06.360 ] 00:26:06.360 18:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:06.360 18:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:06.361 18:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:06.361 18:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:26:06.361 18:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:06.361 18:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:06.361 18:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:06.361 18:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:06.361 18:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:06.361 18:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:06.361 18:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:06.361 18:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:06.361 18:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:06.361 18:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.361 18:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:06.361 18:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:06.361 "name": "Existed_Raid", 00:26:06.361 "uuid": "65f54719-2786-4fb3-b5a4-cd45392a40b6", 00:26:06.361 "strip_size_kb": 0, 00:26:06.361 "state": "online", 00:26:06.361 "raid_level": "raid1", 00:26:06.361 "superblock": true, 00:26:06.361 "num_base_bdevs": 4, 00:26:06.361 "num_base_bdevs_discovered": 4, 00:26:06.361 "num_base_bdevs_operational": 4, 00:26:06.361 "base_bdevs_list": [ 00:26:06.361 { 00:26:06.361 
"name": "BaseBdev1", 00:26:06.361 "uuid": "11d1b695-9231-4040-b367-6446189b663e", 00:26:06.361 "is_configured": true, 00:26:06.361 "data_offset": 2048, 00:26:06.361 "data_size": 63488 00:26:06.361 }, 00:26:06.361 { 00:26:06.361 "name": "BaseBdev2", 00:26:06.361 "uuid": "cf91b6bd-6c54-44a5-8783-dea64dc72148", 00:26:06.361 "is_configured": true, 00:26:06.361 "data_offset": 2048, 00:26:06.361 "data_size": 63488 00:26:06.361 }, 00:26:06.361 { 00:26:06.361 "name": "BaseBdev3", 00:26:06.361 "uuid": "5bd04ac5-4b73-4706-8d52-e22efbbca533", 00:26:06.361 "is_configured": true, 00:26:06.361 "data_offset": 2048, 00:26:06.361 "data_size": 63488 00:26:06.361 }, 00:26:06.361 { 00:26:06.361 "name": "BaseBdev4", 00:26:06.361 "uuid": "f2644509-de54-456f-af94-51939636850e", 00:26:06.361 "is_configured": true, 00:26:06.361 "data_offset": 2048, 00:26:06.361 "data_size": 63488 00:26:06.361 } 00:26:06.361 ] 00:26:06.361 }' 00:26:06.361 18:53:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:06.361 18:53:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.928 18:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:26:06.928 18:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:06.928 18:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:06.928 18:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:06.929 18:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:06.929 18:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:26:06.929 18:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:06.929 18:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:07.187 [2024-07-25 18:53:07.684458] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:07.187 18:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:07.187 "name": "Existed_Raid", 00:26:07.187 "aliases": [ 00:26:07.187 "65f54719-2786-4fb3-b5a4-cd45392a40b6" 00:26:07.187 ], 00:26:07.187 "product_name": "Raid Volume", 00:26:07.187 "block_size": 512, 00:26:07.187 "num_blocks": 63488, 00:26:07.187 "uuid": "65f54719-2786-4fb3-b5a4-cd45392a40b6", 00:26:07.187 "assigned_rate_limits": { 00:26:07.187 "rw_ios_per_sec": 0, 00:26:07.187 "rw_mbytes_per_sec": 0, 00:26:07.187 "r_mbytes_per_sec": 0, 00:26:07.187 "w_mbytes_per_sec": 0 00:26:07.187 }, 00:26:07.187 "claimed": false, 00:26:07.187 "zoned": false, 00:26:07.187 "supported_io_types": { 00:26:07.187 "read": true, 00:26:07.187 "write": true, 00:26:07.187 "unmap": false, 00:26:07.187 "flush": false, 00:26:07.187 "reset": true, 00:26:07.187 "nvme_admin": false, 00:26:07.187 "nvme_io": false, 00:26:07.187 "nvme_io_md": false, 00:26:07.187 "write_zeroes": true, 00:26:07.187 "zcopy": false, 00:26:07.187 "get_zone_info": false, 00:26:07.187 "zone_management": false, 00:26:07.187 "zone_append": false, 00:26:07.187 "compare": false, 00:26:07.187 "compare_and_write": false, 00:26:07.187 "abort": false, 00:26:07.187 "seek_hole": false, 00:26:07.187 "seek_data": false, 00:26:07.187 "copy": false, 00:26:07.187 
"nvme_iov_md": false 00:26:07.187 }, 00:26:07.187 "memory_domains": [ 00:26:07.187 { 00:26:07.187 "dma_device_id": "system", 00:26:07.187 "dma_device_type": 1 00:26:07.187 }, 00:26:07.187 { 00:26:07.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.187 "dma_device_type": 2 00:26:07.187 }, 00:26:07.187 { 00:26:07.187 "dma_device_id": "system", 00:26:07.187 "dma_device_type": 1 00:26:07.187 }, 00:26:07.187 { 00:26:07.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.187 "dma_device_type": 2 00:26:07.187 }, 00:26:07.187 { 00:26:07.187 "dma_device_id": "system", 00:26:07.187 "dma_device_type": 1 00:26:07.187 }, 00:26:07.187 { 00:26:07.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.187 "dma_device_type": 2 00:26:07.187 }, 00:26:07.187 { 00:26:07.187 "dma_device_id": "system", 00:26:07.187 "dma_device_type": 1 00:26:07.187 }, 00:26:07.187 { 00:26:07.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.187 "dma_device_type": 2 00:26:07.187 } 00:26:07.187 ], 00:26:07.187 "driver_specific": { 00:26:07.187 "raid": { 00:26:07.187 "uuid": "65f54719-2786-4fb3-b5a4-cd45392a40b6", 00:26:07.187 "strip_size_kb": 0, 00:26:07.187 "state": "online", 00:26:07.187 "raid_level": "raid1", 00:26:07.187 "superblock": true, 00:26:07.187 "num_base_bdevs": 4, 00:26:07.187 "num_base_bdevs_discovered": 4, 00:26:07.187 "num_base_bdevs_operational": 4, 00:26:07.187 "base_bdevs_list": [ 00:26:07.187 { 00:26:07.187 "name": "BaseBdev1", 00:26:07.187 "uuid": "11d1b695-9231-4040-b367-6446189b663e", 00:26:07.187 "is_configured": true, 00:26:07.187 "data_offset": 2048, 00:26:07.187 "data_size": 63488 00:26:07.187 }, 00:26:07.187 { 00:26:07.187 "name": "BaseBdev2", 00:26:07.187 "uuid": "cf91b6bd-6c54-44a5-8783-dea64dc72148", 00:26:07.187 "is_configured": true, 00:26:07.187 "data_offset": 2048, 00:26:07.187 "data_size": 63488 00:26:07.187 }, 00:26:07.187 { 00:26:07.187 "name": "BaseBdev3", 00:26:07.187 "uuid": "5bd04ac5-4b73-4706-8d52-e22efbbca533", 00:26:07.187 "is_configured": true, 00:26:07.187 "data_offset": 2048, 00:26:07.187 "data_size": 63488 00:26:07.187 }, 00:26:07.187 { 00:26:07.188 "name": "BaseBdev4", 00:26:07.188 "uuid": "f2644509-de54-456f-af94-51939636850e", 00:26:07.188 "is_configured": true, 00:26:07.188 "data_offset": 2048, 00:26:07.188 "data_size": 63488 00:26:07.188 } 00:26:07.188 ] 00:26:07.188 } 00:26:07.188 } 00:26:07.188 }' 00:26:07.188 18:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:07.188 18:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:26:07.188 BaseBdev2 00:26:07.188 BaseBdev3 00:26:07.188 BaseBdev4' 00:26:07.188 18:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:07.188 18:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:07.188 18:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:07.446 18:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:07.446 "name": "BaseBdev1", 00:26:07.446 "aliases": [ 00:26:07.446 "11d1b695-9231-4040-b367-6446189b663e" 00:26:07.446 ], 00:26:07.446 "product_name": "Malloc disk", 00:26:07.446 "block_size": 512, 00:26:07.446 "num_blocks": 65536, 00:26:07.446 "uuid": "11d1b695-9231-4040-b367-6446189b663e", 
00:26:07.446 "assigned_rate_limits": { 00:26:07.446 "rw_ios_per_sec": 0, 00:26:07.446 "rw_mbytes_per_sec": 0, 00:26:07.446 "r_mbytes_per_sec": 0, 00:26:07.446 "w_mbytes_per_sec": 0 00:26:07.446 }, 00:26:07.446 "claimed": true, 00:26:07.446 "claim_type": "exclusive_write", 00:26:07.446 "zoned": false, 00:26:07.446 "supported_io_types": { 00:26:07.446 "read": true, 00:26:07.446 "write": true, 00:26:07.446 "unmap": true, 00:26:07.446 "flush": true, 00:26:07.446 "reset": true, 00:26:07.446 "nvme_admin": false, 00:26:07.446 "nvme_io": false, 00:26:07.446 "nvme_io_md": false, 00:26:07.446 "write_zeroes": true, 00:26:07.446 "zcopy": true, 00:26:07.446 "get_zone_info": false, 00:26:07.446 "zone_management": false, 00:26:07.446 "zone_append": false, 00:26:07.446 "compare": false, 00:26:07.446 "compare_and_write": false, 00:26:07.446 "abort": true, 00:26:07.446 "seek_hole": false, 00:26:07.446 "seek_data": false, 00:26:07.446 "copy": true, 00:26:07.446 "nvme_iov_md": false 00:26:07.446 }, 00:26:07.446 "memory_domains": [ 00:26:07.446 { 00:26:07.446 "dma_device_id": "system", 00:26:07.446 "dma_device_type": 1 00:26:07.446 }, 00:26:07.446 { 00:26:07.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.446 "dma_device_type": 2 00:26:07.446 } 00:26:07.446 ], 00:26:07.446 "driver_specific": {} 00:26:07.446 }' 00:26:07.446 18:53:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:07.446 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:07.705 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:07.705 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:07.705 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:07.705 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:07.705 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:07.705 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:07.705 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:07.705 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:07.705 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:07.705 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:07.705 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:07.705 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:07.705 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:07.964 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:07.964 "name": "BaseBdev2", 00:26:07.964 "aliases": [ 00:26:07.964 "cf91b6bd-6c54-44a5-8783-dea64dc72148" 00:26:07.964 ], 00:26:07.964 "product_name": "Malloc disk", 00:26:07.964 "block_size": 512, 00:26:07.964 "num_blocks": 65536, 00:26:07.964 "uuid": "cf91b6bd-6c54-44a5-8783-dea64dc72148", 00:26:07.964 "assigned_rate_limits": { 00:26:07.964 "rw_ios_per_sec": 0, 00:26:07.964 "rw_mbytes_per_sec": 0, 00:26:07.964 "r_mbytes_per_sec": 0, 
00:26:07.964 "w_mbytes_per_sec": 0 00:26:07.964 }, 00:26:07.964 "claimed": true, 00:26:07.964 "claim_type": "exclusive_write", 00:26:07.964 "zoned": false, 00:26:07.964 "supported_io_types": { 00:26:07.964 "read": true, 00:26:07.964 "write": true, 00:26:07.964 "unmap": true, 00:26:07.964 "flush": true, 00:26:07.964 "reset": true, 00:26:07.964 "nvme_admin": false, 00:26:07.964 "nvme_io": false, 00:26:07.964 "nvme_io_md": false, 00:26:07.964 "write_zeroes": true, 00:26:07.964 "zcopy": true, 00:26:07.964 "get_zone_info": false, 00:26:07.964 "zone_management": false, 00:26:07.964 "zone_append": false, 00:26:07.964 "compare": false, 00:26:07.964 "compare_and_write": false, 00:26:07.964 "abort": true, 00:26:07.964 "seek_hole": false, 00:26:07.964 "seek_data": false, 00:26:07.964 "copy": true, 00:26:07.964 "nvme_iov_md": false 00:26:07.964 }, 00:26:07.964 "memory_domains": [ 00:26:07.964 { 00:26:07.964 "dma_device_id": "system", 00:26:07.964 "dma_device_type": 1 00:26:07.964 }, 00:26:07.964 { 00:26:07.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.964 "dma_device_type": 2 00:26:07.964 } 00:26:07.964 ], 00:26:07.964 "driver_specific": {} 00:26:07.964 }' 00:26:07.964 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:07.964 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:07.964 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:08.222 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:08.222 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:08.222 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:08.222 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:08.222 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:08.222 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:08.222 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:08.222 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:08.222 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:08.222 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:08.222 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:08.222 18:53:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:08.479 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:08.479 "name": "BaseBdev3", 00:26:08.479 "aliases": [ 00:26:08.479 "5bd04ac5-4b73-4706-8d52-e22efbbca533" 00:26:08.479 ], 00:26:08.479 "product_name": "Malloc disk", 00:26:08.479 "block_size": 512, 00:26:08.479 "num_blocks": 65536, 00:26:08.479 "uuid": "5bd04ac5-4b73-4706-8d52-e22efbbca533", 00:26:08.479 "assigned_rate_limits": { 00:26:08.479 "rw_ios_per_sec": 0, 00:26:08.479 "rw_mbytes_per_sec": 0, 00:26:08.479 "r_mbytes_per_sec": 0, 00:26:08.479 "w_mbytes_per_sec": 0 00:26:08.479 }, 00:26:08.479 "claimed": true, 00:26:08.479 "claim_type": "exclusive_write", 00:26:08.479 "zoned": 
false, 00:26:08.479 "supported_io_types": { 00:26:08.479 "read": true, 00:26:08.479 "write": true, 00:26:08.479 "unmap": true, 00:26:08.479 "flush": true, 00:26:08.479 "reset": true, 00:26:08.479 "nvme_admin": false, 00:26:08.479 "nvme_io": false, 00:26:08.479 "nvme_io_md": false, 00:26:08.479 "write_zeroes": true, 00:26:08.479 "zcopy": true, 00:26:08.479 "get_zone_info": false, 00:26:08.479 "zone_management": false, 00:26:08.479 "zone_append": false, 00:26:08.479 "compare": false, 00:26:08.479 "compare_and_write": false, 00:26:08.479 "abort": true, 00:26:08.479 "seek_hole": false, 00:26:08.479 "seek_data": false, 00:26:08.479 "copy": true, 00:26:08.479 "nvme_iov_md": false 00:26:08.479 }, 00:26:08.479 "memory_domains": [ 00:26:08.479 { 00:26:08.479 "dma_device_id": "system", 00:26:08.479 "dma_device_type": 1 00:26:08.479 }, 00:26:08.479 { 00:26:08.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.479 "dma_device_type": 2 00:26:08.479 } 00:26:08.479 ], 00:26:08.479 "driver_specific": {} 00:26:08.479 }' 00:26:08.479 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:08.736 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:08.736 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:08.736 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:08.736 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:08.736 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:08.736 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:08.736 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:08.736 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:08.736 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:08.736 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:08.994 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:08.994 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:08.994 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:08.994 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:09.251 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:09.251 "name": "BaseBdev4", 00:26:09.251 "aliases": [ 00:26:09.251 "f2644509-de54-456f-af94-51939636850e" 00:26:09.251 ], 00:26:09.251 "product_name": "Malloc disk", 00:26:09.251 "block_size": 512, 00:26:09.251 "num_blocks": 65536, 00:26:09.251 "uuid": "f2644509-de54-456f-af94-51939636850e", 00:26:09.251 "assigned_rate_limits": { 00:26:09.251 "rw_ios_per_sec": 0, 00:26:09.251 "rw_mbytes_per_sec": 0, 00:26:09.251 "r_mbytes_per_sec": 0, 00:26:09.251 "w_mbytes_per_sec": 0 00:26:09.251 }, 00:26:09.251 "claimed": true, 00:26:09.251 "claim_type": "exclusive_write", 00:26:09.251 "zoned": false, 00:26:09.251 "supported_io_types": { 00:26:09.251 "read": true, 00:26:09.251 "write": true, 00:26:09.251 "unmap": true, 00:26:09.251 "flush": 
true, 00:26:09.251 "reset": true, 00:26:09.251 "nvme_admin": false, 00:26:09.251 "nvme_io": false, 00:26:09.251 "nvme_io_md": false, 00:26:09.251 "write_zeroes": true, 00:26:09.251 "zcopy": true, 00:26:09.251 "get_zone_info": false, 00:26:09.251 "zone_management": false, 00:26:09.251 "zone_append": false, 00:26:09.251 "compare": false, 00:26:09.251 "compare_and_write": false, 00:26:09.251 "abort": true, 00:26:09.251 "seek_hole": false, 00:26:09.251 "seek_data": false, 00:26:09.251 "copy": true, 00:26:09.251 "nvme_iov_md": false 00:26:09.251 }, 00:26:09.251 "memory_domains": [ 00:26:09.251 { 00:26:09.251 "dma_device_id": "system", 00:26:09.251 "dma_device_type": 1 00:26:09.251 }, 00:26:09.251 { 00:26:09.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.251 "dma_device_type": 2 00:26:09.251 } 00:26:09.251 ], 00:26:09.251 "driver_specific": {} 00:26:09.251 }' 00:26:09.251 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:09.251 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:09.252 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:09.252 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:09.252 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:09.252 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:09.252 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:09.509 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:09.509 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:09.509 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:09.509 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:09.509 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:09.509 18:53:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:09.767 [2024-07-25 18:53:10.200769] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:09.767 18:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:26:09.767 18:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:26:09.767 18:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:09.767 18:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:26:09.767 18:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:26:09.767 18:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:26:09.767 18:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:09.767 18:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:09.767 18:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:09.767 18:53:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:09.767 18:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:09.767 18:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:09.767 18:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:09.767 18:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:09.767 18:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:09.767 18:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.767 18:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:10.024 18:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:10.024 "name": "Existed_Raid", 00:26:10.024 "uuid": "65f54719-2786-4fb3-b5a4-cd45392a40b6", 00:26:10.024 "strip_size_kb": 0, 00:26:10.024 "state": "online", 00:26:10.024 "raid_level": "raid1", 00:26:10.024 "superblock": true, 00:26:10.024 "num_base_bdevs": 4, 00:26:10.024 "num_base_bdevs_discovered": 3, 00:26:10.024 "num_base_bdevs_operational": 3, 00:26:10.024 "base_bdevs_list": [ 00:26:10.024 { 00:26:10.024 "name": null, 00:26:10.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.024 "is_configured": false, 00:26:10.024 "data_offset": 2048, 00:26:10.024 "data_size": 63488 00:26:10.024 }, 00:26:10.024 { 00:26:10.024 "name": "BaseBdev2", 00:26:10.024 "uuid": "cf91b6bd-6c54-44a5-8783-dea64dc72148", 00:26:10.024 "is_configured": true, 00:26:10.024 "data_offset": 2048, 00:26:10.024 "data_size": 63488 00:26:10.024 }, 00:26:10.024 { 00:26:10.024 "name": "BaseBdev3", 00:26:10.024 "uuid": "5bd04ac5-4b73-4706-8d52-e22efbbca533", 00:26:10.024 "is_configured": true, 00:26:10.024 "data_offset": 2048, 00:26:10.024 "data_size": 63488 00:26:10.024 }, 00:26:10.024 { 00:26:10.024 "name": "BaseBdev4", 00:26:10.024 "uuid": "f2644509-de54-456f-af94-51939636850e", 00:26:10.024 "is_configured": true, 00:26:10.024 "data_offset": 2048, 00:26:10.024 "data_size": 63488 00:26:10.024 } 00:26:10.024 ] 00:26:10.024 }' 00:26:10.024 18:53:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:10.024 18:53:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.590 18:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:10.590 18:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:10.590 18:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:10.590 18:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:10.856 18:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:10.856 18:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:10.856 18:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:11.138 [2024-07-25 
18:53:11.588526] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:11.138 18:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:11.138 18:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:11.138 18:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.138 18:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:11.412 18:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:11.412 18:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:11.412 18:53:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:11.669 [2024-07-25 18:53:12.081988] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:11.669 18:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:11.669 18:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:11.669 18:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.669 18:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:11.926 18:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:11.926 18:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:11.926 18:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:26:12.183 [2024-07-25 18:53:12.628120] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:12.184 [2024-07-25 18:53:12.628404] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:12.184 [2024-07-25 18:53:12.714276] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:12.184 [2024-07-25 18:53:12.714542] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:12.184 [2024-07-25 18:53:12.714618] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:26:12.184 18:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:12.184 18:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:12.184 18:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.184 18:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:26:12.442 18:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:26:12.442 18:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:26:12.442 18:53:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:26:12.442 18:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:26:12.442 18:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:12.442 18:53:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:12.700 BaseBdev2 00:26:12.700 18:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:26:12.700 18:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:26:12.700 18:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:12.700 18:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:12.700 18:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:12.700 18:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:12.700 18:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:12.959 18:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:12.959 [ 00:26:12.959 { 00:26:12.959 "name": "BaseBdev2", 00:26:12.959 "aliases": [ 00:26:12.959 "bb13872e-036e-4aaa-95a0-f743d197f3d6" 00:26:12.959 ], 00:26:12.959 "product_name": "Malloc disk", 00:26:12.959 "block_size": 512, 00:26:12.959 "num_blocks": 65536, 00:26:12.959 "uuid": "bb13872e-036e-4aaa-95a0-f743d197f3d6", 00:26:12.959 "assigned_rate_limits": { 00:26:12.959 "rw_ios_per_sec": 0, 00:26:12.959 "rw_mbytes_per_sec": 0, 00:26:12.959 "r_mbytes_per_sec": 0, 00:26:12.959 "w_mbytes_per_sec": 0 00:26:12.959 }, 00:26:12.959 "claimed": false, 00:26:12.959 "zoned": false, 00:26:12.959 "supported_io_types": { 00:26:12.959 "read": true, 00:26:12.959 "write": true, 00:26:12.959 "unmap": true, 00:26:12.959 "flush": true, 00:26:12.959 "reset": true, 00:26:12.959 "nvme_admin": false, 00:26:12.959 "nvme_io": false, 00:26:12.959 "nvme_io_md": false, 00:26:12.959 "write_zeroes": true, 00:26:12.959 "zcopy": true, 00:26:12.959 "get_zone_info": false, 00:26:12.959 "zone_management": false, 00:26:12.959 "zone_append": false, 00:26:12.959 "compare": false, 00:26:12.959 "compare_and_write": false, 00:26:12.959 "abort": true, 00:26:12.959 "seek_hole": false, 00:26:12.959 "seek_data": false, 00:26:12.959 "copy": true, 00:26:12.959 "nvme_iov_md": false 00:26:12.959 }, 00:26:12.959 "memory_domains": [ 00:26:12.959 { 00:26:12.959 "dma_device_id": "system", 00:26:12.959 "dma_device_type": 1 00:26:12.959 }, 00:26:12.959 { 00:26:12.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:12.959 "dma_device_type": 2 00:26:12.959 } 00:26:12.959 ], 00:26:12.959 "driver_specific": {} 00:26:12.959 } 00:26:12.959 ] 00:26:12.959 18:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:12.959 18:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:12.959 18:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:12.959 18:53:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:13.217 BaseBdev3 00:26:13.217 18:53:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:26:13.217 18:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:26:13.217 18:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:13.217 18:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:13.217 18:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:13.217 18:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:13.217 18:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:13.475 18:53:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:13.475 [ 00:26:13.475 { 00:26:13.475 "name": "BaseBdev3", 00:26:13.475 "aliases": [ 00:26:13.475 "1614b579-b9e7-4a9c-a5a8-af72e7a26689" 00:26:13.475 ], 00:26:13.475 "product_name": "Malloc disk", 00:26:13.475 "block_size": 512, 00:26:13.475 "num_blocks": 65536, 00:26:13.475 "uuid": "1614b579-b9e7-4a9c-a5a8-af72e7a26689", 00:26:13.475 "assigned_rate_limits": { 00:26:13.475 "rw_ios_per_sec": 0, 00:26:13.475 "rw_mbytes_per_sec": 0, 00:26:13.475 "r_mbytes_per_sec": 0, 00:26:13.475 "w_mbytes_per_sec": 0 00:26:13.475 }, 00:26:13.475 "claimed": false, 00:26:13.475 "zoned": false, 00:26:13.475 "supported_io_types": { 00:26:13.475 "read": true, 00:26:13.475 "write": true, 00:26:13.475 "unmap": true, 00:26:13.475 "flush": true, 00:26:13.475 "reset": true, 00:26:13.475 "nvme_admin": false, 00:26:13.475 "nvme_io": false, 00:26:13.475 "nvme_io_md": false, 00:26:13.475 "write_zeroes": true, 00:26:13.475 "zcopy": true, 00:26:13.475 "get_zone_info": false, 00:26:13.475 "zone_management": false, 00:26:13.475 "zone_append": false, 00:26:13.475 "compare": false, 00:26:13.475 "compare_and_write": false, 00:26:13.475 "abort": true, 00:26:13.475 "seek_hole": false, 00:26:13.475 "seek_data": false, 00:26:13.475 "copy": true, 00:26:13.475 "nvme_iov_md": false 00:26:13.475 }, 00:26:13.475 "memory_domains": [ 00:26:13.475 { 00:26:13.475 "dma_device_id": "system", 00:26:13.475 "dma_device_type": 1 00:26:13.475 }, 00:26:13.475 { 00:26:13.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:13.475 "dma_device_type": 2 00:26:13.475 } 00:26:13.475 ], 00:26:13.475 "driver_specific": {} 00:26:13.475 } 00:26:13.475 ] 00:26:13.475 18:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:13.475 18:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:13.475 18:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:13.475 18:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:13.733 BaseBdev4 00:26:13.733 18:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev 
BaseBdev4 00:26:13.733 18:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:26:13.733 18:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:13.733 18:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:13.733 18:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:13.733 18:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:13.734 18:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:13.992 18:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:14.250 [ 00:26:14.250 { 00:26:14.250 "name": "BaseBdev4", 00:26:14.250 "aliases": [ 00:26:14.250 "3e361741-f715-4637-9ae8-77e1e2799106" 00:26:14.250 ], 00:26:14.250 "product_name": "Malloc disk", 00:26:14.250 "block_size": 512, 00:26:14.250 "num_blocks": 65536, 00:26:14.250 "uuid": "3e361741-f715-4637-9ae8-77e1e2799106", 00:26:14.250 "assigned_rate_limits": { 00:26:14.250 "rw_ios_per_sec": 0, 00:26:14.250 "rw_mbytes_per_sec": 0, 00:26:14.250 "r_mbytes_per_sec": 0, 00:26:14.250 "w_mbytes_per_sec": 0 00:26:14.250 }, 00:26:14.250 "claimed": false, 00:26:14.250 "zoned": false, 00:26:14.250 "supported_io_types": { 00:26:14.250 "read": true, 00:26:14.250 "write": true, 00:26:14.250 "unmap": true, 00:26:14.250 "flush": true, 00:26:14.250 "reset": true, 00:26:14.250 "nvme_admin": false, 00:26:14.250 "nvme_io": false, 00:26:14.250 "nvme_io_md": false, 00:26:14.250 "write_zeroes": true, 00:26:14.250 "zcopy": true, 00:26:14.250 "get_zone_info": false, 00:26:14.250 "zone_management": false, 00:26:14.250 "zone_append": false, 00:26:14.250 "compare": false, 00:26:14.250 "compare_and_write": false, 00:26:14.250 "abort": true, 00:26:14.250 "seek_hole": false, 00:26:14.250 "seek_data": false, 00:26:14.250 "copy": true, 00:26:14.250 "nvme_iov_md": false 00:26:14.250 }, 00:26:14.250 "memory_domains": [ 00:26:14.250 { 00:26:14.250 "dma_device_id": "system", 00:26:14.250 "dma_device_type": 1 00:26:14.250 }, 00:26:14.250 { 00:26:14.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:14.250 "dma_device_type": 2 00:26:14.250 } 00:26:14.250 ], 00:26:14.250 "driver_specific": {} 00:26:14.250 } 00:26:14.250 ] 00:26:14.250 18:53:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:14.250 18:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:14.250 18:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:14.250 18:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:14.250 [2024-07-25 18:53:14.768375] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:14.250 [2024-07-25 18:53:14.768603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:14.250 [2024-07-25 18:53:14.768768] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:26:14.250 [2024-07-25 18:53:14.771104] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:14.250 [2024-07-25 18:53:14.771275] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:14.250 18:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:14.250 18:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:14.250 18:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:14.250 18:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:14.250 18:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:14.250 18:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:14.250 18:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:14.250 18:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:14.250 18:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:14.250 18:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:14.250 18:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:14.250 18:53:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.508 18:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:14.508 "name": "Existed_Raid", 00:26:14.508 "uuid": "4d0f8417-c4b6-43da-be39-1ac21b1e6708", 00:26:14.508 "strip_size_kb": 0, 00:26:14.508 "state": "configuring", 00:26:14.508 "raid_level": "raid1", 00:26:14.508 "superblock": true, 00:26:14.508 "num_base_bdevs": 4, 00:26:14.508 "num_base_bdevs_discovered": 3, 00:26:14.508 "num_base_bdevs_operational": 4, 00:26:14.508 "base_bdevs_list": [ 00:26:14.508 { 00:26:14.508 "name": "BaseBdev1", 00:26:14.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.508 "is_configured": false, 00:26:14.508 "data_offset": 0, 00:26:14.508 "data_size": 0 00:26:14.508 }, 00:26:14.508 { 00:26:14.508 "name": "BaseBdev2", 00:26:14.508 "uuid": "bb13872e-036e-4aaa-95a0-f743d197f3d6", 00:26:14.508 "is_configured": true, 00:26:14.508 "data_offset": 2048, 00:26:14.508 "data_size": 63488 00:26:14.508 }, 00:26:14.508 { 00:26:14.508 "name": "BaseBdev3", 00:26:14.508 "uuid": "1614b579-b9e7-4a9c-a5a8-af72e7a26689", 00:26:14.508 "is_configured": true, 00:26:14.508 "data_offset": 2048, 00:26:14.508 "data_size": 63488 00:26:14.508 }, 00:26:14.508 { 00:26:14.508 "name": "BaseBdev4", 00:26:14.508 "uuid": "3e361741-f715-4637-9ae8-77e1e2799106", 00:26:14.508 "is_configured": true, 00:26:14.508 "data_offset": 2048, 00:26:14.508 "data_size": 63488 00:26:14.508 } 00:26:14.508 ] 00:26:14.508 }' 00:26:14.508 18:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:14.508 18:53:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:15.074 18:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:15.332 [2024-07-25 18:53:15.744548] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:15.332 18:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:15.332 18:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:15.332 18:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:15.332 18:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:15.332 18:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:15.332 18:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:15.332 18:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:15.332 18:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:15.332 18:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:15.332 18:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:15.332 18:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.332 18:53:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:15.590 18:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:15.590 "name": "Existed_Raid", 00:26:15.590 "uuid": "4d0f8417-c4b6-43da-be39-1ac21b1e6708", 00:26:15.590 "strip_size_kb": 0, 00:26:15.590 "state": "configuring", 00:26:15.590 "raid_level": "raid1", 00:26:15.590 "superblock": true, 00:26:15.590 "num_base_bdevs": 4, 00:26:15.590 "num_base_bdevs_discovered": 2, 00:26:15.590 "num_base_bdevs_operational": 4, 00:26:15.590 "base_bdevs_list": [ 00:26:15.590 { 00:26:15.590 "name": "BaseBdev1", 00:26:15.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.590 "is_configured": false, 00:26:15.590 "data_offset": 0, 00:26:15.590 "data_size": 0 00:26:15.590 }, 00:26:15.590 { 00:26:15.590 "name": null, 00:26:15.590 "uuid": "bb13872e-036e-4aaa-95a0-f743d197f3d6", 00:26:15.590 "is_configured": false, 00:26:15.590 "data_offset": 2048, 00:26:15.590 "data_size": 63488 00:26:15.591 }, 00:26:15.591 { 00:26:15.591 "name": "BaseBdev3", 00:26:15.591 "uuid": "1614b579-b9e7-4a9c-a5a8-af72e7a26689", 00:26:15.591 "is_configured": true, 00:26:15.591 "data_offset": 2048, 00:26:15.591 "data_size": 63488 00:26:15.591 }, 00:26:15.591 { 00:26:15.591 "name": "BaseBdev4", 00:26:15.591 "uuid": "3e361741-f715-4637-9ae8-77e1e2799106", 00:26:15.591 "is_configured": true, 00:26:15.591 "data_offset": 2048, 00:26:15.591 "data_size": 63488 00:26:15.591 } 00:26:15.591 ] 00:26:15.591 }' 00:26:15.591 18:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:15.591 18:53:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:16.157 18:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.157 18:53:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:16.414 18:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:26:16.414 18:53:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:16.671 [2024-07-25 18:53:17.037447] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:16.671 BaseBdev1 00:26:16.671 18:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:26:16.671 18:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:26:16.671 18:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:16.671 18:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:16.671 18:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:16.671 18:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:16.671 18:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:16.928 18:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:17.186 [ 00:26:17.186 { 00:26:17.186 "name": "BaseBdev1", 00:26:17.186 "aliases": [ 00:26:17.186 "32d3dafb-4677-437e-b60c-7d6faf8929b1" 00:26:17.186 ], 00:26:17.186 "product_name": "Malloc disk", 00:26:17.186 "block_size": 512, 00:26:17.186 "num_blocks": 65536, 00:26:17.186 "uuid": "32d3dafb-4677-437e-b60c-7d6faf8929b1", 00:26:17.186 "assigned_rate_limits": { 00:26:17.186 "rw_ios_per_sec": 0, 00:26:17.186 "rw_mbytes_per_sec": 0, 00:26:17.186 "r_mbytes_per_sec": 0, 00:26:17.186 "w_mbytes_per_sec": 0 00:26:17.186 }, 00:26:17.186 "claimed": true, 00:26:17.186 "claim_type": "exclusive_write", 00:26:17.186 "zoned": false, 00:26:17.186 "supported_io_types": { 00:26:17.186 "read": true, 00:26:17.186 "write": true, 00:26:17.186 "unmap": true, 00:26:17.186 "flush": true, 00:26:17.186 "reset": true, 00:26:17.186 "nvme_admin": false, 00:26:17.186 "nvme_io": false, 00:26:17.186 "nvme_io_md": false, 00:26:17.186 "write_zeroes": true, 00:26:17.186 "zcopy": true, 00:26:17.186 "get_zone_info": false, 00:26:17.186 "zone_management": false, 00:26:17.186 "zone_append": false, 00:26:17.186 "compare": false, 00:26:17.186 "compare_and_write": false, 00:26:17.186 "abort": true, 00:26:17.186 "seek_hole": false, 00:26:17.186 "seek_data": false, 00:26:17.186 "copy": true, 00:26:17.186 "nvme_iov_md": false 00:26:17.186 }, 00:26:17.186 "memory_domains": [ 00:26:17.186 { 00:26:17.186 "dma_device_id": "system", 00:26:17.186 "dma_device_type": 1 00:26:17.186 }, 00:26:17.186 { 00:26:17.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.186 "dma_device_type": 2 00:26:17.186 } 00:26:17.186 ], 00:26:17.186 "driver_specific": {} 00:26:17.186 } 00:26:17.186 ] 00:26:17.186 18:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:17.187 18:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid 
configuring raid1 0 4 00:26:17.187 18:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:17.187 18:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:17.187 18:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:17.187 18:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:17.187 18:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:17.187 18:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:17.187 18:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:17.187 18:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:17.187 18:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:17.187 18:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.187 18:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:17.187 18:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:17.187 "name": "Existed_Raid", 00:26:17.187 "uuid": "4d0f8417-c4b6-43da-be39-1ac21b1e6708", 00:26:17.187 "strip_size_kb": 0, 00:26:17.187 "state": "configuring", 00:26:17.187 "raid_level": "raid1", 00:26:17.187 "superblock": true, 00:26:17.187 "num_base_bdevs": 4, 00:26:17.187 "num_base_bdevs_discovered": 3, 00:26:17.187 "num_base_bdevs_operational": 4, 00:26:17.187 "base_bdevs_list": [ 00:26:17.187 { 00:26:17.187 "name": "BaseBdev1", 00:26:17.187 "uuid": "32d3dafb-4677-437e-b60c-7d6faf8929b1", 00:26:17.187 "is_configured": true, 00:26:17.187 "data_offset": 2048, 00:26:17.187 "data_size": 63488 00:26:17.187 }, 00:26:17.187 { 00:26:17.187 "name": null, 00:26:17.187 "uuid": "bb13872e-036e-4aaa-95a0-f743d197f3d6", 00:26:17.187 "is_configured": false, 00:26:17.187 "data_offset": 2048, 00:26:17.187 "data_size": 63488 00:26:17.187 }, 00:26:17.187 { 00:26:17.187 "name": "BaseBdev3", 00:26:17.187 "uuid": "1614b579-b9e7-4a9c-a5a8-af72e7a26689", 00:26:17.187 "is_configured": true, 00:26:17.187 "data_offset": 2048, 00:26:17.187 "data_size": 63488 00:26:17.187 }, 00:26:17.187 { 00:26:17.187 "name": "BaseBdev4", 00:26:17.187 "uuid": "3e361741-f715-4637-9ae8-77e1e2799106", 00:26:17.187 "is_configured": true, 00:26:17.187 "data_offset": 2048, 00:26:17.187 "data_size": 63488 00:26:17.187 } 00:26:17.187 ] 00:26:17.187 }' 00:26:17.187 18:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:17.187 18:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.753 18:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.753 18:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:18.011 18:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:26:18.011 18:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:26:18.011 [2024-07-25 18:53:18.545784] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:18.011 18:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:18.011 18:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:18.011 18:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:18.011 18:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:18.011 18:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:18.011 18:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:18.011 18:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:18.011 18:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:18.011 18:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:18.011 18:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:18.011 18:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.011 18:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:18.268 18:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:18.268 "name": "Existed_Raid", 00:26:18.268 "uuid": "4d0f8417-c4b6-43da-be39-1ac21b1e6708", 00:26:18.268 "strip_size_kb": 0, 00:26:18.268 "state": "configuring", 00:26:18.268 "raid_level": "raid1", 00:26:18.268 "superblock": true, 00:26:18.268 "num_base_bdevs": 4, 00:26:18.268 "num_base_bdevs_discovered": 2, 00:26:18.268 "num_base_bdevs_operational": 4, 00:26:18.268 "base_bdevs_list": [ 00:26:18.268 { 00:26:18.268 "name": "BaseBdev1", 00:26:18.268 "uuid": "32d3dafb-4677-437e-b60c-7d6faf8929b1", 00:26:18.268 "is_configured": true, 00:26:18.268 "data_offset": 2048, 00:26:18.268 "data_size": 63488 00:26:18.268 }, 00:26:18.268 { 00:26:18.268 "name": null, 00:26:18.268 "uuid": "bb13872e-036e-4aaa-95a0-f743d197f3d6", 00:26:18.268 "is_configured": false, 00:26:18.268 "data_offset": 2048, 00:26:18.268 "data_size": 63488 00:26:18.268 }, 00:26:18.268 { 00:26:18.268 "name": null, 00:26:18.268 "uuid": "1614b579-b9e7-4a9c-a5a8-af72e7a26689", 00:26:18.268 "is_configured": false, 00:26:18.268 "data_offset": 2048, 00:26:18.268 "data_size": 63488 00:26:18.268 }, 00:26:18.268 { 00:26:18.268 "name": "BaseBdev4", 00:26:18.268 "uuid": "3e361741-f715-4637-9ae8-77e1e2799106", 00:26:18.268 "is_configured": true, 00:26:18.268 "data_offset": 2048, 00:26:18.268 "data_size": 63488 00:26:18.268 } 00:26:18.268 ] 00:26:18.268 }' 00:26:18.269 18:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:18.269 18:53:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.834 18:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:18.834 18:53:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.092 18:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:26:19.092 18:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:19.350 [2024-07-25 18:53:19.682615] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:19.350 18:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:19.350 18:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:19.350 18:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:19.350 18:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:19.350 18:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:19.350 18:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:19.350 18:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:19.350 18:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:19.350 18:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:19.350 18:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:19.350 18:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.350 18:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:19.350 18:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:19.350 "name": "Existed_Raid", 00:26:19.350 "uuid": "4d0f8417-c4b6-43da-be39-1ac21b1e6708", 00:26:19.350 "strip_size_kb": 0, 00:26:19.350 "state": "configuring", 00:26:19.350 "raid_level": "raid1", 00:26:19.350 "superblock": true, 00:26:19.350 "num_base_bdevs": 4, 00:26:19.350 "num_base_bdevs_discovered": 3, 00:26:19.350 "num_base_bdevs_operational": 4, 00:26:19.350 "base_bdevs_list": [ 00:26:19.350 { 00:26:19.350 "name": "BaseBdev1", 00:26:19.350 "uuid": "32d3dafb-4677-437e-b60c-7d6faf8929b1", 00:26:19.350 "is_configured": true, 00:26:19.350 "data_offset": 2048, 00:26:19.350 "data_size": 63488 00:26:19.350 }, 00:26:19.350 { 00:26:19.350 "name": null, 00:26:19.350 "uuid": "bb13872e-036e-4aaa-95a0-f743d197f3d6", 00:26:19.350 "is_configured": false, 00:26:19.350 "data_offset": 2048, 00:26:19.350 "data_size": 63488 00:26:19.350 }, 00:26:19.350 { 00:26:19.350 "name": "BaseBdev3", 00:26:19.350 "uuid": "1614b579-b9e7-4a9c-a5a8-af72e7a26689", 00:26:19.350 "is_configured": true, 00:26:19.350 "data_offset": 2048, 00:26:19.350 "data_size": 63488 00:26:19.350 }, 00:26:19.350 { 00:26:19.350 "name": "BaseBdev4", 00:26:19.350 "uuid": "3e361741-f715-4637-9ae8-77e1e2799106", 00:26:19.350 "is_configured": true, 00:26:19.350 "data_offset": 2048, 00:26:19.350 "data_size": 63488 00:26:19.350 } 00:26:19.350 ] 
00:26:19.350 }' 00:26:19.350 18:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:19.350 18:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:19.916 18:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.916 18:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:20.174 18:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:26:20.174 18:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:20.432 [2024-07-25 18:53:20.838835] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:20.432 18:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:20.432 18:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:20.432 18:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:20.432 18:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:20.432 18:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:20.432 18:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:20.432 18:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:20.432 18:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:20.432 18:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:20.432 18:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:20.432 18:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.432 18:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:20.690 18:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:20.690 "name": "Existed_Raid", 00:26:20.690 "uuid": "4d0f8417-c4b6-43da-be39-1ac21b1e6708", 00:26:20.690 "strip_size_kb": 0, 00:26:20.690 "state": "configuring", 00:26:20.690 "raid_level": "raid1", 00:26:20.690 "superblock": true, 00:26:20.690 "num_base_bdevs": 4, 00:26:20.690 "num_base_bdevs_discovered": 2, 00:26:20.690 "num_base_bdevs_operational": 4, 00:26:20.690 "base_bdevs_list": [ 00:26:20.690 { 00:26:20.690 "name": null, 00:26:20.690 "uuid": "32d3dafb-4677-437e-b60c-7d6faf8929b1", 00:26:20.690 "is_configured": false, 00:26:20.690 "data_offset": 2048, 00:26:20.690 "data_size": 63488 00:26:20.690 }, 00:26:20.690 { 00:26:20.690 "name": null, 00:26:20.690 "uuid": "bb13872e-036e-4aaa-95a0-f743d197f3d6", 00:26:20.690 "is_configured": false, 00:26:20.690 "data_offset": 2048, 00:26:20.690 "data_size": 63488 00:26:20.690 }, 00:26:20.690 { 00:26:20.690 "name": "BaseBdev3", 00:26:20.690 "uuid": "1614b579-b9e7-4a9c-a5a8-af72e7a26689", 00:26:20.690 "is_configured": true, 
00:26:20.690 "data_offset": 2048, 00:26:20.690 "data_size": 63488 00:26:20.690 }, 00:26:20.690 { 00:26:20.690 "name": "BaseBdev4", 00:26:20.690 "uuid": "3e361741-f715-4637-9ae8-77e1e2799106", 00:26:20.690 "is_configured": true, 00:26:20.690 "data_offset": 2048, 00:26:20.690 "data_size": 63488 00:26:20.690 } 00:26:20.690 ] 00:26:20.690 }' 00:26:20.690 18:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:20.690 18:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:21.257 18:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.257 18:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:21.515 18:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:26:21.515 18:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:21.515 [2024-07-25 18:53:22.007598] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:21.515 18:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:21.515 18:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:21.515 18:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:21.515 18:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:21.515 18:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:21.516 18:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:21.516 18:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:21.516 18:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:21.516 18:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:21.516 18:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:21.516 18:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.516 18:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:21.774 18:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:21.774 "name": "Existed_Raid", 00:26:21.774 "uuid": "4d0f8417-c4b6-43da-be39-1ac21b1e6708", 00:26:21.774 "strip_size_kb": 0, 00:26:21.774 "state": "configuring", 00:26:21.774 "raid_level": "raid1", 00:26:21.774 "superblock": true, 00:26:21.774 "num_base_bdevs": 4, 00:26:21.774 "num_base_bdevs_discovered": 3, 00:26:21.774 "num_base_bdevs_operational": 4, 00:26:21.774 "base_bdevs_list": [ 00:26:21.774 { 00:26:21.774 "name": null, 00:26:21.774 "uuid": "32d3dafb-4677-437e-b60c-7d6faf8929b1", 00:26:21.774 "is_configured": false, 00:26:21.774 "data_offset": 2048, 00:26:21.774 "data_size": 63488 00:26:21.774 }, 
00:26:21.774 { 00:26:21.774 "name": "BaseBdev2", 00:26:21.774 "uuid": "bb13872e-036e-4aaa-95a0-f743d197f3d6", 00:26:21.774 "is_configured": true, 00:26:21.774 "data_offset": 2048, 00:26:21.774 "data_size": 63488 00:26:21.774 }, 00:26:21.774 { 00:26:21.774 "name": "BaseBdev3", 00:26:21.774 "uuid": "1614b579-b9e7-4a9c-a5a8-af72e7a26689", 00:26:21.774 "is_configured": true, 00:26:21.774 "data_offset": 2048, 00:26:21.774 "data_size": 63488 00:26:21.774 }, 00:26:21.774 { 00:26:21.774 "name": "BaseBdev4", 00:26:21.774 "uuid": "3e361741-f715-4637-9ae8-77e1e2799106", 00:26:21.774 "is_configured": true, 00:26:21.774 "data_offset": 2048, 00:26:21.774 "data_size": 63488 00:26:21.774 } 00:26:21.774 ] 00:26:21.774 }' 00:26:21.774 18:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:21.774 18:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:22.341 18:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:22.341 18:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:22.599 18:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:26:22.599 18:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:22.599 18:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:22.857 18:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 32d3dafb-4677-437e-b60c-7d6faf8929b1 00:26:23.113 [2024-07-25 18:53:23.496801] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:23.113 [2024-07-25 18:53:23.497301] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:26:23.113 [2024-07-25 18:53:23.497416] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:23.113 [2024-07-25 18:53:23.497552] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:23.113 [2024-07-25 18:53:23.498012] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:26:23.113 [2024-07-25 18:53:23.498126] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:26:23.113 NewBaseBdev 00:26:23.113 [2024-07-25 18:53:23.498338] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:23.113 18:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:26:23.113 18:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:26:23.113 18:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:23.113 18:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:23.113 18:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:23.113 18:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:23.113 18:53:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:23.371 18:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:23.371 [ 00:26:23.371 { 00:26:23.371 "name": "NewBaseBdev", 00:26:23.371 "aliases": [ 00:26:23.371 "32d3dafb-4677-437e-b60c-7d6faf8929b1" 00:26:23.371 ], 00:26:23.371 "product_name": "Malloc disk", 00:26:23.371 "block_size": 512, 00:26:23.371 "num_blocks": 65536, 00:26:23.371 "uuid": "32d3dafb-4677-437e-b60c-7d6faf8929b1", 00:26:23.371 "assigned_rate_limits": { 00:26:23.371 "rw_ios_per_sec": 0, 00:26:23.371 "rw_mbytes_per_sec": 0, 00:26:23.371 "r_mbytes_per_sec": 0, 00:26:23.371 "w_mbytes_per_sec": 0 00:26:23.371 }, 00:26:23.371 "claimed": true, 00:26:23.371 "claim_type": "exclusive_write", 00:26:23.371 "zoned": false, 00:26:23.371 "supported_io_types": { 00:26:23.371 "read": true, 00:26:23.371 "write": true, 00:26:23.371 "unmap": true, 00:26:23.371 "flush": true, 00:26:23.371 "reset": true, 00:26:23.371 "nvme_admin": false, 00:26:23.371 "nvme_io": false, 00:26:23.371 "nvme_io_md": false, 00:26:23.371 "write_zeroes": true, 00:26:23.371 "zcopy": true, 00:26:23.371 "get_zone_info": false, 00:26:23.371 "zone_management": false, 00:26:23.371 "zone_append": false, 00:26:23.371 "compare": false, 00:26:23.371 "compare_and_write": false, 00:26:23.371 "abort": true, 00:26:23.371 "seek_hole": false, 00:26:23.371 "seek_data": false, 00:26:23.371 "copy": true, 00:26:23.371 "nvme_iov_md": false 00:26:23.371 }, 00:26:23.371 "memory_domains": [ 00:26:23.371 { 00:26:23.371 "dma_device_id": "system", 00:26:23.371 "dma_device_type": 1 00:26:23.371 }, 00:26:23.371 { 00:26:23.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:23.371 "dma_device_type": 2 00:26:23.371 } 00:26:23.371 ], 00:26:23.371 "driver_specific": {} 00:26:23.371 } 00:26:23.371 ] 00:26:23.371 18:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:23.371 18:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:26:23.371 18:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:23.371 18:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:23.371 18:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:23.371 18:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:23.371 18:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:23.371 18:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:23.371 18:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:23.371 18:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:23.371 18:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:23.371 18:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:23.371 18:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:23.629 18:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:23.629 "name": "Existed_Raid", 00:26:23.629 "uuid": "4d0f8417-c4b6-43da-be39-1ac21b1e6708", 00:26:23.629 "strip_size_kb": 0, 00:26:23.629 "state": "online", 00:26:23.629 "raid_level": "raid1", 00:26:23.629 "superblock": true, 00:26:23.629 "num_base_bdevs": 4, 00:26:23.629 "num_base_bdevs_discovered": 4, 00:26:23.629 "num_base_bdevs_operational": 4, 00:26:23.629 "base_bdevs_list": [ 00:26:23.629 { 00:26:23.629 "name": "NewBaseBdev", 00:26:23.629 "uuid": "32d3dafb-4677-437e-b60c-7d6faf8929b1", 00:26:23.629 "is_configured": true, 00:26:23.629 "data_offset": 2048, 00:26:23.629 "data_size": 63488 00:26:23.629 }, 00:26:23.629 { 00:26:23.629 "name": "BaseBdev2", 00:26:23.629 "uuid": "bb13872e-036e-4aaa-95a0-f743d197f3d6", 00:26:23.629 "is_configured": true, 00:26:23.629 "data_offset": 2048, 00:26:23.629 "data_size": 63488 00:26:23.629 }, 00:26:23.629 { 00:26:23.629 "name": "BaseBdev3", 00:26:23.629 "uuid": "1614b579-b9e7-4a9c-a5a8-af72e7a26689", 00:26:23.629 "is_configured": true, 00:26:23.629 "data_offset": 2048, 00:26:23.629 "data_size": 63488 00:26:23.629 }, 00:26:23.629 { 00:26:23.629 "name": "BaseBdev4", 00:26:23.629 "uuid": "3e361741-f715-4637-9ae8-77e1e2799106", 00:26:23.629 "is_configured": true, 00:26:23.629 "data_offset": 2048, 00:26:23.629 "data_size": 63488 00:26:23.630 } 00:26:23.630 ] 00:26:23.630 }' 00:26:23.630 18:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:23.630 18:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.195 18:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:26:24.195 18:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:24.195 18:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:24.195 18:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:24.195 18:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:24.195 18:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:26:24.195 18:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:24.195 18:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:24.453 [2024-07-25 18:53:24.918676] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:24.453 18:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:24.453 "name": "Existed_Raid", 00:26:24.453 "aliases": [ 00:26:24.453 "4d0f8417-c4b6-43da-be39-1ac21b1e6708" 00:26:24.453 ], 00:26:24.453 "product_name": "Raid Volume", 00:26:24.453 "block_size": 512, 00:26:24.453 "num_blocks": 63488, 00:26:24.453 "uuid": "4d0f8417-c4b6-43da-be39-1ac21b1e6708", 00:26:24.453 "assigned_rate_limits": { 00:26:24.453 "rw_ios_per_sec": 0, 00:26:24.453 "rw_mbytes_per_sec": 0, 00:26:24.453 "r_mbytes_per_sec": 0, 00:26:24.453 "w_mbytes_per_sec": 0 00:26:24.453 }, 00:26:24.453 "claimed": false, 00:26:24.453 "zoned": false, 00:26:24.453 "supported_io_types": { 
00:26:24.453 "read": true, 00:26:24.453 "write": true, 00:26:24.453 "unmap": false, 00:26:24.453 "flush": false, 00:26:24.453 "reset": true, 00:26:24.453 "nvme_admin": false, 00:26:24.453 "nvme_io": false, 00:26:24.453 "nvme_io_md": false, 00:26:24.453 "write_zeroes": true, 00:26:24.453 "zcopy": false, 00:26:24.453 "get_zone_info": false, 00:26:24.453 "zone_management": false, 00:26:24.453 "zone_append": false, 00:26:24.453 "compare": false, 00:26:24.453 "compare_and_write": false, 00:26:24.453 "abort": false, 00:26:24.453 "seek_hole": false, 00:26:24.453 "seek_data": false, 00:26:24.453 "copy": false, 00:26:24.453 "nvme_iov_md": false 00:26:24.453 }, 00:26:24.453 "memory_domains": [ 00:26:24.453 { 00:26:24.453 "dma_device_id": "system", 00:26:24.453 "dma_device_type": 1 00:26:24.453 }, 00:26:24.453 { 00:26:24.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:24.453 "dma_device_type": 2 00:26:24.453 }, 00:26:24.453 { 00:26:24.453 "dma_device_id": "system", 00:26:24.453 "dma_device_type": 1 00:26:24.453 }, 00:26:24.453 { 00:26:24.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:24.453 "dma_device_type": 2 00:26:24.453 }, 00:26:24.453 { 00:26:24.453 "dma_device_id": "system", 00:26:24.453 "dma_device_type": 1 00:26:24.453 }, 00:26:24.453 { 00:26:24.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:24.453 "dma_device_type": 2 00:26:24.453 }, 00:26:24.453 { 00:26:24.453 "dma_device_id": "system", 00:26:24.453 "dma_device_type": 1 00:26:24.453 }, 00:26:24.453 { 00:26:24.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:24.453 "dma_device_type": 2 00:26:24.453 } 00:26:24.453 ], 00:26:24.453 "driver_specific": { 00:26:24.453 "raid": { 00:26:24.453 "uuid": "4d0f8417-c4b6-43da-be39-1ac21b1e6708", 00:26:24.453 "strip_size_kb": 0, 00:26:24.453 "state": "online", 00:26:24.453 "raid_level": "raid1", 00:26:24.453 "superblock": true, 00:26:24.453 "num_base_bdevs": 4, 00:26:24.453 "num_base_bdevs_discovered": 4, 00:26:24.453 "num_base_bdevs_operational": 4, 00:26:24.453 "base_bdevs_list": [ 00:26:24.453 { 00:26:24.453 "name": "NewBaseBdev", 00:26:24.453 "uuid": "32d3dafb-4677-437e-b60c-7d6faf8929b1", 00:26:24.453 "is_configured": true, 00:26:24.453 "data_offset": 2048, 00:26:24.453 "data_size": 63488 00:26:24.453 }, 00:26:24.453 { 00:26:24.453 "name": "BaseBdev2", 00:26:24.453 "uuid": "bb13872e-036e-4aaa-95a0-f743d197f3d6", 00:26:24.453 "is_configured": true, 00:26:24.453 "data_offset": 2048, 00:26:24.453 "data_size": 63488 00:26:24.453 }, 00:26:24.453 { 00:26:24.453 "name": "BaseBdev3", 00:26:24.453 "uuid": "1614b579-b9e7-4a9c-a5a8-af72e7a26689", 00:26:24.453 "is_configured": true, 00:26:24.453 "data_offset": 2048, 00:26:24.453 "data_size": 63488 00:26:24.453 }, 00:26:24.453 { 00:26:24.453 "name": "BaseBdev4", 00:26:24.453 "uuid": "3e361741-f715-4637-9ae8-77e1e2799106", 00:26:24.453 "is_configured": true, 00:26:24.453 "data_offset": 2048, 00:26:24.453 "data_size": 63488 00:26:24.453 } 00:26:24.453 ] 00:26:24.453 } 00:26:24.453 } 00:26:24.453 }' 00:26:24.454 18:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:24.454 18:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:26:24.454 BaseBdev2 00:26:24.454 BaseBdev3 00:26:24.454 BaseBdev4' 00:26:24.454 18:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:24.454 18:53:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:26:24.454 18:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:24.712 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:24.712 "name": "NewBaseBdev", 00:26:24.712 "aliases": [ 00:26:24.712 "32d3dafb-4677-437e-b60c-7d6faf8929b1" 00:26:24.712 ], 00:26:24.712 "product_name": "Malloc disk", 00:26:24.712 "block_size": 512, 00:26:24.712 "num_blocks": 65536, 00:26:24.712 "uuid": "32d3dafb-4677-437e-b60c-7d6faf8929b1", 00:26:24.712 "assigned_rate_limits": { 00:26:24.712 "rw_ios_per_sec": 0, 00:26:24.712 "rw_mbytes_per_sec": 0, 00:26:24.712 "r_mbytes_per_sec": 0, 00:26:24.712 "w_mbytes_per_sec": 0 00:26:24.712 }, 00:26:24.712 "claimed": true, 00:26:24.712 "claim_type": "exclusive_write", 00:26:24.712 "zoned": false, 00:26:24.712 "supported_io_types": { 00:26:24.712 "read": true, 00:26:24.712 "write": true, 00:26:24.712 "unmap": true, 00:26:24.712 "flush": true, 00:26:24.712 "reset": true, 00:26:24.712 "nvme_admin": false, 00:26:24.712 "nvme_io": false, 00:26:24.712 "nvme_io_md": false, 00:26:24.712 "write_zeroes": true, 00:26:24.712 "zcopy": true, 00:26:24.712 "get_zone_info": false, 00:26:24.712 "zone_management": false, 00:26:24.712 "zone_append": false, 00:26:24.712 "compare": false, 00:26:24.712 "compare_and_write": false, 00:26:24.712 "abort": true, 00:26:24.712 "seek_hole": false, 00:26:24.712 "seek_data": false, 00:26:24.712 "copy": true, 00:26:24.712 "nvme_iov_md": false 00:26:24.712 }, 00:26:24.712 "memory_domains": [ 00:26:24.712 { 00:26:24.712 "dma_device_id": "system", 00:26:24.712 "dma_device_type": 1 00:26:24.712 }, 00:26:24.712 { 00:26:24.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:24.712 "dma_device_type": 2 00:26:24.712 } 00:26:24.712 ], 00:26:24.712 "driver_specific": {} 00:26:24.712 }' 00:26:24.712 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:24.712 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:24.712 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:24.712 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:24.712 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:24.712 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:24.712 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:24.971 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:24.971 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:24.971 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:24.971 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:24.971 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:24.971 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:24.971 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:24.971 18:53:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:25.229 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:25.229 "name": "BaseBdev2", 00:26:25.229 "aliases": [ 00:26:25.229 "bb13872e-036e-4aaa-95a0-f743d197f3d6" 00:26:25.229 ], 00:26:25.229 "product_name": "Malloc disk", 00:26:25.229 "block_size": 512, 00:26:25.229 "num_blocks": 65536, 00:26:25.229 "uuid": "bb13872e-036e-4aaa-95a0-f743d197f3d6", 00:26:25.229 "assigned_rate_limits": { 00:26:25.229 "rw_ios_per_sec": 0, 00:26:25.229 "rw_mbytes_per_sec": 0, 00:26:25.229 "r_mbytes_per_sec": 0, 00:26:25.229 "w_mbytes_per_sec": 0 00:26:25.229 }, 00:26:25.229 "claimed": true, 00:26:25.229 "claim_type": "exclusive_write", 00:26:25.229 "zoned": false, 00:26:25.229 "supported_io_types": { 00:26:25.229 "read": true, 00:26:25.229 "write": true, 00:26:25.229 "unmap": true, 00:26:25.229 "flush": true, 00:26:25.229 "reset": true, 00:26:25.229 "nvme_admin": false, 00:26:25.229 "nvme_io": false, 00:26:25.229 "nvme_io_md": false, 00:26:25.229 "write_zeroes": true, 00:26:25.229 "zcopy": true, 00:26:25.229 "get_zone_info": false, 00:26:25.229 "zone_management": false, 00:26:25.229 "zone_append": false, 00:26:25.229 "compare": false, 00:26:25.229 "compare_and_write": false, 00:26:25.229 "abort": true, 00:26:25.229 "seek_hole": false, 00:26:25.229 "seek_data": false, 00:26:25.229 "copy": true, 00:26:25.229 "nvme_iov_md": false 00:26:25.229 }, 00:26:25.229 "memory_domains": [ 00:26:25.229 { 00:26:25.229 "dma_device_id": "system", 00:26:25.229 "dma_device_type": 1 00:26:25.229 }, 00:26:25.229 { 00:26:25.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:25.229 "dma_device_type": 2 00:26:25.229 } 00:26:25.229 ], 00:26:25.229 "driver_specific": {} 00:26:25.229 }' 00:26:25.229 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:25.229 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:25.229 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:25.229 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:25.229 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:25.487 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:25.487 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:25.487 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:25.487 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:25.487 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:25.487 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:25.487 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:25.487 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:25.487 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:25.487 18:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:25.745 18:53:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:25.745 "name": "BaseBdev3", 00:26:25.745 "aliases": [ 00:26:25.745 "1614b579-b9e7-4a9c-a5a8-af72e7a26689" 00:26:25.745 ], 00:26:25.745 "product_name": "Malloc disk", 00:26:25.745 "block_size": 512, 00:26:25.745 "num_blocks": 65536, 00:26:25.745 "uuid": "1614b579-b9e7-4a9c-a5a8-af72e7a26689", 00:26:25.745 "assigned_rate_limits": { 00:26:25.745 "rw_ios_per_sec": 0, 00:26:25.745 "rw_mbytes_per_sec": 0, 00:26:25.745 "r_mbytes_per_sec": 0, 00:26:25.745 "w_mbytes_per_sec": 0 00:26:25.745 }, 00:26:25.745 "claimed": true, 00:26:25.745 "claim_type": "exclusive_write", 00:26:25.745 "zoned": false, 00:26:25.745 "supported_io_types": { 00:26:25.745 "read": true, 00:26:25.745 "write": true, 00:26:25.745 "unmap": true, 00:26:25.745 "flush": true, 00:26:25.745 "reset": true, 00:26:25.745 "nvme_admin": false, 00:26:25.745 "nvme_io": false, 00:26:25.745 "nvme_io_md": false, 00:26:25.745 "write_zeroes": true, 00:26:25.745 "zcopy": true, 00:26:25.745 "get_zone_info": false, 00:26:25.745 "zone_management": false, 00:26:25.745 "zone_append": false, 00:26:25.745 "compare": false, 00:26:25.745 "compare_and_write": false, 00:26:25.745 "abort": true, 00:26:25.745 "seek_hole": false, 00:26:25.745 "seek_data": false, 00:26:25.745 "copy": true, 00:26:25.745 "nvme_iov_md": false 00:26:25.745 }, 00:26:25.745 "memory_domains": [ 00:26:25.745 { 00:26:25.745 "dma_device_id": "system", 00:26:25.745 "dma_device_type": 1 00:26:25.745 }, 00:26:25.745 { 00:26:25.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:25.745 "dma_device_type": 2 00:26:25.745 } 00:26:25.745 ], 00:26:25.745 "driver_specific": {} 00:26:25.745 }' 00:26:25.745 18:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:25.745 18:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:25.745 18:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:25.745 18:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:26.004 18:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:26.004 18:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:26.004 18:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:26.004 18:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:26.004 18:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:26.004 18:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:26.004 18:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:26.004 18:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:26.004 18:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:26.004 18:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:26.004 18:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:26.263 18:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:26.263 "name": "BaseBdev4", 00:26:26.263 "aliases": [ 00:26:26.263 
"3e361741-f715-4637-9ae8-77e1e2799106" 00:26:26.263 ], 00:26:26.263 "product_name": "Malloc disk", 00:26:26.263 "block_size": 512, 00:26:26.263 "num_blocks": 65536, 00:26:26.263 "uuid": "3e361741-f715-4637-9ae8-77e1e2799106", 00:26:26.263 "assigned_rate_limits": { 00:26:26.263 "rw_ios_per_sec": 0, 00:26:26.263 "rw_mbytes_per_sec": 0, 00:26:26.263 "r_mbytes_per_sec": 0, 00:26:26.263 "w_mbytes_per_sec": 0 00:26:26.263 }, 00:26:26.263 "claimed": true, 00:26:26.263 "claim_type": "exclusive_write", 00:26:26.263 "zoned": false, 00:26:26.263 "supported_io_types": { 00:26:26.263 "read": true, 00:26:26.263 "write": true, 00:26:26.263 "unmap": true, 00:26:26.263 "flush": true, 00:26:26.263 "reset": true, 00:26:26.263 "nvme_admin": false, 00:26:26.263 "nvme_io": false, 00:26:26.263 "nvme_io_md": false, 00:26:26.263 "write_zeroes": true, 00:26:26.263 "zcopy": true, 00:26:26.263 "get_zone_info": false, 00:26:26.263 "zone_management": false, 00:26:26.263 "zone_append": false, 00:26:26.263 "compare": false, 00:26:26.263 "compare_and_write": false, 00:26:26.263 "abort": true, 00:26:26.263 "seek_hole": false, 00:26:26.263 "seek_data": false, 00:26:26.263 "copy": true, 00:26:26.263 "nvme_iov_md": false 00:26:26.263 }, 00:26:26.263 "memory_domains": [ 00:26:26.263 { 00:26:26.263 "dma_device_id": "system", 00:26:26.263 "dma_device_type": 1 00:26:26.263 }, 00:26:26.263 { 00:26:26.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:26.263 "dma_device_type": 2 00:26:26.263 } 00:26:26.263 ], 00:26:26.263 "driver_specific": {} 00:26:26.263 }' 00:26:26.521 18:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:26.521 18:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:26.521 18:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:26.521 18:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:26.521 18:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:26.521 18:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:26.521 18:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:26.521 18:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:26.521 18:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:26.779 18:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:26.779 18:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:26.779 18:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:26.779 18:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:27.038 [2024-07-25 18:53:27.431170] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:27.038 [2024-07-25 18:53:27.431375] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:27.038 [2024-07-25 18:53:27.431541] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:27.038 [2024-07-25 18:53:27.431877] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:27.038 [2024-07-25 
18:53:27.432003] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:26:27.038 18:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 141033 00:26:27.038 18:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 141033 ']' 00:26:27.038 18:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 141033 00:26:27.038 18:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:26:27.038 18:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:27.038 18:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 141033 00:26:27.038 18:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:27.038 18:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:27.038 18:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 141033' 00:26:27.038 killing process with pid 141033 00:26:27.038 18:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 141033 00:26:27.038 [2024-07-25 18:53:27.491584] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:27.038 18:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 141033 00:26:27.296 [2024-07-25 18:53:27.836978] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:28.672 18:53:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:26:28.672 ************************************ 00:26:28.672 END TEST raid_state_function_test_sb 00:26:28.672 ************************************ 00:26:28.672 00:26:28.672 real 0m31.084s 00:26:28.672 user 0m55.635s 00:26:28.672 sys 0m5.102s 00:26:28.672 18:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:28.672 18:53:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.672 18:53:29 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:26:28.672 18:53:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:28.672 18:53:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:28.672 18:53:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:28.672 ************************************ 00:26:28.672 START TEST raid_superblock_test 00:26:28.672 ************************************ 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=142112 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 142112 /var/tmp/spdk-raid.sock 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 142112 ']' 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:28.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:28.672 18:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:28.672 [2024-07-25 18:53:29.184570] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:26:28.672 [2024-07-25 18:53:29.184987] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142112 ] 00:26:28.930 [2024-07-25 18:53:29.375575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.189 [2024-07-25 18:53:29.635223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.447 [2024-07-25 18:53:29.822534] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:29.743 18:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:29.743 18:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:26:29.743 18:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:26:29.743 18:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:29.743 18:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:26:29.743 18:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:26:29.743 18:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:29.743 18:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:29.743 18:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:29.743 18:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:29.743 18:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:26:30.001 malloc1 00:26:30.001 18:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:30.259 [2024-07-25 18:53:30.614155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:30.259 [2024-07-25 18:53:30.614464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:30.259 [2024-07-25 18:53:30.614644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:26:30.259 [2024-07-25 18:53:30.614788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:30.259 [2024-07-25 18:53:30.617640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:30.259 [2024-07-25 18:53:30.617815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:30.259 pt1 00:26:30.259 18:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:30.259 18:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:30.259 18:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:26:30.259 18:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:26:30.259 18:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:30.259 18:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 
-- # base_bdevs_malloc+=($bdev_malloc) 00:26:30.259 18:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:30.259 18:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:30.259 18:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:26:30.515 malloc2 00:26:30.515 18:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:30.773 [2024-07-25 18:53:31.121379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:30.773 [2024-07-25 18:53:31.121721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:30.773 [2024-07-25 18:53:31.121825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:26:30.773 [2024-07-25 18:53:31.122167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:30.773 [2024-07-25 18:53:31.124867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:30.773 [2024-07-25 18:53:31.125035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:30.773 pt2 00:26:30.773 18:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:30.773 18:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:30.773 18:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:26:30.773 18:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:26:30.773 18:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:30.773 18:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:30.773 18:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:30.773 18:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:30.773 18:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:26:31.030 malloc3 00:26:31.030 18:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:31.288 [2024-07-25 18:53:31.642973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:31.288 [2024-07-25 18:53:31.643309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:31.288 [2024-07-25 18:53:31.643402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:31.288 [2024-07-25 18:53:31.643668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:31.288 [2024-07-25 18:53:31.646339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:31.288 [2024-07-25 18:53:31.646510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:31.288 pt3 00:26:31.288 
18:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:31.288 18:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:31.288 18:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:26:31.288 18:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:26:31.288 18:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:26:31.288 18:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:31.288 18:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:31.288 18:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:31.288 18:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:26:31.288 malloc4 00:26:31.545 18:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:31.545 [2024-07-25 18:53:32.088647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:31.545 [2024-07-25 18:53:32.088973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:31.545 [2024-07-25 18:53:32.089049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:26:31.545 [2024-07-25 18:53:32.089149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:31.545 [2024-07-25 18:53:32.091792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:31.545 [2024-07-25 18:53:32.091950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:31.545 pt4 00:26:31.545 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:31.545 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:31.546 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:26:31.803 [2024-07-25 18:53:32.320833] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:31.803 [2024-07-25 18:53:32.323150] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:31.803 [2024-07-25 18:53:32.323335] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:31.803 [2024-07-25 18:53:32.323441] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:31.803 [2024-07-25 18:53:32.323703] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:26:31.803 [2024-07-25 18:53:32.323801] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:31.803 [2024-07-25 18:53:32.323987] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:26:31.803 [2024-07-25 18:53:32.324455] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:26:31.803 [2024-07-25 18:53:32.324554] bdev_raid.c:1752:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:26:31.803 [2024-07-25 18:53:32.324844] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:31.803 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:31.803 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:31.803 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:31.803 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:31.803 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:31.803 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:31.803 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:31.803 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:31.803 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:31.803 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:31.803 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:31.803 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.061 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:32.061 "name": "raid_bdev1", 00:26:32.061 "uuid": "00ebe3d8-995d-47a0-895f-a08901817da6", 00:26:32.061 "strip_size_kb": 0, 00:26:32.061 "state": "online", 00:26:32.061 "raid_level": "raid1", 00:26:32.061 "superblock": true, 00:26:32.061 "num_base_bdevs": 4, 00:26:32.061 "num_base_bdevs_discovered": 4, 00:26:32.061 "num_base_bdevs_operational": 4, 00:26:32.061 "base_bdevs_list": [ 00:26:32.061 { 00:26:32.061 "name": "pt1", 00:26:32.061 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:32.061 "is_configured": true, 00:26:32.061 "data_offset": 2048, 00:26:32.061 "data_size": 63488 00:26:32.061 }, 00:26:32.061 { 00:26:32.061 "name": "pt2", 00:26:32.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:32.061 "is_configured": true, 00:26:32.061 "data_offset": 2048, 00:26:32.061 "data_size": 63488 00:26:32.061 }, 00:26:32.061 { 00:26:32.061 "name": "pt3", 00:26:32.061 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:32.061 "is_configured": true, 00:26:32.061 "data_offset": 2048, 00:26:32.061 "data_size": 63488 00:26:32.061 }, 00:26:32.061 { 00:26:32.061 "name": "pt4", 00:26:32.061 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:32.061 "is_configured": true, 00:26:32.061 "data_offset": 2048, 00:26:32.061 "data_size": 63488 00:26:32.061 } 00:26:32.061 ] 00:26:32.061 }' 00:26:32.061 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:32.061 18:53:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.627 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:26:32.627 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:32.627 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:32.627 
18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:32.627 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:32.627 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:32.627 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:32.627 18:53:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:32.627 [2024-07-25 18:53:33.149351] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:32.627 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:32.627 "name": "raid_bdev1", 00:26:32.627 "aliases": [ 00:26:32.627 "00ebe3d8-995d-47a0-895f-a08901817da6" 00:26:32.627 ], 00:26:32.628 "product_name": "Raid Volume", 00:26:32.628 "block_size": 512, 00:26:32.628 "num_blocks": 63488, 00:26:32.628 "uuid": "00ebe3d8-995d-47a0-895f-a08901817da6", 00:26:32.628 "assigned_rate_limits": { 00:26:32.628 "rw_ios_per_sec": 0, 00:26:32.628 "rw_mbytes_per_sec": 0, 00:26:32.628 "r_mbytes_per_sec": 0, 00:26:32.628 "w_mbytes_per_sec": 0 00:26:32.628 }, 00:26:32.628 "claimed": false, 00:26:32.628 "zoned": false, 00:26:32.628 "supported_io_types": { 00:26:32.628 "read": true, 00:26:32.628 "write": true, 00:26:32.628 "unmap": false, 00:26:32.628 "flush": false, 00:26:32.628 "reset": true, 00:26:32.628 "nvme_admin": false, 00:26:32.628 "nvme_io": false, 00:26:32.628 "nvme_io_md": false, 00:26:32.628 "write_zeroes": true, 00:26:32.628 "zcopy": false, 00:26:32.628 "get_zone_info": false, 00:26:32.628 "zone_management": false, 00:26:32.628 "zone_append": false, 00:26:32.628 "compare": false, 00:26:32.628 "compare_and_write": false, 00:26:32.628 "abort": false, 00:26:32.628 "seek_hole": false, 00:26:32.628 "seek_data": false, 00:26:32.628 "copy": false, 00:26:32.628 "nvme_iov_md": false 00:26:32.628 }, 00:26:32.628 "memory_domains": [ 00:26:32.628 { 00:26:32.628 "dma_device_id": "system", 00:26:32.628 "dma_device_type": 1 00:26:32.628 }, 00:26:32.628 { 00:26:32.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:32.628 "dma_device_type": 2 00:26:32.628 }, 00:26:32.628 { 00:26:32.628 "dma_device_id": "system", 00:26:32.628 "dma_device_type": 1 00:26:32.628 }, 00:26:32.628 { 00:26:32.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:32.628 "dma_device_type": 2 00:26:32.628 }, 00:26:32.628 { 00:26:32.628 "dma_device_id": "system", 00:26:32.628 "dma_device_type": 1 00:26:32.628 }, 00:26:32.628 { 00:26:32.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:32.628 "dma_device_type": 2 00:26:32.628 }, 00:26:32.628 { 00:26:32.628 "dma_device_id": "system", 00:26:32.628 "dma_device_type": 1 00:26:32.628 }, 00:26:32.628 { 00:26:32.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:32.628 "dma_device_type": 2 00:26:32.628 } 00:26:32.628 ], 00:26:32.628 "driver_specific": { 00:26:32.628 "raid": { 00:26:32.628 "uuid": "00ebe3d8-995d-47a0-895f-a08901817da6", 00:26:32.628 "strip_size_kb": 0, 00:26:32.628 "state": "online", 00:26:32.628 "raid_level": "raid1", 00:26:32.628 "superblock": true, 00:26:32.628 "num_base_bdevs": 4, 00:26:32.628 "num_base_bdevs_discovered": 4, 00:26:32.628 "num_base_bdevs_operational": 4, 00:26:32.628 "base_bdevs_list": [ 00:26:32.628 { 00:26:32.628 "name": "pt1", 00:26:32.628 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:32.628 "is_configured": true, 00:26:32.628 
"data_offset": 2048, 00:26:32.628 "data_size": 63488 00:26:32.628 }, 00:26:32.628 { 00:26:32.628 "name": "pt2", 00:26:32.628 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:32.628 "is_configured": true, 00:26:32.628 "data_offset": 2048, 00:26:32.628 "data_size": 63488 00:26:32.628 }, 00:26:32.628 { 00:26:32.628 "name": "pt3", 00:26:32.628 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:32.628 "is_configured": true, 00:26:32.628 "data_offset": 2048, 00:26:32.628 "data_size": 63488 00:26:32.628 }, 00:26:32.628 { 00:26:32.628 "name": "pt4", 00:26:32.628 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:32.628 "is_configured": true, 00:26:32.628 "data_offset": 2048, 00:26:32.628 "data_size": 63488 00:26:32.628 } 00:26:32.628 ] 00:26:32.628 } 00:26:32.628 } 00:26:32.628 }' 00:26:32.628 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:32.628 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:32.628 pt2 00:26:32.628 pt3 00:26:32.628 pt4' 00:26:32.628 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:32.886 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:32.886 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:32.886 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:32.886 "name": "pt1", 00:26:32.886 "aliases": [ 00:26:32.886 "00000000-0000-0000-0000-000000000001" 00:26:32.886 ], 00:26:32.886 "product_name": "passthru", 00:26:32.886 "block_size": 512, 00:26:32.886 "num_blocks": 65536, 00:26:32.886 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:32.886 "assigned_rate_limits": { 00:26:32.886 "rw_ios_per_sec": 0, 00:26:32.886 "rw_mbytes_per_sec": 0, 00:26:32.886 "r_mbytes_per_sec": 0, 00:26:32.886 "w_mbytes_per_sec": 0 00:26:32.886 }, 00:26:32.886 "claimed": true, 00:26:32.886 "claim_type": "exclusive_write", 00:26:32.886 "zoned": false, 00:26:32.886 "supported_io_types": { 00:26:32.886 "read": true, 00:26:32.886 "write": true, 00:26:32.886 "unmap": true, 00:26:32.886 "flush": true, 00:26:32.886 "reset": true, 00:26:32.886 "nvme_admin": false, 00:26:32.886 "nvme_io": false, 00:26:32.886 "nvme_io_md": false, 00:26:32.886 "write_zeroes": true, 00:26:32.886 "zcopy": true, 00:26:32.886 "get_zone_info": false, 00:26:32.886 "zone_management": false, 00:26:32.886 "zone_append": false, 00:26:32.886 "compare": false, 00:26:32.886 "compare_and_write": false, 00:26:32.886 "abort": true, 00:26:32.886 "seek_hole": false, 00:26:32.886 "seek_data": false, 00:26:32.886 "copy": true, 00:26:32.886 "nvme_iov_md": false 00:26:32.886 }, 00:26:32.886 "memory_domains": [ 00:26:32.886 { 00:26:32.886 "dma_device_id": "system", 00:26:32.886 "dma_device_type": 1 00:26:32.886 }, 00:26:32.886 { 00:26:32.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:32.886 "dma_device_type": 2 00:26:32.886 } 00:26:32.886 ], 00:26:32.886 "driver_specific": { 00:26:32.886 "passthru": { 00:26:32.886 "name": "pt1", 00:26:32.886 "base_bdev_name": "malloc1" 00:26:32.886 } 00:26:32.886 } 00:26:32.886 }' 00:26:32.886 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:32.886 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:32.886 18:53:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:32.886 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:33.143 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:33.143 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:33.143 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:33.143 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:33.143 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:33.143 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:33.143 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:33.143 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:33.143 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:33.143 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:33.143 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:33.401 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:33.401 "name": "pt2", 00:26:33.401 "aliases": [ 00:26:33.401 "00000000-0000-0000-0000-000000000002" 00:26:33.401 ], 00:26:33.401 "product_name": "passthru", 00:26:33.401 "block_size": 512, 00:26:33.401 "num_blocks": 65536, 00:26:33.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:33.401 "assigned_rate_limits": { 00:26:33.401 "rw_ios_per_sec": 0, 00:26:33.401 "rw_mbytes_per_sec": 0, 00:26:33.401 "r_mbytes_per_sec": 0, 00:26:33.401 "w_mbytes_per_sec": 0 00:26:33.401 }, 00:26:33.401 "claimed": true, 00:26:33.401 "claim_type": "exclusive_write", 00:26:33.401 "zoned": false, 00:26:33.401 "supported_io_types": { 00:26:33.401 "read": true, 00:26:33.401 "write": true, 00:26:33.401 "unmap": true, 00:26:33.401 "flush": true, 00:26:33.401 "reset": true, 00:26:33.401 "nvme_admin": false, 00:26:33.401 "nvme_io": false, 00:26:33.401 "nvme_io_md": false, 00:26:33.401 "write_zeroes": true, 00:26:33.401 "zcopy": true, 00:26:33.401 "get_zone_info": false, 00:26:33.401 "zone_management": false, 00:26:33.401 "zone_append": false, 00:26:33.401 "compare": false, 00:26:33.401 "compare_and_write": false, 00:26:33.401 "abort": true, 00:26:33.401 "seek_hole": false, 00:26:33.401 "seek_data": false, 00:26:33.401 "copy": true, 00:26:33.401 "nvme_iov_md": false 00:26:33.401 }, 00:26:33.401 "memory_domains": [ 00:26:33.401 { 00:26:33.401 "dma_device_id": "system", 00:26:33.401 "dma_device_type": 1 00:26:33.401 }, 00:26:33.401 { 00:26:33.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:33.401 "dma_device_type": 2 00:26:33.401 } 00:26:33.401 ], 00:26:33.401 "driver_specific": { 00:26:33.401 "passthru": { 00:26:33.401 "name": "pt2", 00:26:33.401 "base_bdev_name": "malloc2" 00:26:33.401 } 00:26:33.401 } 00:26:33.401 }' 00:26:33.401 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:33.401 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:33.401 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:33.401 18:53:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:33.659 18:53:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:33.659 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:33.659 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:33.659 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:33.659 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:33.659 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:33.659 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:33.659 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:33.659 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:33.659 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:26:33.659 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:33.916 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:33.916 "name": "pt3", 00:26:33.916 "aliases": [ 00:26:33.916 "00000000-0000-0000-0000-000000000003" 00:26:33.916 ], 00:26:33.916 "product_name": "passthru", 00:26:33.916 "block_size": 512, 00:26:33.916 "num_blocks": 65536, 00:26:33.916 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:33.916 "assigned_rate_limits": { 00:26:33.916 "rw_ios_per_sec": 0, 00:26:33.916 "rw_mbytes_per_sec": 0, 00:26:33.916 "r_mbytes_per_sec": 0, 00:26:33.916 "w_mbytes_per_sec": 0 00:26:33.916 }, 00:26:33.916 "claimed": true, 00:26:33.916 "claim_type": "exclusive_write", 00:26:33.916 "zoned": false, 00:26:33.916 "supported_io_types": { 00:26:33.916 "read": true, 00:26:33.916 "write": true, 00:26:33.916 "unmap": true, 00:26:33.916 "flush": true, 00:26:33.916 "reset": true, 00:26:33.916 "nvme_admin": false, 00:26:33.916 "nvme_io": false, 00:26:33.916 "nvme_io_md": false, 00:26:33.916 "write_zeroes": true, 00:26:33.916 "zcopy": true, 00:26:33.916 "get_zone_info": false, 00:26:33.916 "zone_management": false, 00:26:33.916 "zone_append": false, 00:26:33.916 "compare": false, 00:26:33.916 "compare_and_write": false, 00:26:33.916 "abort": true, 00:26:33.916 "seek_hole": false, 00:26:33.916 "seek_data": false, 00:26:33.916 "copy": true, 00:26:33.916 "nvme_iov_md": false 00:26:33.916 }, 00:26:33.916 "memory_domains": [ 00:26:33.916 { 00:26:33.916 "dma_device_id": "system", 00:26:33.916 "dma_device_type": 1 00:26:33.916 }, 00:26:33.916 { 00:26:33.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:33.916 "dma_device_type": 2 00:26:33.916 } 00:26:33.916 ], 00:26:33.916 "driver_specific": { 00:26:33.916 "passthru": { 00:26:33.916 "name": "pt3", 00:26:33.916 "base_bdev_name": "malloc3" 00:26:33.916 } 00:26:33.916 } 00:26:33.916 }' 00:26:33.916 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:33.916 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:33.916 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:33.916 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:34.174 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:34.174 
18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:34.174 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:34.174 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:34.174 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:34.174 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:34.174 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:34.174 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:34.174 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:34.174 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:26:34.174 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:34.432 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:34.432 "name": "pt4", 00:26:34.432 "aliases": [ 00:26:34.432 "00000000-0000-0000-0000-000000000004" 00:26:34.432 ], 00:26:34.432 "product_name": "passthru", 00:26:34.432 "block_size": 512, 00:26:34.432 "num_blocks": 65536, 00:26:34.432 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:34.432 "assigned_rate_limits": { 00:26:34.432 "rw_ios_per_sec": 0, 00:26:34.432 "rw_mbytes_per_sec": 0, 00:26:34.432 "r_mbytes_per_sec": 0, 00:26:34.432 "w_mbytes_per_sec": 0 00:26:34.432 }, 00:26:34.432 "claimed": true, 00:26:34.432 "claim_type": "exclusive_write", 00:26:34.432 "zoned": false, 00:26:34.432 "supported_io_types": { 00:26:34.432 "read": true, 00:26:34.432 "write": true, 00:26:34.432 "unmap": true, 00:26:34.432 "flush": true, 00:26:34.432 "reset": true, 00:26:34.432 "nvme_admin": false, 00:26:34.432 "nvme_io": false, 00:26:34.432 "nvme_io_md": false, 00:26:34.432 "write_zeroes": true, 00:26:34.432 "zcopy": true, 00:26:34.432 "get_zone_info": false, 00:26:34.432 "zone_management": false, 00:26:34.432 "zone_append": false, 00:26:34.432 "compare": false, 00:26:34.432 "compare_and_write": false, 00:26:34.432 "abort": true, 00:26:34.432 "seek_hole": false, 00:26:34.432 "seek_data": false, 00:26:34.432 "copy": true, 00:26:34.432 "nvme_iov_md": false 00:26:34.432 }, 00:26:34.432 "memory_domains": [ 00:26:34.432 { 00:26:34.432 "dma_device_id": "system", 00:26:34.432 "dma_device_type": 1 00:26:34.432 }, 00:26:34.432 { 00:26:34.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:34.432 "dma_device_type": 2 00:26:34.432 } 00:26:34.432 ], 00:26:34.432 "driver_specific": { 00:26:34.432 "passthru": { 00:26:34.432 "name": "pt4", 00:26:34.432 "base_bdev_name": "malloc4" 00:26:34.432 } 00:26:34.432 } 00:26:34.432 }' 00:26:34.432 18:53:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:34.690 18:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:34.690 18:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:34.690 18:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:34.690 18:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:34.690 18:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:34.690 18:53:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:34.690 18:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:34.690 18:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:34.690 18:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:34.690 18:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:34.947 18:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:34.947 18:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:34.947 18:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:26:35.205 [2024-07-25 18:53:35.545761] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:35.205 18:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=00ebe3d8-995d-47a0-895f-a08901817da6 00:26:35.205 18:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 00ebe3d8-995d-47a0-895f-a08901817da6 ']' 00:26:35.205 18:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:35.463 [2024-07-25 18:53:35.793577] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:35.463 [2024-07-25 18:53:35.793753] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:35.463 [2024-07-25 18:53:35.794027] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:35.463 [2024-07-25 18:53:35.794250] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:35.463 [2024-07-25 18:53:35.794329] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:26:35.463 18:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:26:35.463 18:53:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.722 18:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:26:35.722 18:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:26:35.722 18:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:35.722 18:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:35.722 18:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:35.722 18:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:35.980 18:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:35.980 18:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:36.237 18:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:36.237 18:53:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:36.494 18:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:36.494 18:53:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:36.494 18:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:26:36.494 18:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:36.494 18:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:26:36.494 18:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:36.494 18:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:36.494 18:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.494 18:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:36.494 18:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.494 18:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:36.494 18:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.494 18:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:36.494 18:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:36.494 18:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:36.752 [2024-07-25 18:53:37.261807] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:36.752 [2024-07-25 18:53:37.264249] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:36.752 [2024-07-25 18:53:37.264418] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:36.752 [2024-07-25 18:53:37.264484] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:26:36.752 [2024-07-25 18:53:37.264622] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:36.752 [2024-07-25 18:53:37.264804] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:36.752 [2024-07-25 18:53:37.264870] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:26:36.752 [2024-07-25 18:53:37.265022] 
bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:26:36.752 [2024-07-25 18:53:37.265112] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:36.752 [2024-07-25 18:53:37.265198] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state configuring 00:26:36.752 request: 00:26:36.752 { 00:26:36.752 "name": "raid_bdev1", 00:26:36.752 "raid_level": "raid1", 00:26:36.752 "base_bdevs": [ 00:26:36.752 "malloc1", 00:26:36.752 "malloc2", 00:26:36.752 "malloc3", 00:26:36.752 "malloc4" 00:26:36.752 ], 00:26:36.752 "superblock": false, 00:26:36.752 "method": "bdev_raid_create", 00:26:36.752 "req_id": 1 00:26:36.752 } 00:26:36.752 Got JSON-RPC error response 00:26:36.752 response: 00:26:36.752 { 00:26:36.752 "code": -17, 00:26:36.752 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:36.752 } 00:26:36.752 18:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:26:36.752 18:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:36.752 18:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:36.752 18:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:36.752 18:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:26:36.752 18:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.011 18:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:26:37.011 18:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:26:37.011 18:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:37.269 [2024-07-25 18:53:37.634448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:37.269 [2024-07-25 18:53:37.634697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:37.269 [2024-07-25 18:53:37.634770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:26:37.269 [2024-07-25 18:53:37.634885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:37.269 [2024-07-25 18:53:37.637671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:37.269 [2024-07-25 18:53:37.637842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:37.269 [2024-07-25 18:53:37.638113] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:37.269 [2024-07-25 18:53:37.638231] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:37.269 pt1 00:26:37.269 18:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:26:37.269 18:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:37.269 18:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:37.269 18:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:37.269 18:53:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:37.269 18:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:37.269 18:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:37.269 18:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:37.269 18:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:37.269 18:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:37.269 18:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.270 18:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:37.529 18:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:37.529 "name": "raid_bdev1", 00:26:37.529 "uuid": "00ebe3d8-995d-47a0-895f-a08901817da6", 00:26:37.529 "strip_size_kb": 0, 00:26:37.529 "state": "configuring", 00:26:37.529 "raid_level": "raid1", 00:26:37.529 "superblock": true, 00:26:37.529 "num_base_bdevs": 4, 00:26:37.529 "num_base_bdevs_discovered": 1, 00:26:37.529 "num_base_bdevs_operational": 4, 00:26:37.529 "base_bdevs_list": [ 00:26:37.529 { 00:26:37.529 "name": "pt1", 00:26:37.529 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:37.529 "is_configured": true, 00:26:37.529 "data_offset": 2048, 00:26:37.529 "data_size": 63488 00:26:37.529 }, 00:26:37.529 { 00:26:37.529 "name": null, 00:26:37.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:37.529 "is_configured": false, 00:26:37.529 "data_offset": 2048, 00:26:37.529 "data_size": 63488 00:26:37.529 }, 00:26:37.529 { 00:26:37.529 "name": null, 00:26:37.529 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:37.529 "is_configured": false, 00:26:37.529 "data_offset": 2048, 00:26:37.529 "data_size": 63488 00:26:37.529 }, 00:26:37.529 { 00:26:37.529 "name": null, 00:26:37.529 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:37.529 "is_configured": false, 00:26:37.529 "data_offset": 2048, 00:26:37.529 "data_size": 63488 00:26:37.529 } 00:26:37.529 ] 00:26:37.529 }' 00:26:37.529 18:53:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:37.529 18:53:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.096 18:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:26:38.096 18:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:38.096 [2024-07-25 18:53:38.618667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:38.096 [2024-07-25 18:53:38.618772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:38.096 [2024-07-25 18:53:38.618840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:38.096 [2024-07-25 18:53:38.618883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:38.096 [2024-07-25 18:53:38.619443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:38.096 [2024-07-25 18:53:38.619472] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:26:38.096 [2024-07-25 18:53:38.619594] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:38.096 [2024-07-25 18:53:38.619618] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:38.096 pt2 00:26:38.096 18:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:38.354 [2024-07-25 18:53:38.882756] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:38.354 18:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:26:38.354 18:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:38.354 18:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:38.354 18:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:38.354 18:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:38.354 18:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:38.354 18:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:38.354 18:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:38.354 18:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:38.354 18:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:38.354 18:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:38.354 18:53:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.613 18:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:38.613 "name": "raid_bdev1", 00:26:38.613 "uuid": "00ebe3d8-995d-47a0-895f-a08901817da6", 00:26:38.613 "strip_size_kb": 0, 00:26:38.613 "state": "configuring", 00:26:38.613 "raid_level": "raid1", 00:26:38.613 "superblock": true, 00:26:38.613 "num_base_bdevs": 4, 00:26:38.613 "num_base_bdevs_discovered": 1, 00:26:38.613 "num_base_bdevs_operational": 4, 00:26:38.613 "base_bdevs_list": [ 00:26:38.613 { 00:26:38.613 "name": "pt1", 00:26:38.613 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:38.613 "is_configured": true, 00:26:38.613 "data_offset": 2048, 00:26:38.613 "data_size": 63488 00:26:38.613 }, 00:26:38.613 { 00:26:38.613 "name": null, 00:26:38.613 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:38.613 "is_configured": false, 00:26:38.613 "data_offset": 2048, 00:26:38.613 "data_size": 63488 00:26:38.613 }, 00:26:38.613 { 00:26:38.613 "name": null, 00:26:38.613 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:38.613 "is_configured": false, 00:26:38.613 "data_offset": 2048, 00:26:38.613 "data_size": 63488 00:26:38.613 }, 00:26:38.613 { 00:26:38.613 "name": null, 00:26:38.613 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:38.613 "is_configured": false, 00:26:38.613 "data_offset": 2048, 00:26:38.613 "data_size": 63488 00:26:38.613 } 00:26:38.613 ] 00:26:38.613 }' 00:26:38.613 18:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:38.613 18:53:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.181 18:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:26:39.181 18:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:26:39.181 18:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:39.440 [2024-07-25 18:53:39.774893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:39.440 [2024-07-25 18:53:39.774986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:39.440 [2024-07-25 18:53:39.775028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:26:39.440 [2024-07-25 18:53:39.775078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:39.440 [2024-07-25 18:53:39.775585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:39.440 [2024-07-25 18:53:39.775619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:39.440 [2024-07-25 18:53:39.775732] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:39.440 [2024-07-25 18:53:39.775756] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:39.440 pt2 00:26:39.440 18:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:26:39.440 18:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:26:39.440 18:53:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:39.697 [2024-07-25 18:53:40.042973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:39.697 [2024-07-25 18:53:40.043091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:39.697 [2024-07-25 18:53:40.043124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:26:39.697 [2024-07-25 18:53:40.043178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:39.697 [2024-07-25 18:53:40.043674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:39.697 [2024-07-25 18:53:40.043717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:39.697 [2024-07-25 18:53:40.043823] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:39.697 [2024-07-25 18:53:40.043845] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:39.697 pt3 00:26:39.697 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:26:39.697 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:26:39.697 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:39.955 [2024-07-25 18:53:40.318990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:39.955 [2024-07-25 18:53:40.319066] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:39.955 [2024-07-25 18:53:40.319115] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:26:39.955 [2024-07-25 18:53:40.319161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:39.955 [2024-07-25 18:53:40.319669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:39.955 [2024-07-25 18:53:40.319712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:39.955 [2024-07-25 18:53:40.319817] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:39.955 [2024-07-25 18:53:40.319841] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:39.955 [2024-07-25 18:53:40.319987] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:26:39.955 [2024-07-25 18:53:40.319996] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:39.955 [2024-07-25 18:53:40.320085] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:39.955 [2024-07-25 18:53:40.320416] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:26:39.955 [2024-07-25 18:53:40.320436] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:26:39.955 [2024-07-25 18:53:40.320559] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:39.955 pt4 00:26:39.955 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:26:39.955 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:26:39.955 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:39.955 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:39.955 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:39.955 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:39.955 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:39.955 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:39.955 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:39.955 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:39.955 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:39.955 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:39.955 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:39.955 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:39.955 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:39.955 "name": "raid_bdev1", 00:26:39.955 "uuid": "00ebe3d8-995d-47a0-895f-a08901817da6", 00:26:39.955 "strip_size_kb": 0, 00:26:39.955 "state": "online", 00:26:39.955 "raid_level": "raid1", 00:26:39.956 "superblock": true, 00:26:39.956 
"num_base_bdevs": 4, 00:26:39.956 "num_base_bdevs_discovered": 4, 00:26:39.956 "num_base_bdevs_operational": 4, 00:26:39.956 "base_bdevs_list": [ 00:26:39.956 { 00:26:39.956 "name": "pt1", 00:26:39.956 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:39.956 "is_configured": true, 00:26:39.956 "data_offset": 2048, 00:26:39.956 "data_size": 63488 00:26:39.956 }, 00:26:39.956 { 00:26:39.956 "name": "pt2", 00:26:39.956 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:39.956 "is_configured": true, 00:26:39.956 "data_offset": 2048, 00:26:39.956 "data_size": 63488 00:26:39.956 }, 00:26:39.956 { 00:26:39.956 "name": "pt3", 00:26:39.956 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:39.956 "is_configured": true, 00:26:39.956 "data_offset": 2048, 00:26:39.956 "data_size": 63488 00:26:39.956 }, 00:26:39.956 { 00:26:39.956 "name": "pt4", 00:26:39.956 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:39.956 "is_configured": true, 00:26:39.956 "data_offset": 2048, 00:26:39.956 "data_size": 63488 00:26:39.956 } 00:26:39.956 ] 00:26:39.956 }' 00:26:39.956 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:39.956 18:53:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.523 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:26:40.523 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:40.523 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:40.523 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:40.523 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:40.523 18:53:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:40.523 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:40.524 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:40.782 [2024-07-25 18:53:41.243483] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:40.782 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:40.782 "name": "raid_bdev1", 00:26:40.782 "aliases": [ 00:26:40.782 "00ebe3d8-995d-47a0-895f-a08901817da6" 00:26:40.782 ], 00:26:40.782 "product_name": "Raid Volume", 00:26:40.782 "block_size": 512, 00:26:40.782 "num_blocks": 63488, 00:26:40.782 "uuid": "00ebe3d8-995d-47a0-895f-a08901817da6", 00:26:40.782 "assigned_rate_limits": { 00:26:40.782 "rw_ios_per_sec": 0, 00:26:40.782 "rw_mbytes_per_sec": 0, 00:26:40.782 "r_mbytes_per_sec": 0, 00:26:40.782 "w_mbytes_per_sec": 0 00:26:40.782 }, 00:26:40.782 "claimed": false, 00:26:40.782 "zoned": false, 00:26:40.782 "supported_io_types": { 00:26:40.782 "read": true, 00:26:40.782 "write": true, 00:26:40.782 "unmap": false, 00:26:40.782 "flush": false, 00:26:40.782 "reset": true, 00:26:40.782 "nvme_admin": false, 00:26:40.782 "nvme_io": false, 00:26:40.782 "nvme_io_md": false, 00:26:40.782 "write_zeroes": true, 00:26:40.782 "zcopy": false, 00:26:40.782 "get_zone_info": false, 00:26:40.782 "zone_management": false, 00:26:40.782 "zone_append": false, 00:26:40.782 "compare": false, 00:26:40.782 "compare_and_write": false, 00:26:40.782 "abort": false, 00:26:40.782 "seek_hole": false, 
00:26:40.782 "seek_data": false, 00:26:40.782 "copy": false, 00:26:40.782 "nvme_iov_md": false 00:26:40.782 }, 00:26:40.782 "memory_domains": [ 00:26:40.782 { 00:26:40.782 "dma_device_id": "system", 00:26:40.782 "dma_device_type": 1 00:26:40.782 }, 00:26:40.782 { 00:26:40.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:40.782 "dma_device_type": 2 00:26:40.782 }, 00:26:40.782 { 00:26:40.782 "dma_device_id": "system", 00:26:40.782 "dma_device_type": 1 00:26:40.782 }, 00:26:40.782 { 00:26:40.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:40.782 "dma_device_type": 2 00:26:40.782 }, 00:26:40.782 { 00:26:40.782 "dma_device_id": "system", 00:26:40.782 "dma_device_type": 1 00:26:40.782 }, 00:26:40.782 { 00:26:40.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:40.782 "dma_device_type": 2 00:26:40.782 }, 00:26:40.782 { 00:26:40.782 "dma_device_id": "system", 00:26:40.782 "dma_device_type": 1 00:26:40.782 }, 00:26:40.782 { 00:26:40.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:40.782 "dma_device_type": 2 00:26:40.782 } 00:26:40.782 ], 00:26:40.782 "driver_specific": { 00:26:40.783 "raid": { 00:26:40.783 "uuid": "00ebe3d8-995d-47a0-895f-a08901817da6", 00:26:40.783 "strip_size_kb": 0, 00:26:40.783 "state": "online", 00:26:40.783 "raid_level": "raid1", 00:26:40.783 "superblock": true, 00:26:40.783 "num_base_bdevs": 4, 00:26:40.783 "num_base_bdevs_discovered": 4, 00:26:40.783 "num_base_bdevs_operational": 4, 00:26:40.783 "base_bdevs_list": [ 00:26:40.783 { 00:26:40.783 "name": "pt1", 00:26:40.783 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:40.783 "is_configured": true, 00:26:40.783 "data_offset": 2048, 00:26:40.783 "data_size": 63488 00:26:40.783 }, 00:26:40.783 { 00:26:40.783 "name": "pt2", 00:26:40.783 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:40.783 "is_configured": true, 00:26:40.783 "data_offset": 2048, 00:26:40.783 "data_size": 63488 00:26:40.783 }, 00:26:40.783 { 00:26:40.783 "name": "pt3", 00:26:40.783 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:40.783 "is_configured": true, 00:26:40.783 "data_offset": 2048, 00:26:40.783 "data_size": 63488 00:26:40.783 }, 00:26:40.783 { 00:26:40.783 "name": "pt4", 00:26:40.783 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:40.783 "is_configured": true, 00:26:40.783 "data_offset": 2048, 00:26:40.783 "data_size": 63488 00:26:40.783 } 00:26:40.783 ] 00:26:40.783 } 00:26:40.783 } 00:26:40.783 }' 00:26:40.783 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:40.783 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:40.783 pt2 00:26:40.783 pt3 00:26:40.783 pt4' 00:26:40.783 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:40.783 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:40.783 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:41.041 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:41.041 "name": "pt1", 00:26:41.042 "aliases": [ 00:26:41.042 "00000000-0000-0000-0000-000000000001" 00:26:41.042 ], 00:26:41.042 "product_name": "passthru", 00:26:41.042 "block_size": 512, 00:26:41.042 "num_blocks": 65536, 00:26:41.042 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:41.042 "assigned_rate_limits": { 
00:26:41.042 "rw_ios_per_sec": 0, 00:26:41.042 "rw_mbytes_per_sec": 0, 00:26:41.042 "r_mbytes_per_sec": 0, 00:26:41.042 "w_mbytes_per_sec": 0 00:26:41.042 }, 00:26:41.042 "claimed": true, 00:26:41.042 "claim_type": "exclusive_write", 00:26:41.042 "zoned": false, 00:26:41.042 "supported_io_types": { 00:26:41.042 "read": true, 00:26:41.042 "write": true, 00:26:41.042 "unmap": true, 00:26:41.042 "flush": true, 00:26:41.042 "reset": true, 00:26:41.042 "nvme_admin": false, 00:26:41.042 "nvme_io": false, 00:26:41.042 "nvme_io_md": false, 00:26:41.042 "write_zeroes": true, 00:26:41.042 "zcopy": true, 00:26:41.042 "get_zone_info": false, 00:26:41.042 "zone_management": false, 00:26:41.042 "zone_append": false, 00:26:41.042 "compare": false, 00:26:41.042 "compare_and_write": false, 00:26:41.042 "abort": true, 00:26:41.042 "seek_hole": false, 00:26:41.042 "seek_data": false, 00:26:41.042 "copy": true, 00:26:41.042 "nvme_iov_md": false 00:26:41.042 }, 00:26:41.042 "memory_domains": [ 00:26:41.042 { 00:26:41.042 "dma_device_id": "system", 00:26:41.042 "dma_device_type": 1 00:26:41.042 }, 00:26:41.042 { 00:26:41.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:41.042 "dma_device_type": 2 00:26:41.042 } 00:26:41.042 ], 00:26:41.042 "driver_specific": { 00:26:41.042 "passthru": { 00:26:41.042 "name": "pt1", 00:26:41.042 "base_bdev_name": "malloc1" 00:26:41.042 } 00:26:41.042 } 00:26:41.042 }' 00:26:41.042 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:41.042 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:41.301 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:41.301 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:41.301 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:41.301 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:41.301 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:41.301 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:41.301 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:41.301 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:41.301 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:41.560 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:41.560 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:41.560 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:41.560 18:53:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:41.818 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:41.818 "name": "pt2", 00:26:41.818 "aliases": [ 00:26:41.818 "00000000-0000-0000-0000-000000000002" 00:26:41.818 ], 00:26:41.818 "product_name": "passthru", 00:26:41.818 "block_size": 512, 00:26:41.818 "num_blocks": 65536, 00:26:41.818 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:41.818 "assigned_rate_limits": { 00:26:41.818 "rw_ios_per_sec": 0, 00:26:41.818 "rw_mbytes_per_sec": 0, 00:26:41.818 "r_mbytes_per_sec": 0, 00:26:41.818 "w_mbytes_per_sec": 0 00:26:41.818 
}, 00:26:41.818 "claimed": true, 00:26:41.818 "claim_type": "exclusive_write", 00:26:41.818 "zoned": false, 00:26:41.818 "supported_io_types": { 00:26:41.818 "read": true, 00:26:41.818 "write": true, 00:26:41.818 "unmap": true, 00:26:41.818 "flush": true, 00:26:41.818 "reset": true, 00:26:41.818 "nvme_admin": false, 00:26:41.818 "nvme_io": false, 00:26:41.818 "nvme_io_md": false, 00:26:41.818 "write_zeroes": true, 00:26:41.818 "zcopy": true, 00:26:41.818 "get_zone_info": false, 00:26:41.818 "zone_management": false, 00:26:41.818 "zone_append": false, 00:26:41.818 "compare": false, 00:26:41.818 "compare_and_write": false, 00:26:41.818 "abort": true, 00:26:41.818 "seek_hole": false, 00:26:41.818 "seek_data": false, 00:26:41.818 "copy": true, 00:26:41.818 "nvme_iov_md": false 00:26:41.818 }, 00:26:41.818 "memory_domains": [ 00:26:41.818 { 00:26:41.818 "dma_device_id": "system", 00:26:41.818 "dma_device_type": 1 00:26:41.818 }, 00:26:41.818 { 00:26:41.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:41.818 "dma_device_type": 2 00:26:41.818 } 00:26:41.818 ], 00:26:41.818 "driver_specific": { 00:26:41.818 "passthru": { 00:26:41.818 "name": "pt2", 00:26:41.818 "base_bdev_name": "malloc2" 00:26:41.818 } 00:26:41.818 } 00:26:41.818 }' 00:26:41.818 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:41.818 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:41.818 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:41.818 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:41.818 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:41.818 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:41.818 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:41.818 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:41.818 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:41.818 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:42.076 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:42.076 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:42.076 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:42.076 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:42.076 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:26:42.334 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:42.334 "name": "pt3", 00:26:42.335 "aliases": [ 00:26:42.335 "00000000-0000-0000-0000-000000000003" 00:26:42.335 ], 00:26:42.335 "product_name": "passthru", 00:26:42.335 "block_size": 512, 00:26:42.335 "num_blocks": 65536, 00:26:42.335 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:42.335 "assigned_rate_limits": { 00:26:42.335 "rw_ios_per_sec": 0, 00:26:42.335 "rw_mbytes_per_sec": 0, 00:26:42.335 "r_mbytes_per_sec": 0, 00:26:42.335 "w_mbytes_per_sec": 0 00:26:42.335 }, 00:26:42.335 "claimed": true, 00:26:42.335 "claim_type": "exclusive_write", 00:26:42.335 "zoned": false, 00:26:42.335 "supported_io_types": { 
00:26:42.335 "read": true, 00:26:42.335 "write": true, 00:26:42.335 "unmap": true, 00:26:42.335 "flush": true, 00:26:42.335 "reset": true, 00:26:42.335 "nvme_admin": false, 00:26:42.335 "nvme_io": false, 00:26:42.335 "nvme_io_md": false, 00:26:42.335 "write_zeroes": true, 00:26:42.335 "zcopy": true, 00:26:42.335 "get_zone_info": false, 00:26:42.335 "zone_management": false, 00:26:42.335 "zone_append": false, 00:26:42.335 "compare": false, 00:26:42.335 "compare_and_write": false, 00:26:42.335 "abort": true, 00:26:42.335 "seek_hole": false, 00:26:42.335 "seek_data": false, 00:26:42.335 "copy": true, 00:26:42.335 "nvme_iov_md": false 00:26:42.335 }, 00:26:42.335 "memory_domains": [ 00:26:42.335 { 00:26:42.335 "dma_device_id": "system", 00:26:42.335 "dma_device_type": 1 00:26:42.335 }, 00:26:42.335 { 00:26:42.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:42.335 "dma_device_type": 2 00:26:42.335 } 00:26:42.335 ], 00:26:42.335 "driver_specific": { 00:26:42.335 "passthru": { 00:26:42.335 "name": "pt3", 00:26:42.335 "base_bdev_name": "malloc3" 00:26:42.335 } 00:26:42.335 } 00:26:42.335 }' 00:26:42.335 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:42.335 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:42.335 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:42.335 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:42.335 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:42.594 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:42.594 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:42.594 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:42.594 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:42.594 18:53:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:42.594 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:42.594 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:42.594 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:42.594 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:26:42.594 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:42.852 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:42.852 "name": "pt4", 00:26:42.852 "aliases": [ 00:26:42.852 "00000000-0000-0000-0000-000000000004" 00:26:42.852 ], 00:26:42.852 "product_name": "passthru", 00:26:42.852 "block_size": 512, 00:26:42.852 "num_blocks": 65536, 00:26:42.852 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:42.852 "assigned_rate_limits": { 00:26:42.852 "rw_ios_per_sec": 0, 00:26:42.852 "rw_mbytes_per_sec": 0, 00:26:42.852 "r_mbytes_per_sec": 0, 00:26:42.852 "w_mbytes_per_sec": 0 00:26:42.852 }, 00:26:42.852 "claimed": true, 00:26:42.852 "claim_type": "exclusive_write", 00:26:42.852 "zoned": false, 00:26:42.852 "supported_io_types": { 00:26:42.852 "read": true, 00:26:42.852 "write": true, 00:26:42.852 "unmap": true, 00:26:42.852 "flush": true, 00:26:42.852 "reset": true, 00:26:42.852 
"nvme_admin": false, 00:26:42.852 "nvme_io": false, 00:26:42.852 "nvme_io_md": false, 00:26:42.852 "write_zeroes": true, 00:26:42.852 "zcopy": true, 00:26:42.852 "get_zone_info": false, 00:26:42.852 "zone_management": false, 00:26:42.852 "zone_append": false, 00:26:42.852 "compare": false, 00:26:42.852 "compare_and_write": false, 00:26:42.852 "abort": true, 00:26:42.852 "seek_hole": false, 00:26:42.852 "seek_data": false, 00:26:42.852 "copy": true, 00:26:42.852 "nvme_iov_md": false 00:26:42.852 }, 00:26:42.852 "memory_domains": [ 00:26:42.852 { 00:26:42.852 "dma_device_id": "system", 00:26:42.852 "dma_device_type": 1 00:26:42.852 }, 00:26:42.852 { 00:26:42.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:42.852 "dma_device_type": 2 00:26:42.852 } 00:26:42.852 ], 00:26:42.852 "driver_specific": { 00:26:42.852 "passthru": { 00:26:42.852 "name": "pt4", 00:26:42.852 "base_bdev_name": "malloc4" 00:26:42.852 } 00:26:42.852 } 00:26:42.852 }' 00:26:42.852 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:42.852 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:43.110 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:43.110 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:43.110 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:43.110 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:43.110 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:43.110 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:43.110 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:43.110 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:43.110 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:43.368 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:43.368 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:43.368 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:26:43.626 [2024-07-25 18:53:43.976038] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:43.626 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 00ebe3d8-995d-47a0-895f-a08901817da6 '!=' 00ebe3d8-995d-47a0-895f-a08901817da6 ']' 00:26:43.626 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:26:43.626 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:43.627 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:26:43.627 18:53:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:43.885 [2024-07-25 18:53:44.243865] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:43.885 18:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:43.885 18:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:43.885 
18:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:43.885 18:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:43.885 18:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:43.885 18:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:43.885 18:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:43.885 18:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:43.885 18:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:43.885 18:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:43.885 18:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.885 18:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:44.144 18:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:44.144 "name": "raid_bdev1", 00:26:44.144 "uuid": "00ebe3d8-995d-47a0-895f-a08901817da6", 00:26:44.144 "strip_size_kb": 0, 00:26:44.144 "state": "online", 00:26:44.144 "raid_level": "raid1", 00:26:44.144 "superblock": true, 00:26:44.144 "num_base_bdevs": 4, 00:26:44.144 "num_base_bdevs_discovered": 3, 00:26:44.144 "num_base_bdevs_operational": 3, 00:26:44.144 "base_bdevs_list": [ 00:26:44.144 { 00:26:44.144 "name": null, 00:26:44.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:44.144 "is_configured": false, 00:26:44.144 "data_offset": 2048, 00:26:44.144 "data_size": 63488 00:26:44.144 }, 00:26:44.144 { 00:26:44.144 "name": "pt2", 00:26:44.144 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:44.144 "is_configured": true, 00:26:44.144 "data_offset": 2048, 00:26:44.144 "data_size": 63488 00:26:44.144 }, 00:26:44.144 { 00:26:44.144 "name": "pt3", 00:26:44.144 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:44.144 "is_configured": true, 00:26:44.144 "data_offset": 2048, 00:26:44.144 "data_size": 63488 00:26:44.144 }, 00:26:44.144 { 00:26:44.144 "name": "pt4", 00:26:44.144 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:44.144 "is_configured": true, 00:26:44.144 "data_offset": 2048, 00:26:44.144 "data_size": 63488 00:26:44.144 } 00:26:44.144 ] 00:26:44.144 }' 00:26:44.144 18:53:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:44.144 18:53:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:44.712 18:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:44.712 [2024-07-25 18:53:45.276021] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:44.712 [2024-07-25 18:53:45.276061] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:44.712 [2024-07-25 18:53:45.276162] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:44.712 [2024-07-25 18:53:45.276244] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:44.712 [2024-07-25 18:53:45.276253] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:26:44.970 18:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:26:44.970 18:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:45.229 18:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:26:45.229 18:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:26:45.229 18:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:26:45.229 18:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:26:45.229 18:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:45.487 18:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:45.487 18:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:26:45.487 18:53:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:45.745 18:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:45.745 18:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:26:45.745 18:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:46.004 18:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:46.004 18:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:26:46.004 18:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:26:46.004 18:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:26:46.004 18:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:46.004 [2024-07-25 18:53:46.492218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:46.004 [2024-07-25 18:53:46.492315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:46.004 [2024-07-25 18:53:46.492349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:26:46.004 [2024-07-25 18:53:46.492392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:46.004 [2024-07-25 18:53:46.495107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:46.004 [2024-07-25 18:53:46.495152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:46.004 [2024-07-25 18:53:46.495288] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:46.004 [2024-07-25 18:53:46.495343] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:46.004 pt2 00:26:46.004 18:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:46.004 18:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:26:46.004 18:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:46.004 18:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:46.004 18:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:46.004 18:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:46.004 18:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:46.004 18:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:46.004 18:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:46.004 18:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:46.004 18:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.004 18:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.262 18:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:46.262 "name": "raid_bdev1", 00:26:46.262 "uuid": "00ebe3d8-995d-47a0-895f-a08901817da6", 00:26:46.262 "strip_size_kb": 0, 00:26:46.262 "state": "configuring", 00:26:46.262 "raid_level": "raid1", 00:26:46.262 "superblock": true, 00:26:46.262 "num_base_bdevs": 4, 00:26:46.262 "num_base_bdevs_discovered": 1, 00:26:46.262 "num_base_bdevs_operational": 3, 00:26:46.262 "base_bdevs_list": [ 00:26:46.262 { 00:26:46.262 "name": null, 00:26:46.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:46.262 "is_configured": false, 00:26:46.262 "data_offset": 2048, 00:26:46.262 "data_size": 63488 00:26:46.262 }, 00:26:46.262 { 00:26:46.262 "name": "pt2", 00:26:46.262 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:46.262 "is_configured": true, 00:26:46.262 "data_offset": 2048, 00:26:46.262 "data_size": 63488 00:26:46.262 }, 00:26:46.262 { 00:26:46.262 "name": null, 00:26:46.262 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:46.262 "is_configured": false, 00:26:46.262 "data_offset": 2048, 00:26:46.262 "data_size": 63488 00:26:46.262 }, 00:26:46.262 { 00:26:46.262 "name": null, 00:26:46.262 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:46.262 "is_configured": false, 00:26:46.262 "data_offset": 2048, 00:26:46.262 "data_size": 63488 00:26:46.262 } 00:26:46.262 ] 00:26:46.262 }' 00:26:46.262 18:53:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:46.262 18:53:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.828 18:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:26:46.828 18:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:26:46.828 18:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:47.088 [2024-07-25 18:53:47.448247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:47.088 [2024-07-25 18:53:47.448363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:47.088 [2024-07-25 18:53:47.448415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x61600000b480 00:26:47.088 [2024-07-25 18:53:47.448458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:47.088 [2024-07-25 18:53:47.449011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:47.088 [2024-07-25 18:53:47.449053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:47.089 [2024-07-25 18:53:47.449169] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:47.089 [2024-07-25 18:53:47.449194] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:47.089 pt3 00:26:47.089 18:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:47.089 18:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:47.089 18:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:47.089 18:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:47.089 18:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:47.089 18:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:47.089 18:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:47.089 18:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:47.089 18:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:47.089 18:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:47.089 18:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.089 18:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:47.347 18:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:47.348 "name": "raid_bdev1", 00:26:47.348 "uuid": "00ebe3d8-995d-47a0-895f-a08901817da6", 00:26:47.348 "strip_size_kb": 0, 00:26:47.348 "state": "configuring", 00:26:47.348 "raid_level": "raid1", 00:26:47.348 "superblock": true, 00:26:47.348 "num_base_bdevs": 4, 00:26:47.348 "num_base_bdevs_discovered": 2, 00:26:47.348 "num_base_bdevs_operational": 3, 00:26:47.348 "base_bdevs_list": [ 00:26:47.348 { 00:26:47.348 "name": null, 00:26:47.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.348 "is_configured": false, 00:26:47.348 "data_offset": 2048, 00:26:47.348 "data_size": 63488 00:26:47.348 }, 00:26:47.348 { 00:26:47.348 "name": "pt2", 00:26:47.348 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:47.348 "is_configured": true, 00:26:47.348 "data_offset": 2048, 00:26:47.348 "data_size": 63488 00:26:47.348 }, 00:26:47.348 { 00:26:47.348 "name": "pt3", 00:26:47.348 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:47.348 "is_configured": true, 00:26:47.348 "data_offset": 2048, 00:26:47.348 "data_size": 63488 00:26:47.348 }, 00:26:47.348 { 00:26:47.348 "name": null, 00:26:47.348 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:47.348 "is_configured": false, 00:26:47.348 "data_offset": 2048, 00:26:47.348 "data_size": 63488 00:26:47.348 } 00:26:47.348 ] 00:26:47.348 }' 00:26:47.348 18:53:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 
-- # xtrace_disable 00:26:47.348 18:53:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.914 18:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:26:47.914 18:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:26:47.914 18:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:26:47.914 18:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:47.914 [2024-07-25 18:53:48.480465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:47.914 [2024-07-25 18:53:48.480590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:47.914 [2024-07-25 18:53:48.480633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:26:47.914 [2024-07-25 18:53:48.480657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:47.914 [2024-07-25 18:53:48.481213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:47.914 [2024-07-25 18:53:48.481254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:47.914 [2024-07-25 18:53:48.481371] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:47.914 [2024-07-25 18:53:48.481396] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:47.914 [2024-07-25 18:53:48.481538] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:26:47.914 [2024-07-25 18:53:48.481554] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:47.915 [2024-07-25 18:53:48.481649] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:26:47.915 [2024-07-25 18:53:48.481986] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:26:47.915 [2024-07-25 18:53:48.482005] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013480 00:26:47.915 [2024-07-25 18:53:48.482151] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:47.915 pt4 00:26:48.176 18:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:48.176 18:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:48.176 18:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:48.176 18:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:48.176 18:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:48.176 18:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:48.176 18:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:48.176 18:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:48.176 18:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:48.176 18:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:48.176 18:53:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.176 18:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:48.176 18:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:48.176 "name": "raid_bdev1", 00:26:48.176 "uuid": "00ebe3d8-995d-47a0-895f-a08901817da6", 00:26:48.176 "strip_size_kb": 0, 00:26:48.176 "state": "online", 00:26:48.176 "raid_level": "raid1", 00:26:48.176 "superblock": true, 00:26:48.176 "num_base_bdevs": 4, 00:26:48.176 "num_base_bdevs_discovered": 3, 00:26:48.176 "num_base_bdevs_operational": 3, 00:26:48.176 "base_bdevs_list": [ 00:26:48.176 { 00:26:48.176 "name": null, 00:26:48.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:48.177 "is_configured": false, 00:26:48.177 "data_offset": 2048, 00:26:48.177 "data_size": 63488 00:26:48.177 }, 00:26:48.177 { 00:26:48.177 "name": "pt2", 00:26:48.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:48.177 "is_configured": true, 00:26:48.177 "data_offset": 2048, 00:26:48.177 "data_size": 63488 00:26:48.177 }, 00:26:48.177 { 00:26:48.177 "name": "pt3", 00:26:48.177 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:48.177 "is_configured": true, 00:26:48.177 "data_offset": 2048, 00:26:48.177 "data_size": 63488 00:26:48.177 }, 00:26:48.177 { 00:26:48.177 "name": "pt4", 00:26:48.177 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:48.177 "is_configured": true, 00:26:48.177 "data_offset": 2048, 00:26:48.177 "data_size": 63488 00:26:48.177 } 00:26:48.177 ] 00:26:48.177 }' 00:26:48.177 18:53:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:48.177 18:53:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:48.794 18:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:49.051 [2024-07-25 18:53:49.472623] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:49.052 [2024-07-25 18:53:49.472664] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:49.052 [2024-07-25 18:53:49.472745] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:49.052 [2024-07-25 18:53:49.472825] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:49.052 [2024-07-25 18:53:49.472834] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name raid_bdev1, state offline 00:26:49.052 18:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.052 18:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:26:49.310 18:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:26:49.310 18:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:26:49.310 18:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 4 -gt 2 ']' 00:26:49.310 18:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # i=3 00:26:49.310 18:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt4 00:26:49.310 18:53:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:49.569 [2024-07-25 18:53:50.113888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:49.569 [2024-07-25 18:53:50.113978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:49.569 [2024-07-25 18:53:50.114035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:26:49.569 [2024-07-25 18:53:50.114084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:49.569 [2024-07-25 18:53:50.116792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:49.569 [2024-07-25 18:53:50.116848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:49.569 [2024-07-25 18:53:50.116959] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:49.569 [2024-07-25 18:53:50.117004] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:49.569 [2024-07-25 18:53:50.117136] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:49.569 [2024-07-25 18:53:50.117146] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:49.569 [2024-07-25 18:53:50.117163] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013800 name raid_bdev1, state configuring 00:26:49.569 [2024-07-25 18:53:50.117235] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:49.569 [2024-07-25 18:53:50.117348] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:49.569 pt1 00:26:49.569 18:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 4 -gt 2 ']' 00:26:49.569 18:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:49.569 18:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:49.569 18:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:49.569 18:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:49.569 18:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:49.569 18:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:49.569 18:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:49.569 18:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:49.569 18:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:49.569 18:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:49.569 18:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.569 18:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.828 18:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:26:49.828 "name": "raid_bdev1", 00:26:49.828 "uuid": "00ebe3d8-995d-47a0-895f-a08901817da6", 00:26:49.828 "strip_size_kb": 0, 00:26:49.828 "state": "configuring", 00:26:49.828 "raid_level": "raid1", 00:26:49.828 "superblock": true, 00:26:49.828 "num_base_bdevs": 4, 00:26:49.828 "num_base_bdevs_discovered": 2, 00:26:49.828 "num_base_bdevs_operational": 3, 00:26:49.828 "base_bdevs_list": [ 00:26:49.828 { 00:26:49.828 "name": null, 00:26:49.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.828 "is_configured": false, 00:26:49.828 "data_offset": 2048, 00:26:49.828 "data_size": 63488 00:26:49.828 }, 00:26:49.828 { 00:26:49.828 "name": "pt2", 00:26:49.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:49.828 "is_configured": true, 00:26:49.828 "data_offset": 2048, 00:26:49.828 "data_size": 63488 00:26:49.828 }, 00:26:49.828 { 00:26:49.828 "name": "pt3", 00:26:49.828 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:49.828 "is_configured": true, 00:26:49.828 "data_offset": 2048, 00:26:49.828 "data_size": 63488 00:26:49.828 }, 00:26:49.828 { 00:26:49.828 "name": null, 00:26:49.828 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:49.828 "is_configured": false, 00:26:49.828 "data_offset": 2048, 00:26:49.828 "data_size": 63488 00:26:49.828 } 00:26:49.828 ] 00:26:49.828 }' 00:26:49.828 18:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:50.088 18:53:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.346 18:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:26:50.346 18:53:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:50.604 18:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:26:50.604 18:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:50.864 [2024-07-25 18:53:51.230159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:50.864 [2024-07-25 18:53:51.230281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:50.864 [2024-07-25 18:53:51.230319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:26:50.864 [2024-07-25 18:53:51.230371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:50.864 [2024-07-25 18:53:51.230936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:50.864 [2024-07-25 18:53:51.230981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:50.864 [2024-07-25 18:53:51.231097] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:50.864 [2024-07-25 18:53:51.231123] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:50.864 [2024-07-25 18:53:51.231258] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013b80 00:26:50.864 [2024-07-25 18:53:51.231274] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:50.864 [2024-07-25 18:53:51.231367] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:26:50.864 [2024-07-25 18:53:51.231695] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013b80 00:26:50.864 [2024-07-25 18:53:51.231713] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013b80 00:26:50.864 [2024-07-25 18:53:51.231861] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:50.864 pt4 00:26:50.864 18:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:50.864 18:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:50.864 18:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:50.864 18:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:50.864 18:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:50.864 18:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:50.864 18:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:50.864 18:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:50.864 18:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:50.864 18:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:50.864 18:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.864 18:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.864 18:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:50.864 "name": "raid_bdev1", 00:26:50.864 "uuid": "00ebe3d8-995d-47a0-895f-a08901817da6", 00:26:50.864 "strip_size_kb": 0, 00:26:50.864 "state": "online", 00:26:50.864 "raid_level": "raid1", 00:26:50.864 "superblock": true, 00:26:50.864 "num_base_bdevs": 4, 00:26:50.864 "num_base_bdevs_discovered": 3, 00:26:50.864 "num_base_bdevs_operational": 3, 00:26:50.864 "base_bdevs_list": [ 00:26:50.864 { 00:26:50.864 "name": null, 00:26:50.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.864 "is_configured": false, 00:26:50.864 "data_offset": 2048, 00:26:50.864 "data_size": 63488 00:26:50.864 }, 00:26:50.864 { 00:26:50.864 "name": "pt2", 00:26:50.864 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:50.864 "is_configured": true, 00:26:50.864 "data_offset": 2048, 00:26:50.864 "data_size": 63488 00:26:50.864 }, 00:26:50.864 { 00:26:50.864 "name": "pt3", 00:26:50.864 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:50.864 "is_configured": true, 00:26:50.864 "data_offset": 2048, 00:26:50.864 "data_size": 63488 00:26:50.864 }, 00:26:50.864 { 00:26:50.864 "name": "pt4", 00:26:50.864 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:50.864 "is_configured": true, 00:26:50.864 "data_offset": 2048, 00:26:50.864 "data_size": 63488 00:26:50.864 } 00:26:50.864 ] 00:26:50.864 }' 00:26:50.864 18:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:50.864 18:53:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.430 18:53:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:51.430 18:53:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:26:51.688 18:53:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:26:51.688 18:53:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:51.688 18:53:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:26:51.946 [2024-07-25 18:53:52.414721] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:51.946 18:53:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 00ebe3d8-995d-47a0-895f-a08901817da6 '!=' 00ebe3d8-995d-47a0-895f-a08901817da6 ']' 00:26:51.946 18:53:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 142112 00:26:51.946 18:53:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 142112 ']' 00:26:51.946 18:53:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 142112 00:26:51.946 18:53:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:26:51.946 18:53:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:51.946 18:53:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 142112 00:26:51.946 18:53:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:51.946 18:53:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:51.946 18:53:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 142112' 00:26:51.946 killing process with pid 142112 00:26:51.946 18:53:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 142112 00:26:51.946 [2024-07-25 18:53:52.470817] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:51.946 [2024-07-25 18:53:52.470904] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:51.946 [2024-07-25 18:53:52.470996] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:51.946 [2024-07-25 18:53:52.471005] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013b80 name raid_bdev1, state offline 00:26:51.946 18:53:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 142112 00:26:52.513 [2024-07-25 18:53:52.821924] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:53.445 ************************************ 00:26:53.445 END TEST raid_superblock_test 00:26:53.445 ************************************ 00:26:53.445 18:53:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:26:53.445 00:26:53.445 real 0m24.909s 00:26:53.445 user 0m44.265s 00:26:53.445 sys 0m4.316s 00:26:53.445 18:53:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:53.445 18:53:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.703 18:53:54 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:26:53.703 18:53:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:53.703 18:53:54 bdev_raid -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:26:53.703 18:53:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:53.703 ************************************ 00:26:53.703 START TEST raid_read_error_test 00:26:53.703 ************************************ 00:26:53.703 18:53:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:26:53.703 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:26:53.703 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:26:53.703 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:26:53.703 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:26:53.703 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:26:53.703 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev1 00:26:53.703 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:26:53.703 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:26:53.703 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev2 00:26:53.703 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:26:53.703 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:26:53.703 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev3 00:26:53.703 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:26:53.703 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:26:53.703 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev4 00:26:53.703 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:26:53.703 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:26:53.703 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:53.704 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:26:53.704 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:26:53.704 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:26:53.704 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:26:53.704 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:26:53.704 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:26:53.704 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:26:53.704 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:26:53.704 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:26:53.704 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.FkRKTXMHR1 00:26:53.704 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=142949 00:26:53.704 18:53:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 142949 /var/tmp/spdk-raid.sock 00:26:53.704 18:53:54 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:53.704 18:53:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 142949 ']' 00:26:53.704 18:53:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:53.704 18:53:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:53.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:53.704 18:53:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:53.704 18:53:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:53.704 18:53:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.704 [2024-07-25 18:53:54.181424] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:53.704 [2024-07-25 18:53:54.181652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142949 ] 00:26:53.962 [2024-07-25 18:53:54.366331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.220 [2024-07-25 18:53:54.611552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.479 [2024-07-25 18:53:54.878463] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:54.738 18:53:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:54.738 18:53:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:26:54.738 18:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:26:54.738 18:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:54.996 BaseBdev1_malloc 00:26:54.996 18:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:26:55.255 true 00:26:55.255 18:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:55.255 [2024-07-25 18:53:55.779107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:55.255 [2024-07-25 18:53:55.779249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:55.255 [2024-07-25 18:53:55.779290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:26:55.255 [2024-07-25 18:53:55.779320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:55.255 [2024-07-25 18:53:55.782125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:55.255 [2024-07-25 18:53:55.782174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:55.255 BaseBdev1 00:26:55.255 18:53:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:26:55.255 18:53:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:55.513 BaseBdev2_malloc 00:26:55.513 18:53:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:26:55.772 true 00:26:55.772 18:53:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:56.030 [2024-07-25 18:53:56.385731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:56.030 [2024-07-25 18:53:56.385906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:56.030 [2024-07-25 18:53:56.385953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:56.030 [2024-07-25 18:53:56.385976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:56.030 [2024-07-25 18:53:56.388586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:56.030 [2024-07-25 18:53:56.388638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:56.030 BaseBdev2 00:26:56.030 18:53:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:26:56.030 18:53:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:56.289 BaseBdev3_malloc 00:26:56.289 18:53:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:26:56.289 true 00:26:56.289 18:53:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:56.548 [2024-07-25 18:53:56.988351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:56.548 [2024-07-25 18:53:56.988458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:56.548 [2024-07-25 18:53:56.988516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:26:56.548 [2024-07-25 18:53:56.988544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:56.548 [2024-07-25 18:53:56.991241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:56.548 [2024-07-25 18:53:56.991293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:56.548 BaseBdev3 00:26:56.548 18:53:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:26:56.548 18:53:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:56.806 BaseBdev4_malloc 00:26:56.806 18:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create 
BaseBdev4_malloc 00:26:57.064 true 00:26:57.065 18:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:57.065 [2024-07-25 18:53:57.571791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:26:57.065 [2024-07-25 18:53:57.571905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:57.065 [2024-07-25 18:53:57.571986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:26:57.065 [2024-07-25 18:53:57.572015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:57.065 [2024-07-25 18:53:57.574685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:57.065 [2024-07-25 18:53:57.574756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:57.065 BaseBdev4 00:26:57.065 18:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:26:57.323 [2024-07-25 18:53:57.759915] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:57.323 [2024-07-25 18:53:57.762302] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:57.323 [2024-07-25 18:53:57.762391] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:57.323 [2024-07-25 18:53:57.762447] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:57.323 [2024-07-25 18:53:57.762665] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013800 00:26:57.323 [2024-07-25 18:53:57.762674] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:57.323 [2024-07-25 18:53:57.762850] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:26:57.323 [2024-07-25 18:53:57.763279] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013800 00:26:57.323 [2024-07-25 18:53:57.763289] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013800 00:26:57.323 [2024-07-25 18:53:57.763470] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:57.323 18:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:57.323 18:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:57.323 18:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:57.323 18:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:57.323 18:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:57.323 18:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:57.324 18:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:57.324 18:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:57.324 18:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:57.324 
18:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:57.324 18:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:57.324 18:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:57.582 18:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:57.582 "name": "raid_bdev1", 00:26:57.582 "uuid": "4dfe1e46-4ac8-4b45-a04d-ffc117962f6e", 00:26:57.582 "strip_size_kb": 0, 00:26:57.582 "state": "online", 00:26:57.582 "raid_level": "raid1", 00:26:57.582 "superblock": true, 00:26:57.582 "num_base_bdevs": 4, 00:26:57.582 "num_base_bdevs_discovered": 4, 00:26:57.582 "num_base_bdevs_operational": 4, 00:26:57.582 "base_bdevs_list": [ 00:26:57.582 { 00:26:57.582 "name": "BaseBdev1", 00:26:57.582 "uuid": "18c16c67-2897-5296-9d69-773781f109b9", 00:26:57.582 "is_configured": true, 00:26:57.582 "data_offset": 2048, 00:26:57.582 "data_size": 63488 00:26:57.582 }, 00:26:57.582 { 00:26:57.582 "name": "BaseBdev2", 00:26:57.582 "uuid": "1dfc590b-9554-5a55-859f-c858e595289d", 00:26:57.582 "is_configured": true, 00:26:57.582 "data_offset": 2048, 00:26:57.582 "data_size": 63488 00:26:57.582 }, 00:26:57.582 { 00:26:57.582 "name": "BaseBdev3", 00:26:57.582 "uuid": "24256113-b5a3-546b-bc8c-d4156b59e50f", 00:26:57.582 "is_configured": true, 00:26:57.582 "data_offset": 2048, 00:26:57.582 "data_size": 63488 00:26:57.582 }, 00:26:57.582 { 00:26:57.582 "name": "BaseBdev4", 00:26:57.582 "uuid": "c542253a-03e0-51ba-9b6a-f6248b38ba93", 00:26:57.582 "is_configured": true, 00:26:57.582 "data_offset": 2048, 00:26:57.582 "data_size": 63488 00:26:57.582 } 00:26:57.582 ] 00:26:57.582 }' 00:26:57.582 18:53:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:57.582 18:53:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.148 18:53:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:26:58.148 18:53:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:26:58.148 [2024-07-25 18:53:58.669395] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:59.084 18:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:26:59.342 18:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:26:59.342 18:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:26:59.342 18:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:26:59.342 18:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:26:59.342 18:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:59.342 18:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:59.342 18:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:59.342 18:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:59.342 18:53:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:59.342 18:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:59.342 18:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:59.342 18:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:59.342 18:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:59.342 18:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:59.342 18:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.342 18:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.601 18:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:59.601 "name": "raid_bdev1", 00:26:59.601 "uuid": "4dfe1e46-4ac8-4b45-a04d-ffc117962f6e", 00:26:59.601 "strip_size_kb": 0, 00:26:59.601 "state": "online", 00:26:59.601 "raid_level": "raid1", 00:26:59.601 "superblock": true, 00:26:59.601 "num_base_bdevs": 4, 00:26:59.601 "num_base_bdevs_discovered": 4, 00:26:59.601 "num_base_bdevs_operational": 4, 00:26:59.601 "base_bdevs_list": [ 00:26:59.601 { 00:26:59.601 "name": "BaseBdev1", 00:26:59.601 "uuid": "18c16c67-2897-5296-9d69-773781f109b9", 00:26:59.601 "is_configured": true, 00:26:59.601 "data_offset": 2048, 00:26:59.601 "data_size": 63488 00:26:59.601 }, 00:26:59.601 { 00:26:59.601 "name": "BaseBdev2", 00:26:59.601 "uuid": "1dfc590b-9554-5a55-859f-c858e595289d", 00:26:59.601 "is_configured": true, 00:26:59.601 "data_offset": 2048, 00:26:59.601 "data_size": 63488 00:26:59.601 }, 00:26:59.601 { 00:26:59.601 "name": "BaseBdev3", 00:26:59.601 "uuid": "24256113-b5a3-546b-bc8c-d4156b59e50f", 00:26:59.601 "is_configured": true, 00:26:59.601 "data_offset": 2048, 00:26:59.601 "data_size": 63488 00:26:59.601 }, 00:26:59.601 { 00:26:59.601 "name": "BaseBdev4", 00:26:59.601 "uuid": "c542253a-03e0-51ba-9b6a-f6248b38ba93", 00:26:59.601 "is_configured": true, 00:26:59.601 "data_offset": 2048, 00:26:59.601 "data_size": 63488 00:26:59.601 } 00:26:59.601 ] 00:26:59.601 }' 00:26:59.601 18:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:59.601 18:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.167 18:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:00.430 [2024-07-25 18:54:00.907653] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:00.430 [2024-07-25 18:54:00.907706] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:00.430 [2024-07-25 18:54:00.910407] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:00.430 [2024-07-25 18:54:00.910469] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:00.430 [2024-07-25 18:54:00.910587] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:00.430 [2024-07-25 18:54:00.910597] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013800 name raid_bdev1, state offline 00:27:00.430 0 00:27:00.431 18:54:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 142949 00:27:00.431 18:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 142949 ']' 00:27:00.431 18:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 142949 00:27:00.431 18:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:27:00.431 18:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:00.431 18:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 142949 00:27:00.431 killing process with pid 142949 00:27:00.431 18:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:00.431 18:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:00.431 18:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 142949' 00:27:00.431 18:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 142949 00:27:00.431 18:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 142949 00:27:00.431 [2024-07-25 18:54:00.957104] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:00.997 [2024-07-25 18:54:01.315060] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:02.371 18:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.FkRKTXMHR1 00:27:02.371 18:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:27:02.371 18:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:27:02.371 ************************************ 00:27:02.371 END TEST raid_read_error_test 00:27:02.371 ************************************ 00:27:02.371 18:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:27:02.371 18:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:27:02.371 18:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:02.371 18:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:27:02.371 18:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:27:02.371 00:27:02.371 real 0m8.763s 00:27:02.371 user 0m12.636s 00:27:02.371 sys 0m1.405s 00:27:02.371 18:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:02.371 18:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.371 18:54:02 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:27:02.371 18:54:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:27:02.371 18:54:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:02.371 18:54:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:02.371 ************************************ 00:27:02.371 START TEST raid_write_error_test 00:27:02.371 ************************************ 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local 
num_base_bdevs=4 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev1 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev2 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev3 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # echo BaseBdev4 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.Uhlyz7pQHm 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=143169 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 143169 /var/tmp/spdk-raid.sock 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 143169 ']' 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:02.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:02.371 18:54:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.629 [2024-07-25 18:54:03.016073] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:02.629 [2024-07-25 18:54:03.016302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143169 ] 00:27:02.629 [2024-07-25 18:54:03.202783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.887 [2024-07-25 18:54:03.435974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.145 [2024-07-25 18:54:03.699086] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:03.402 18:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:03.402 18:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:27:03.402 18:54:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:27:03.402 18:54:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:03.660 BaseBdev1_malloc 00:27:03.660 18:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:27:03.919 true 00:27:03.919 18:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:03.919 [2024-07-25 18:54:04.439329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:03.919 [2024-07-25 18:54:04.439455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:03.919 [2024-07-25 18:54:04.439514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:27:03.919 [2024-07-25 18:54:04.439538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:03.919 [2024-07-25 18:54:04.442282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:03.919 [2024-07-25 18:54:04.442336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:03.919 BaseBdev1 00:27:03.919 18:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:27:03.919 18:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:04.177 BaseBdev2_malloc 00:27:04.177 18:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 
00:27:04.435 true 00:27:04.435 18:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:04.692 [2024-07-25 18:54:05.038772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:04.692 [2024-07-25 18:54:05.038914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:04.692 [2024-07-25 18:54:05.038959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:04.692 [2024-07-25 18:54:05.038986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:04.692 [2024-07-25 18:54:05.041648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:04.692 [2024-07-25 18:54:05.041697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:04.692 BaseBdev2 00:27:04.692 18:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:27:04.692 18:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:04.950 BaseBdev3_malloc 00:27:04.950 18:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:27:05.208 true 00:27:05.208 18:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:05.208 [2024-07-25 18:54:05.711812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:05.208 [2024-07-25 18:54:05.711921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:05.208 [2024-07-25 18:54:05.711987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:27:05.208 [2024-07-25 18:54:05.712015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:05.208 [2024-07-25 18:54:05.715256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:05.208 [2024-07-25 18:54:05.715317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:05.208 BaseBdev3 00:27:05.208 18:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:27:05.208 18:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:05.465 BaseBdev4_malloc 00:27:05.465 18:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:27:05.723 true 00:27:05.723 18:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:05.981 [2024-07-25 18:54:06.302929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:05.981 [2024-07-25 18:54:06.303034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:05.981 
[2024-07-25 18:54:06.303114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:27:05.981 [2024-07-25 18:54:06.303144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:05.981 [2024-07-25 18:54:06.305757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:05.981 [2024-07-25 18:54:06.305836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:05.981 BaseBdev4 00:27:05.981 18:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:27:05.981 [2024-07-25 18:54:06.479008] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:05.981 [2024-07-25 18:54:06.481332] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:05.981 [2024-07-25 18:54:06.481421] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:05.981 [2024-07-25 18:54:06.481475] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:05.981 [2024-07-25 18:54:06.481716] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013800 00:27:05.981 [2024-07-25 18:54:06.481726] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:05.981 [2024-07-25 18:54:06.481872] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:05.981 [2024-07-25 18:54:06.482287] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013800 00:27:05.981 [2024-07-25 18:54:06.482308] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013800 00:27:05.981 [2024-07-25 18:54:06.482489] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:05.981 18:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:05.981 18:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:05.982 18:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:05.982 18:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:05.982 18:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:05.982 18:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:05.982 18:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:05.982 18:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:05.982 18:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:05.982 18:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:05.982 18:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:05.982 18:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:06.240 18:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:27:06.240 "name": "raid_bdev1", 00:27:06.240 "uuid": "61746864-ca73-4234-9d36-f6b6549d8fcd", 00:27:06.240 "strip_size_kb": 0, 00:27:06.240 "state": "online", 00:27:06.240 "raid_level": "raid1", 00:27:06.240 "superblock": true, 00:27:06.240 "num_base_bdevs": 4, 00:27:06.240 "num_base_bdevs_discovered": 4, 00:27:06.240 "num_base_bdevs_operational": 4, 00:27:06.240 "base_bdevs_list": [ 00:27:06.240 { 00:27:06.240 "name": "BaseBdev1", 00:27:06.240 "uuid": "74a74cd8-83ec-5380-820d-7cd782e76afc", 00:27:06.240 "is_configured": true, 00:27:06.240 "data_offset": 2048, 00:27:06.240 "data_size": 63488 00:27:06.240 }, 00:27:06.240 { 00:27:06.240 "name": "BaseBdev2", 00:27:06.240 "uuid": "ea3bba5c-927f-5bbd-be96-b2d2ca8eb62f", 00:27:06.240 "is_configured": true, 00:27:06.240 "data_offset": 2048, 00:27:06.240 "data_size": 63488 00:27:06.240 }, 00:27:06.240 { 00:27:06.240 "name": "BaseBdev3", 00:27:06.240 "uuid": "628462ff-4e66-5ee4-b01a-9cd83b8b7030", 00:27:06.240 "is_configured": true, 00:27:06.240 "data_offset": 2048, 00:27:06.240 "data_size": 63488 00:27:06.240 }, 00:27:06.240 { 00:27:06.240 "name": "BaseBdev4", 00:27:06.240 "uuid": "b5ffac0c-a6dc-5a6a-a575-ab4d3cc4e036", 00:27:06.240 "is_configured": true, 00:27:06.240 "data_offset": 2048, 00:27:06.240 "data_size": 63488 00:27:06.240 } 00:27:06.240 ] 00:27:06.240 }' 00:27:06.240 18:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:06.240 18:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:06.806 18:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:27:06.806 18:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:06.806 [2024-07-25 18:54:07.348721] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:07.740 18:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:27:07.999 [2024-07-25 18:54:08.500646] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:27:07.999 [2024-07-25 18:54:08.500781] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:07.999 [2024-07-25 18:54:08.501044] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:27:07.999 18:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:27:07.999 18:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:27:07.999 18:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:27:07.999 18:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=3 00:27:07.999 18:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:07.999 18:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:07.999 18:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:07.999 18:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:07.999 18:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=0 00:27:07.999 18:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:07.999 18:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:07.999 18:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:07.999 18:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:07.999 18:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:07.999 18:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.999 18:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:08.257 18:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:08.257 "name": "raid_bdev1", 00:27:08.257 "uuid": "61746864-ca73-4234-9d36-f6b6549d8fcd", 00:27:08.257 "strip_size_kb": 0, 00:27:08.257 "state": "online", 00:27:08.257 "raid_level": "raid1", 00:27:08.257 "superblock": true, 00:27:08.257 "num_base_bdevs": 4, 00:27:08.257 "num_base_bdevs_discovered": 3, 00:27:08.257 "num_base_bdevs_operational": 3, 00:27:08.257 "base_bdevs_list": [ 00:27:08.257 { 00:27:08.257 "name": null, 00:27:08.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:08.257 "is_configured": false, 00:27:08.257 "data_offset": 2048, 00:27:08.257 "data_size": 63488 00:27:08.257 }, 00:27:08.257 { 00:27:08.257 "name": "BaseBdev2", 00:27:08.257 "uuid": "ea3bba5c-927f-5bbd-be96-b2d2ca8eb62f", 00:27:08.257 "is_configured": true, 00:27:08.257 "data_offset": 2048, 00:27:08.257 "data_size": 63488 00:27:08.257 }, 00:27:08.257 { 00:27:08.257 "name": "BaseBdev3", 00:27:08.257 "uuid": "628462ff-4e66-5ee4-b01a-9cd83b8b7030", 00:27:08.257 "is_configured": true, 00:27:08.257 "data_offset": 2048, 00:27:08.257 "data_size": 63488 00:27:08.257 }, 00:27:08.257 { 00:27:08.257 "name": "BaseBdev4", 00:27:08.257 "uuid": "b5ffac0c-a6dc-5a6a-a575-ab4d3cc4e036", 00:27:08.257 "is_configured": true, 00:27:08.257 "data_offset": 2048, 00:27:08.257 "data_size": 63488 00:27:08.257 } 00:27:08.257 ] 00:27:08.257 }' 00:27:08.257 18:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:08.257 18:54:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:08.823 18:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:09.082 [2024-07-25 18:54:09.621556] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:09.082 [2024-07-25 18:54:09.621611] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:09.082 [2024-07-25 18:54:09.624143] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:09.082 [2024-07-25 18:54:09.624209] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:09.082 [2024-07-25 18:54:09.624306] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:09.082 [2024-07-25 18:54:09.624316] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013800 name raid_bdev1, state offline 00:27:09.082 0 00:27:09.082 18:54:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- 
# killprocess 143169 00:27:09.082 18:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 143169 ']' 00:27:09.082 18:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 143169 00:27:09.082 18:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:27:09.082 18:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:09.082 18:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 143169 00:27:09.343 killing process with pid 143169 00:27:09.343 18:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:09.343 18:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:09.343 18:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 143169' 00:27:09.343 18:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 143169 00:27:09.343 18:54:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 143169 00:27:09.343 [2024-07-25 18:54:09.676249] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:09.621 [2024-07-25 18:54:10.028628] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:11.016 18:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.Uhlyz7pQHm 00:27:11.016 18:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:27:11.016 18:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:27:11.016 18:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:27:11.016 18:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:27:11.016 ************************************ 00:27:11.016 END TEST raid_write_error_test 00:27:11.017 ************************************ 00:27:11.017 18:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:11.017 18:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:27:11.017 18:54:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:27:11.017 00:27:11.017 real 0m8.644s 00:27:11.017 user 0m12.447s 00:27:11.017 sys 0m1.357s 00:27:11.017 18:54:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:11.017 18:54:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.276 18:54:11 bdev_raid -- bdev/bdev_raid.sh@955 -- # '[' true = true ']' 00:27:11.276 18:54:11 bdev_raid -- bdev/bdev_raid.sh@956 -- # for n in 2 4 00:27:11.276 18:54:11 bdev_raid -- bdev/bdev_raid.sh@957 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:27:11.276 18:54:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:27:11.276 18:54:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:11.276 18:54:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:11.276 ************************************ 00:27:11.276 START TEST raid_rebuild_test 00:27:11.276 ************************************ 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@584 -- 
# local raid_level=raid1 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # echo BaseBdev1 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # echo BaseBdev2 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=143367 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 143367 /var/tmp/spdk-raid.sock 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 143367 ']' 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:11.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:11.276 18:54:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.276 [2024-07-25 18:54:11.725413] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:11.276 [2024-07-25 18:54:11.726370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143367 ] 00:27:11.276 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:11.276 Zero copy mechanism will not be used. 00:27:11.535 [2024-07-25 18:54:11.916413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.793 [2024-07-25 18:54:12.164080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.052 [2024-07-25 18:54:12.431870] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:12.310 18:54:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:12.310 18:54:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:27:12.310 18:54:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:12.310 18:54:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:12.310 BaseBdev1_malloc 00:27:12.310 18:54:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:12.569 [2024-07-25 18:54:13.044110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:12.569 [2024-07-25 18:54:13.044237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:12.569 [2024-07-25 18:54:13.044290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:27:12.569 [2024-07-25 18:54:13.044319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:12.569 [2024-07-25 18:54:13.047110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:12.569 [2024-07-25 18:54:13.047166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:12.569 BaseBdev1 00:27:12.569 18:54:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:12.569 18:54:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:12.828 BaseBdev2_malloc 00:27:12.828 18:54:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:13.085 [2024-07-25 18:54:13.516576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:13.085 [2024-07-25 18:54:13.516721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:13.085 [2024-07-25 18:54:13.516762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:27:13.085 [2024-07-25 18:54:13.516785] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:13.085 [2024-07-25 18:54:13.519435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:13.085 [2024-07-25 18:54:13.519483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:13.085 BaseBdev2 00:27:13.085 18:54:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:13.343 spare_malloc 00:27:13.343 18:54:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:13.343 spare_delay 00:27:13.343 18:54:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:13.600 [2024-07-25 18:54:14.072147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:13.600 [2024-07-25 18:54:14.072278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:13.600 [2024-07-25 18:54:14.072322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:13.600 [2024-07-25 18:54:14.072350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:13.600 [2024-07-25 18:54:14.074957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:13.600 [2024-07-25 18:54:14.075011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:13.600 spare 00:27:13.600 18:54:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:13.859 [2024-07-25 18:54:14.244276] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:13.859 [2024-07-25 18:54:14.246623] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:13.859 [2024-07-25 18:54:14.246732] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:27:13.859 [2024-07-25 18:54:14.246742] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:13.859 [2024-07-25 18:54:14.246892] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:27:13.859 [2024-07-25 18:54:14.247269] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:27:13.859 [2024-07-25 18:54:14.247286] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:27:13.859 [2024-07-25 18:54:14.247455] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:13.859 18:54:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:13.859 18:54:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:13.859 18:54:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:13.859 18:54:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:13.859 18:54:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
00:27:13.859 18:54:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:13.859 18:54:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:13.859 18:54:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:13.859 18:54:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:13.859 18:54:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:13.859 18:54:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.859 18:54:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:14.117 18:54:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:14.117 "name": "raid_bdev1", 00:27:14.117 "uuid": "4ddb4c13-143d-4aa3-a21d-f16105525e1b", 00:27:14.117 "strip_size_kb": 0, 00:27:14.117 "state": "online", 00:27:14.117 "raid_level": "raid1", 00:27:14.117 "superblock": false, 00:27:14.117 "num_base_bdevs": 2, 00:27:14.117 "num_base_bdevs_discovered": 2, 00:27:14.117 "num_base_bdevs_operational": 2, 00:27:14.117 "base_bdevs_list": [ 00:27:14.117 { 00:27:14.117 "name": "BaseBdev1", 00:27:14.117 "uuid": "9528d336-bc2f-5adf-8aca-ed07b84850f1", 00:27:14.117 "is_configured": true, 00:27:14.117 "data_offset": 0, 00:27:14.117 "data_size": 65536 00:27:14.117 }, 00:27:14.117 { 00:27:14.117 "name": "BaseBdev2", 00:27:14.117 "uuid": "639eb18c-cb93-5085-a6fc-98376be50615", 00:27:14.117 "is_configured": true, 00:27:14.117 "data_offset": 0, 00:27:14.117 "data_size": 65536 00:27:14.117 } 00:27:14.117 ] 00:27:14.117 }' 00:27:14.117 18:54:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:14.117 18:54:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.684 18:54:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:14.684 18:54:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:27:14.943 [2024-07-25 18:54:15.268767] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:14.943 18:54:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:27:14.943 18:54:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:14.943 18:54:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:15.202 
18:54:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:15.202 [2024-07-25 18:54:15.708783] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:15.202 /dev/nbd0 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:15.202 1+0 records in 00:27:15.202 1+0 records out 00:27:15.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263975 s, 15.5 MB/s 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:27:15.202 18:54:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:27:19.390 65536+0 records in 00:27:19.390 65536+0 records out 00:27:19.390 33554432 bytes (34 MB, 32 MiB) copied, 4.14674 s, 8.1 MB/s 00:27:19.390 18:54:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@651 -- # 
nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:19.391 18:54:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:19.391 18:54:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:19.391 18:54:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:19.391 18:54:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:27:19.391 18:54:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:19.391 18:54:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:19.649 [2024-07-25 18:54:20.085203] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:19.649 18:54:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:19.649 18:54:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:19.649 18:54:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:19.649 18:54:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:19.649 18:54:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:19.649 18:54:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:19.649 18:54:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:19.649 18:54:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:19.649 18:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:19.907 [2024-07-25 18:54:20.264942] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:19.907 18:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:19.907 18:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:19.907 18:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:19.907 18:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:19.907 18:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:19.907 18:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:19.907 18:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:19.907 18:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:19.907 18:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:19.907 18:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:19.907 18:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:19.907 18:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.166 18:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:20.166 "name": "raid_bdev1", 00:27:20.166 "uuid": "4ddb4c13-143d-4aa3-a21d-f16105525e1b", 00:27:20.166 "strip_size_kb": 0, 00:27:20.166 "state": "online", 
00:27:20.166 "raid_level": "raid1", 00:27:20.166 "superblock": false, 00:27:20.166 "num_base_bdevs": 2, 00:27:20.166 "num_base_bdevs_discovered": 1, 00:27:20.166 "num_base_bdevs_operational": 1, 00:27:20.166 "base_bdevs_list": [ 00:27:20.166 { 00:27:20.166 "name": null, 00:27:20.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:20.167 "is_configured": false, 00:27:20.167 "data_offset": 0, 00:27:20.167 "data_size": 65536 00:27:20.167 }, 00:27:20.167 { 00:27:20.167 "name": "BaseBdev2", 00:27:20.167 "uuid": "639eb18c-cb93-5085-a6fc-98376be50615", 00:27:20.167 "is_configured": true, 00:27:20.167 "data_offset": 0, 00:27:20.167 "data_size": 65536 00:27:20.167 } 00:27:20.167 ] 00:27:20.167 }' 00:27:20.167 18:54:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:20.167 18:54:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.734 18:54:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:20.734 [2024-07-25 18:54:21.261382] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:20.734 [2024-07-25 18:54:21.280361] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09960 00:27:20.734 [2024-07-25 18:54:21.282622] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:20.734 18:54:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:22.111 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:22.111 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:22.111 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:22.111 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:22.111 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:22.111 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.111 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.111 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:22.111 "name": "raid_bdev1", 00:27:22.111 "uuid": "4ddb4c13-143d-4aa3-a21d-f16105525e1b", 00:27:22.111 "strip_size_kb": 0, 00:27:22.111 "state": "online", 00:27:22.111 "raid_level": "raid1", 00:27:22.111 "superblock": false, 00:27:22.111 "num_base_bdevs": 2, 00:27:22.111 "num_base_bdevs_discovered": 2, 00:27:22.111 "num_base_bdevs_operational": 2, 00:27:22.111 "process": { 00:27:22.111 "type": "rebuild", 00:27:22.111 "target": "spare", 00:27:22.111 "progress": { 00:27:22.111 "blocks": 24576, 00:27:22.111 "percent": 37 00:27:22.111 } 00:27:22.111 }, 00:27:22.111 "base_bdevs_list": [ 00:27:22.111 { 00:27:22.111 "name": "spare", 00:27:22.111 "uuid": "8abaaf3c-4ea5-5806-8888-356b8c860323", 00:27:22.111 "is_configured": true, 00:27:22.111 "data_offset": 0, 00:27:22.111 "data_size": 65536 00:27:22.111 }, 00:27:22.111 { 00:27:22.111 "name": "BaseBdev2", 00:27:22.111 "uuid": "639eb18c-cb93-5085-a6fc-98376be50615", 00:27:22.111 "is_configured": true, 00:27:22.111 "data_offset": 0, 00:27:22.111 "data_size": 65536 00:27:22.111 } 
00:27:22.111 ] 00:27:22.111 }' 00:27:22.111 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:22.111 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:22.111 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:22.111 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:22.111 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:22.370 [2024-07-25 18:54:22.800626] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:22.370 [2024-07-25 18:54:22.893853] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:22.370 [2024-07-25 18:54:22.893927] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:22.370 [2024-07-25 18:54:22.893959] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:22.370 [2024-07-25 18:54:22.893967] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:22.370 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:22.370 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:22.370 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:22.370 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:22.370 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:22.370 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:22.370 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:22.370 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:22.370 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:22.370 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:22.370 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.631 18:54:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.631 18:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:22.631 "name": "raid_bdev1", 00:27:22.631 "uuid": "4ddb4c13-143d-4aa3-a21d-f16105525e1b", 00:27:22.631 "strip_size_kb": 0, 00:27:22.631 "state": "online", 00:27:22.631 "raid_level": "raid1", 00:27:22.631 "superblock": false, 00:27:22.631 "num_base_bdevs": 2, 00:27:22.631 "num_base_bdevs_discovered": 1, 00:27:22.631 "num_base_bdevs_operational": 1, 00:27:22.631 "base_bdevs_list": [ 00:27:22.631 { 00:27:22.631 "name": null, 00:27:22.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:22.631 "is_configured": false, 00:27:22.631 "data_offset": 0, 00:27:22.631 "data_size": 65536 00:27:22.631 }, 00:27:22.631 { 00:27:22.631 "name": "BaseBdev2", 00:27:22.631 "uuid": "639eb18c-cb93-5085-a6fc-98376be50615", 00:27:22.631 "is_configured": true, 00:27:22.631 "data_offset": 0, 
00:27:22.631 "data_size": 65536 00:27:22.631 } 00:27:22.631 ] 00:27:22.631 }' 00:27:22.632 18:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:22.632 18:54:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:23.198 18:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:23.198 18:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:23.198 18:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:23.198 18:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:23.198 18:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:23.198 18:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.198 18:54:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:23.762 18:54:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:23.762 "name": "raid_bdev1", 00:27:23.762 "uuid": "4ddb4c13-143d-4aa3-a21d-f16105525e1b", 00:27:23.762 "strip_size_kb": 0, 00:27:23.762 "state": "online", 00:27:23.762 "raid_level": "raid1", 00:27:23.762 "superblock": false, 00:27:23.762 "num_base_bdevs": 2, 00:27:23.762 "num_base_bdevs_discovered": 1, 00:27:23.762 "num_base_bdevs_operational": 1, 00:27:23.762 "base_bdevs_list": [ 00:27:23.762 { 00:27:23.762 "name": null, 00:27:23.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:23.762 "is_configured": false, 00:27:23.762 "data_offset": 0, 00:27:23.762 "data_size": 65536 00:27:23.762 }, 00:27:23.762 { 00:27:23.762 "name": "BaseBdev2", 00:27:23.762 "uuid": "639eb18c-cb93-5085-a6fc-98376be50615", 00:27:23.762 "is_configured": true, 00:27:23.762 "data_offset": 0, 00:27:23.762 "data_size": 65536 00:27:23.762 } 00:27:23.762 ] 00:27:23.762 }' 00:27:23.762 18:54:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:23.762 18:54:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:23.762 18:54:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:23.762 18:54:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:23.762 18:54:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:23.762 [2024-07-25 18:54:24.316149] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:23.762 [2024-07-25 18:54:24.335189] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09b00 00:27:23.762 [2024-07-25 18:54:24.337516] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:24.021 18:54:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:27:24.957 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:24.957 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:24.957 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:24.957 18:54:25 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:24.957 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:24.957 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:24.957 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.215 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:25.215 "name": "raid_bdev1", 00:27:25.215 "uuid": "4ddb4c13-143d-4aa3-a21d-f16105525e1b", 00:27:25.215 "strip_size_kb": 0, 00:27:25.215 "state": "online", 00:27:25.215 "raid_level": "raid1", 00:27:25.215 "superblock": false, 00:27:25.216 "num_base_bdevs": 2, 00:27:25.216 "num_base_bdevs_discovered": 2, 00:27:25.216 "num_base_bdevs_operational": 2, 00:27:25.216 "process": { 00:27:25.216 "type": "rebuild", 00:27:25.216 "target": "spare", 00:27:25.216 "progress": { 00:27:25.216 "blocks": 24576, 00:27:25.216 "percent": 37 00:27:25.216 } 00:27:25.216 }, 00:27:25.216 "base_bdevs_list": [ 00:27:25.216 { 00:27:25.216 "name": "spare", 00:27:25.216 "uuid": "8abaaf3c-4ea5-5806-8888-356b8c860323", 00:27:25.216 "is_configured": true, 00:27:25.216 "data_offset": 0, 00:27:25.216 "data_size": 65536 00:27:25.216 }, 00:27:25.216 { 00:27:25.216 "name": "BaseBdev2", 00:27:25.216 "uuid": "639eb18c-cb93-5085-a6fc-98376be50615", 00:27:25.216 "is_configured": true, 00:27:25.216 "data_offset": 0, 00:27:25.216 "data_size": 65536 00:27:25.216 } 00:27:25.216 ] 00:27:25.216 }' 00:27:25.216 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:25.216 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:25.216 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:25.216 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:25.216 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:27:25.216 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:27:25.216 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:27:25.216 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:27:25.216 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=802 00:27:25.216 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:25.216 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:25.216 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:25.216 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:25.216 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:25.216 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:25.216 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.216 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
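The repeated bdev_raid_get_bdevs/jq entries above feed the raid_bdev_info snapshots that follow: the test polls rebuild progress over the raid RPC socket once per second. A minimal sketch of that polling pattern, built only from the rpc.py path, socket, and jq filters that appear verbatim in this trace:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Fetch all raid bdevs, keep only raid_bdev1, then read the rebuild fields out of it.
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    echo "$info" | jq -r '.process.type // "none"'       # "rebuild" while the process is running
    echo "$info" | jq -r '.process.target // "none"'     # "spare" is the bdev being rebuilt onto
    echo "$info" | jq -r '.process.progress.percent'     # e.g. 37, 50, 90 in the snapshots below

The polling loop stops once the .process object disappears from the JSON, i.e. when the type/target queries fall back to "none", which is the break seen further down in this trace.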
00:27:25.475 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:25.475 "name": "raid_bdev1", 00:27:25.475 "uuid": "4ddb4c13-143d-4aa3-a21d-f16105525e1b", 00:27:25.475 "strip_size_kb": 0, 00:27:25.475 "state": "online", 00:27:25.475 "raid_level": "raid1", 00:27:25.475 "superblock": false, 00:27:25.475 "num_base_bdevs": 2, 00:27:25.475 "num_base_bdevs_discovered": 2, 00:27:25.475 "num_base_bdevs_operational": 2, 00:27:25.476 "process": { 00:27:25.476 "type": "rebuild", 00:27:25.476 "target": "spare", 00:27:25.476 "progress": { 00:27:25.476 "blocks": 32768, 00:27:25.476 "percent": 50 00:27:25.476 } 00:27:25.476 }, 00:27:25.476 "base_bdevs_list": [ 00:27:25.476 { 00:27:25.476 "name": "spare", 00:27:25.476 "uuid": "8abaaf3c-4ea5-5806-8888-356b8c860323", 00:27:25.476 "is_configured": true, 00:27:25.476 "data_offset": 0, 00:27:25.476 "data_size": 65536 00:27:25.476 }, 00:27:25.476 { 00:27:25.476 "name": "BaseBdev2", 00:27:25.476 "uuid": "639eb18c-cb93-5085-a6fc-98376be50615", 00:27:25.476 "is_configured": true, 00:27:25.476 "data_offset": 0, 00:27:25.476 "data_size": 65536 00:27:25.476 } 00:27:25.476 ] 00:27:25.476 }' 00:27:25.476 18:54:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:25.476 18:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:25.476 18:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:25.733 18:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:25.733 18:54:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:26.667 18:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:26.667 18:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:26.667 18:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:26.667 18:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:26.667 18:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:26.667 18:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:26.667 18:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.667 18:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.925 18:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:26.925 "name": "raid_bdev1", 00:27:26.925 "uuid": "4ddb4c13-143d-4aa3-a21d-f16105525e1b", 00:27:26.925 "strip_size_kb": 0, 00:27:26.925 "state": "online", 00:27:26.925 "raid_level": "raid1", 00:27:26.925 "superblock": false, 00:27:26.925 "num_base_bdevs": 2, 00:27:26.925 "num_base_bdevs_discovered": 2, 00:27:26.925 "num_base_bdevs_operational": 2, 00:27:26.925 "process": { 00:27:26.925 "type": "rebuild", 00:27:26.925 "target": "spare", 00:27:26.925 "progress": { 00:27:26.925 "blocks": 59392, 00:27:26.925 "percent": 90 00:27:26.925 } 00:27:26.925 }, 00:27:26.925 "base_bdevs_list": [ 00:27:26.925 { 00:27:26.925 "name": "spare", 00:27:26.925 "uuid": "8abaaf3c-4ea5-5806-8888-356b8c860323", 00:27:26.925 "is_configured": true, 00:27:26.925 "data_offset": 0, 00:27:26.925 "data_size": 65536 
00:27:26.925 }, 00:27:26.925 { 00:27:26.925 "name": "BaseBdev2", 00:27:26.926 "uuid": "639eb18c-cb93-5085-a6fc-98376be50615", 00:27:26.926 "is_configured": true, 00:27:26.926 "data_offset": 0, 00:27:26.926 "data_size": 65536 00:27:26.926 } 00:27:26.926 ] 00:27:26.926 }' 00:27:26.926 18:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:26.926 18:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:26.926 18:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:26.926 18:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:26.926 18:54:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:27.183 [2024-07-25 18:54:27.560654] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:27.184 [2024-07-25 18:54:27.560723] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:27.184 [2024-07-25 18:54:27.560812] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:28.117 18:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:28.117 18:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:28.117 18:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:28.117 18:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:28.117 18:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:28.117 18:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:28.117 18:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:28.117 18:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.117 18:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:28.117 "name": "raid_bdev1", 00:27:28.117 "uuid": "4ddb4c13-143d-4aa3-a21d-f16105525e1b", 00:27:28.117 "strip_size_kb": 0, 00:27:28.117 "state": "online", 00:27:28.117 "raid_level": "raid1", 00:27:28.117 "superblock": false, 00:27:28.117 "num_base_bdevs": 2, 00:27:28.117 "num_base_bdevs_discovered": 2, 00:27:28.117 "num_base_bdevs_operational": 2, 00:27:28.117 "base_bdevs_list": [ 00:27:28.117 { 00:27:28.117 "name": "spare", 00:27:28.117 "uuid": "8abaaf3c-4ea5-5806-8888-356b8c860323", 00:27:28.117 "is_configured": true, 00:27:28.117 "data_offset": 0, 00:27:28.117 "data_size": 65536 00:27:28.117 }, 00:27:28.117 { 00:27:28.117 "name": "BaseBdev2", 00:27:28.117 "uuid": "639eb18c-cb93-5085-a6fc-98376be50615", 00:27:28.117 "is_configured": true, 00:27:28.117 "data_offset": 0, 00:27:28.117 "data_size": 65536 00:27:28.117 } 00:27:28.117 ] 00:27:28.117 }' 00:27:28.117 18:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:28.375 18:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:28.375 18:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:28.375 18:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:28.375 
18:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:27:28.375 18:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:28.375 18:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:28.375 18:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:28.375 18:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:28.375 18:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:28.375 18:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:28.375 18:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.634 18:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:28.634 "name": "raid_bdev1", 00:27:28.634 "uuid": "4ddb4c13-143d-4aa3-a21d-f16105525e1b", 00:27:28.634 "strip_size_kb": 0, 00:27:28.634 "state": "online", 00:27:28.634 "raid_level": "raid1", 00:27:28.634 "superblock": false, 00:27:28.634 "num_base_bdevs": 2, 00:27:28.634 "num_base_bdevs_discovered": 2, 00:27:28.634 "num_base_bdevs_operational": 2, 00:27:28.634 "base_bdevs_list": [ 00:27:28.634 { 00:27:28.634 "name": "spare", 00:27:28.634 "uuid": "8abaaf3c-4ea5-5806-8888-356b8c860323", 00:27:28.634 "is_configured": true, 00:27:28.634 "data_offset": 0, 00:27:28.634 "data_size": 65536 00:27:28.634 }, 00:27:28.634 { 00:27:28.634 "name": "BaseBdev2", 00:27:28.634 "uuid": "639eb18c-cb93-5085-a6fc-98376be50615", 00:27:28.634 "is_configured": true, 00:27:28.634 "data_offset": 0, 00:27:28.634 "data_size": 65536 00:27:28.634 } 00:27:28.634 ] 00:27:28.634 }' 00:27:28.634 18:54:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:28.634 18:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:28.634 18:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:28.634 18:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:28.634 18:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:28.634 18:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:28.634 18:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:28.634 18:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:28.634 18:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:28.634 18:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:28.634 18:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:28.634 18:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:28.634 18:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:28.634 18:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:28.634 18:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:27:28.634 18:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.892 18:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:28.892 "name": "raid_bdev1", 00:27:28.892 "uuid": "4ddb4c13-143d-4aa3-a21d-f16105525e1b", 00:27:28.892 "strip_size_kb": 0, 00:27:28.892 "state": "online", 00:27:28.892 "raid_level": "raid1", 00:27:28.892 "superblock": false, 00:27:28.892 "num_base_bdevs": 2, 00:27:28.892 "num_base_bdevs_discovered": 2, 00:27:28.892 "num_base_bdevs_operational": 2, 00:27:28.892 "base_bdevs_list": [ 00:27:28.892 { 00:27:28.892 "name": "spare", 00:27:28.892 "uuid": "8abaaf3c-4ea5-5806-8888-356b8c860323", 00:27:28.892 "is_configured": true, 00:27:28.892 "data_offset": 0, 00:27:28.892 "data_size": 65536 00:27:28.892 }, 00:27:28.892 { 00:27:28.892 "name": "BaseBdev2", 00:27:28.892 "uuid": "639eb18c-cb93-5085-a6fc-98376be50615", 00:27:28.892 "is_configured": true, 00:27:28.892 "data_offset": 0, 00:27:28.892 "data_size": 65536 00:27:28.892 } 00:27:28.892 ] 00:27:28.892 }' 00:27:28.892 18:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:28.892 18:54:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.457 18:54:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:29.715 [2024-07-25 18:54:30.112486] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:29.715 [2024-07-25 18:54:30.112524] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:29.715 [2024-07-25 18:54:30.112612] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:29.715 [2024-07-25 18:54:30.112686] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:29.715 [2024-07-25 18:54:30.112695] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:27:29.715 18:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.715 18:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:27:29.974 18:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:27:29.974 18:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:27:29.974 18:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:27:29.974 18:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:29.974 18:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:29.974 18:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:29.974 18:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:29.974 18:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:29.974 18:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:29.974 18:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:27:29.974 18:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 
-- # (( i = 0 )) 00:27:29.974 18:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:29.974 18:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:30.250 /dev/nbd0 00:27:30.250 18:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:30.250 18:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:30.250 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:30.250 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:27:30.250 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:30.250 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:30.250 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:30.250 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:27:30.250 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:30.250 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:30.250 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:30.250 1+0 records in 00:27:30.250 1+0 records out 00:27:30.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000877332 s, 4.7 MB/s 00:27:30.250 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:30.250 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:27:30.250 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:30.250 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:30.250 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:27:30.250 18:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:30.250 18:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:30.250 18:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:30.520 /dev/nbd1 00:27:30.520 18:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:30.520 18:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:30.520 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:27:30.520 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:27:30.520 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:30.520 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:30.520 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:27:30.520 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:27:30.520 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 
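The nbd_start_disk/waitfornbd entries around here show how each exported device is declared ready: the helper polls /proc/partitions for the nbd name, then reads one 4 KiB block with dd and checks that a non-zero size was copied. A condensed sketch reconstructed from this trace (the retry/sleep behaviour of the real helper in common/autotest_common.sh is collapsed into a single pass here and is an assumption):

    waitfornbd_sketch() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break   # device node has shown up
        done
        # Confirm the device actually serves reads: copy one block, then check its size.
        dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
        [ "$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)" != 0 ] || return 1
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    }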
00:27:30.520 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:30.520 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:30.520 1+0 records in 00:27:30.520 1+0 records out 00:27:30.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688952 s, 5.9 MB/s 00:27:30.520 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:30.520 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:27:30.520 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:30.520 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:30.520 18:54:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:27:30.520 18:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:30.520 18:54:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:30.521 18:54:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:30.521 18:54:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:30.521 18:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:30.521 18:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:30.521 18:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:30.521 18:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:27:30.521 18:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:30.521 18:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:31.086 18:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:31.086 18:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:31.086 18:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:31.086 18:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:31.086 18:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:31.086 18:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:31.086 18:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:31.086 18:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:31.086 18:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:31.086 18:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:31.086 18:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:31.086 18:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:31.086 18:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:31.086 18:54:31 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:31.086 18:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:31.086 18:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:31.343 18:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:31.343 18:54:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:31.343 18:54:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:27:31.343 18:54:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 143367 00:27:31.343 18:54:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 143367 ']' 00:27:31.343 18:54:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 143367 00:27:31.343 18:54:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:27:31.343 18:54:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:31.343 18:54:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 143367 00:27:31.343 killing process with pid 143367 00:27:31.343 Received shutdown signal, test time was about 60.000000 seconds 00:27:31.343 00:27:31.343 Latency(us) 00:27:31.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:31.343 =================================================================================================================== 00:27:31.343 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:31.344 18:54:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:31.344 18:54:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:31.344 18:54:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 143367' 00:27:31.344 18:54:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 143367 00:27:31.344 18:54:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 143367 00:27:31.344 [2024-07-25 18:54:31.691151] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:31.601 [2024-07-25 18:54:32.011564] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:32.973 18:54:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:27:32.973 00:27:32.973 real 0m21.830s 00:27:32.973 user 0m29.182s 00:27:32.973 sys 0m4.342s 00:27:32.973 18:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:32.973 18:54:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:32.973 ************************************ 00:27:32.973 END TEST raid_rebuild_test 00:27:32.973 ************************************ 00:27:32.973 18:54:33 bdev_raid -- bdev/bdev_raid.sh@958 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:27:32.973 18:54:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:27:32.973 18:54:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:32.973 18:54:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:32.973 ************************************ 00:27:32.973 START TEST raid_rebuild_test_sb 00:27:32.973 ************************************ 00:27:32.973 18:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 
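The raid_rebuild_test invocation above starts the superblock variant of the same flow; its positional arguments map onto the locals assigned in the next few entries (data_offset later reads 2048 instead of 0 because the base bdevs now carry an on-disk superblock):

    raid_rebuild_test raid1 2 true false true
    #                 |     |  |    |     '-- verify=true        (cmp the two NBD exports afterwards)
    #                 |     |  |    '------- background_io=false
    #                 |     |  '------------ superblock=true     (the _sb in the test name)
    #                 |     '--------------- num_base_bdevs=2
    #                 '--------------------- raid_level=raid1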
00:27:32.973 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:27:32.973 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:27:32.973 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:27:32.973 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:27:32.973 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # echo BaseBdev1 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # echo BaseBdev2 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=143907 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 143907 /var/tmp/spdk-raid.sock 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 143907 ']' 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:33.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
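waitforlisten blocks at this point until the bdevperf process launched just above (full command line in the preceding entries) starts answering RPCs on /var/tmp/spdk-raid.sock. A rough equivalent of this launch-and-wait step, with the probe written out as a plain RPC retry loop (an assumption; the real waitforlisten helper in common/autotest_common.sh does more bookkeeping):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # Poll the UNIX-domain RPC socket until it answers, then start configuring bdevs over it.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done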
00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:33.231 18:54:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:33.232 [2024-07-25 18:54:33.648029] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:33.232 [2024-07-25 18:54:33.648495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143907 ] 00:27:33.232 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:33.232 Zero copy mechanism will not be used. 00:27:33.490 [2024-07-25 18:54:33.834980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.747 [2024-07-25 18:54:34.073320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.004 [2024-07-25 18:54:34.348630] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:34.004 18:54:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:34.004 18:54:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:27:34.004 18:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:34.004 18:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:34.261 BaseBdev1_malloc 00:27:34.261 18:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:34.518 [2024-07-25 18:54:34.942915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:34.518 [2024-07-25 18:54:34.943192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:34.518 [2024-07-25 18:54:34.943276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:27:34.518 [2024-07-25 18:54:34.943379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:34.518 [2024-07-25 18:54:34.946099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:34.518 [2024-07-25 18:54:34.946272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:34.518 BaseBdev1 00:27:34.518 18:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:34.518 18:54:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:34.776 BaseBdev2_malloc 00:27:34.776 18:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:35.034 [2024-07-25 18:54:35.430695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:35.034 [2024-07-25 18:54:35.431035] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:35.034 [2024-07-25 18:54:35.431111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:27:35.034 [2024-07-25 18:54:35.431218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:35.034 [2024-07-25 18:54:35.433881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:35.034 [2024-07-25 18:54:35.434057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:35.034 BaseBdev2 00:27:35.034 18:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:35.292 spare_malloc 00:27:35.292 18:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:35.550 spare_delay 00:27:35.550 18:54:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:35.550 [2024-07-25 18:54:36.074417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:35.550 [2024-07-25 18:54:36.074725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:35.550 [2024-07-25 18:54:36.074817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:35.550 [2024-07-25 18:54:36.074918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:35.550 [2024-07-25 18:54:36.077677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:35.550 [2024-07-25 18:54:36.077867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:35.550 spare 00:27:35.550 18:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:35.808 [2024-07-25 18:54:36.330597] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:35.808 [2024-07-25 18:54:36.332782] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:35.808 [2024-07-25 18:54:36.333063] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:27:35.808 [2024-07-25 18:54:36.333167] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:35.808 [2024-07-25 18:54:36.333327] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:27:35.808 [2024-07-25 18:54:36.333785] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:27:35.808 [2024-07-25 18:54:36.333892] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:27:35.808 [2024-07-25 18:54:36.334132] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:35.808 18:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:35.808 18:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:35.808 18:54:36 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:35.808 18:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:35.808 18:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:35.808 18:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:35.808 18:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:35.808 18:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:35.808 18:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:35.808 18:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:35.808 18:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.808 18:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:36.066 18:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:36.066 "name": "raid_bdev1", 00:27:36.066 "uuid": "ef9815f4-970f-48c8-908c-7b4d734867b0", 00:27:36.066 "strip_size_kb": 0, 00:27:36.066 "state": "online", 00:27:36.066 "raid_level": "raid1", 00:27:36.066 "superblock": true, 00:27:36.066 "num_base_bdevs": 2, 00:27:36.066 "num_base_bdevs_discovered": 2, 00:27:36.066 "num_base_bdevs_operational": 2, 00:27:36.066 "base_bdevs_list": [ 00:27:36.066 { 00:27:36.066 "name": "BaseBdev1", 00:27:36.066 "uuid": "d88a2fe5-434e-537b-810e-2b10802a6772", 00:27:36.066 "is_configured": true, 00:27:36.066 "data_offset": 2048, 00:27:36.066 "data_size": 63488 00:27:36.066 }, 00:27:36.066 { 00:27:36.066 "name": "BaseBdev2", 00:27:36.066 "uuid": "5ae1a83d-d701-5bbd-918c-1598ab2ac4b7", 00:27:36.066 "is_configured": true, 00:27:36.066 "data_offset": 2048, 00:27:36.066 "data_size": 63488 00:27:36.066 } 00:27:36.066 ] 00:27:36.066 }' 00:27:36.066 18:54:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:36.066 18:54:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:36.632 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:36.632 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:27:36.890 [2024-07-25 18:54:37.347012] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:36.890 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:27:36.890 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:36.890 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:37.149 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:27:37.149 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:27:37.149 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:27:37.149 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:27:37.149 18:54:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:37.149 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:37.149 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:37.149 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:37.149 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:37.149 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:37.149 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:27:37.149 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:37.149 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:37.149 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:37.408 [2024-07-25 18:54:37.838918] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:37.408 /dev/nbd0 00:27:37.408 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:37.408 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:37.408 18:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:37.408 18:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:27:37.408 18:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:37.408 18:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:37.408 18:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:37.408 18:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:27:37.408 18:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:37.408 18:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:37.408 18:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:37.408 1+0 records in 00:27:37.408 1+0 records out 00:27:37.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000648139 s, 6.3 MB/s 00:27:37.408 18:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:37.408 18:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:27:37.408 18:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:37.408 18:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:37.408 18:54:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:27:37.408 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:37.408 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:37.408 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:27:37.408 
18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:27:37.408 18:54:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:27:42.676 63488+0 records in 00:27:42.676 63488+0 records out 00:27:42.676 32505856 bytes (33 MB, 31 MiB) copied, 4.71124 s, 6.9 MB/s 00:27:42.676 18:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:42.676 18:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:42.676 18:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:42.676 18:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:42.676 18:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:27:42.676 18:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:42.676 18:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:42.676 18:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:42.676 [2024-07-25 18:54:42.827719] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:42.676 18:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:42.676 18:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:42.676 18:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:42.676 18:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:42.676 18:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:42.676 18:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:42.676 18:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:42.676 18:54:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:42.676 [2024-07-25 18:54:43.011447] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:42.676 18:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:42.676 18:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:42.676 18:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:42.676 18:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:42.676 18:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:42.676 18:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:42.676 18:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:42.676 18:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:42.676 18:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:42.676 18:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:42.676 18:54:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.676 18:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.935 18:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:42.935 "name": "raid_bdev1", 00:27:42.935 "uuid": "ef9815f4-970f-48c8-908c-7b4d734867b0", 00:27:42.935 "strip_size_kb": 0, 00:27:42.935 "state": "online", 00:27:42.935 "raid_level": "raid1", 00:27:42.935 "superblock": true, 00:27:42.935 "num_base_bdevs": 2, 00:27:42.935 "num_base_bdevs_discovered": 1, 00:27:42.935 "num_base_bdevs_operational": 1, 00:27:42.935 "base_bdevs_list": [ 00:27:42.935 { 00:27:42.935 "name": null, 00:27:42.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.935 "is_configured": false, 00:27:42.935 "data_offset": 2048, 00:27:42.935 "data_size": 63488 00:27:42.935 }, 00:27:42.935 { 00:27:42.935 "name": "BaseBdev2", 00:27:42.935 "uuid": "5ae1a83d-d701-5bbd-918c-1598ab2ac4b7", 00:27:42.935 "is_configured": true, 00:27:42.935 "data_offset": 2048, 00:27:42.935 "data_size": 63488 00:27:42.935 } 00:27:42.935 ] 00:27:42.935 }' 00:27:42.935 18:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:42.935 18:54:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:43.500 18:54:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:43.757 [2024-07-25 18:54:44.095661] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:43.757 [2024-07-25 18:54:44.114041] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca30f0 00:27:43.757 [2024-07-25 18:54:44.116457] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:43.757 18:54:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:44.691 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:44.691 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:44.691 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:44.691 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:44.691 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:44.691 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.691 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:44.949 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:44.949 "name": "raid_bdev1", 00:27:44.949 "uuid": "ef9815f4-970f-48c8-908c-7b4d734867b0", 00:27:44.949 "strip_size_kb": 0, 00:27:44.949 "state": "online", 00:27:44.949 "raid_level": "raid1", 00:27:44.949 "superblock": true, 00:27:44.949 "num_base_bdevs": 2, 00:27:44.949 "num_base_bdevs_discovered": 2, 00:27:44.949 "num_base_bdevs_operational": 2, 00:27:44.949 "process": { 00:27:44.949 "type": "rebuild", 00:27:44.949 "target": "spare", 00:27:44.949 
"progress": { 00:27:44.949 "blocks": 24576, 00:27:44.949 "percent": 38 00:27:44.949 } 00:27:44.949 }, 00:27:44.949 "base_bdevs_list": [ 00:27:44.949 { 00:27:44.949 "name": "spare", 00:27:44.949 "uuid": "63cc054f-d64b-5fbd-97c5-162ecda68825", 00:27:44.949 "is_configured": true, 00:27:44.949 "data_offset": 2048, 00:27:44.949 "data_size": 63488 00:27:44.949 }, 00:27:44.949 { 00:27:44.949 "name": "BaseBdev2", 00:27:44.949 "uuid": "5ae1a83d-d701-5bbd-918c-1598ab2ac4b7", 00:27:44.949 "is_configured": true, 00:27:44.949 "data_offset": 2048, 00:27:44.949 "data_size": 63488 00:27:44.949 } 00:27:44.949 ] 00:27:44.949 }' 00:27:44.949 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:44.949 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:44.949 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:44.949 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:44.949 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:45.207 [2024-07-25 18:54:45.710065] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:45.207 [2024-07-25 18:54:45.727993] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:45.207 [2024-07-25 18:54:45.728195] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:45.207 [2024-07-25 18:54:45.728244] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:45.207 [2024-07-25 18:54:45.728323] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:45.207 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:45.207 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:45.207 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:45.207 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:45.207 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:45.207 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:45.207 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:45.207 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:45.207 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:45.207 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:45.207 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.207 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.466 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:45.466 "name": "raid_bdev1", 00:27:45.466 "uuid": "ef9815f4-970f-48c8-908c-7b4d734867b0", 00:27:45.466 "strip_size_kb": 0, 00:27:45.466 "state": 
"online", 00:27:45.466 "raid_level": "raid1", 00:27:45.466 "superblock": true, 00:27:45.466 "num_base_bdevs": 2, 00:27:45.466 "num_base_bdevs_discovered": 1, 00:27:45.466 "num_base_bdevs_operational": 1, 00:27:45.466 "base_bdevs_list": [ 00:27:45.466 { 00:27:45.466 "name": null, 00:27:45.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.466 "is_configured": false, 00:27:45.466 "data_offset": 2048, 00:27:45.466 "data_size": 63488 00:27:45.466 }, 00:27:45.466 { 00:27:45.466 "name": "BaseBdev2", 00:27:45.466 "uuid": "5ae1a83d-d701-5bbd-918c-1598ab2ac4b7", 00:27:45.466 "is_configured": true, 00:27:45.466 "data_offset": 2048, 00:27:45.466 "data_size": 63488 00:27:45.466 } 00:27:45.466 ] 00:27:45.466 }' 00:27:45.466 18:54:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:45.466 18:54:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:46.032 18:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:46.032 18:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:46.032 18:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:46.032 18:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:46.032 18:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:46.032 18:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:46.032 18:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:46.291 18:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:46.291 "name": "raid_bdev1", 00:27:46.291 "uuid": "ef9815f4-970f-48c8-908c-7b4d734867b0", 00:27:46.291 "strip_size_kb": 0, 00:27:46.291 "state": "online", 00:27:46.291 "raid_level": "raid1", 00:27:46.291 "superblock": true, 00:27:46.291 "num_base_bdevs": 2, 00:27:46.291 "num_base_bdevs_discovered": 1, 00:27:46.291 "num_base_bdevs_operational": 1, 00:27:46.291 "base_bdevs_list": [ 00:27:46.291 { 00:27:46.291 "name": null, 00:27:46.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:46.291 "is_configured": false, 00:27:46.291 "data_offset": 2048, 00:27:46.291 "data_size": 63488 00:27:46.291 }, 00:27:46.291 { 00:27:46.291 "name": "BaseBdev2", 00:27:46.291 "uuid": "5ae1a83d-d701-5bbd-918c-1598ab2ac4b7", 00:27:46.291 "is_configured": true, 00:27:46.291 "data_offset": 2048, 00:27:46.291 "data_size": 63488 00:27:46.291 } 00:27:46.291 ] 00:27:46.291 }' 00:27:46.291 18:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:46.291 18:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:46.291 18:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:46.549 18:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:46.549 18:54:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:46.549 [2024-07-25 18:54:47.072225] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:46.549 [2024-07-25 18:54:47.091808] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3290 00:27:46.549 [2024-07-25 18:54:47.094240] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:46.549 18:54:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:27:47.924 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:47.924 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:47.924 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:47.924 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:47.924 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:47.924 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.924 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:47.925 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:47.925 "name": "raid_bdev1", 00:27:47.925 "uuid": "ef9815f4-970f-48c8-908c-7b4d734867b0", 00:27:47.925 "strip_size_kb": 0, 00:27:47.925 "state": "online", 00:27:47.925 "raid_level": "raid1", 00:27:47.925 "superblock": true, 00:27:47.925 "num_base_bdevs": 2, 00:27:47.925 "num_base_bdevs_discovered": 2, 00:27:47.925 "num_base_bdevs_operational": 2, 00:27:47.925 "process": { 00:27:47.925 "type": "rebuild", 00:27:47.925 "target": "spare", 00:27:47.925 "progress": { 00:27:47.925 "blocks": 24576, 00:27:47.925 "percent": 38 00:27:47.925 } 00:27:47.925 }, 00:27:47.925 "base_bdevs_list": [ 00:27:47.925 { 00:27:47.925 "name": "spare", 00:27:47.925 "uuid": "63cc054f-d64b-5fbd-97c5-162ecda68825", 00:27:47.925 "is_configured": true, 00:27:47.925 "data_offset": 2048, 00:27:47.925 "data_size": 63488 00:27:47.925 }, 00:27:47.925 { 00:27:47.925 "name": "BaseBdev2", 00:27:47.925 "uuid": "5ae1a83d-d701-5bbd-918c-1598ab2ac4b7", 00:27:47.925 "is_configured": true, 00:27:47.925 "data_offset": 2048, 00:27:47.925 "data_size": 63488 00:27:47.925 } 00:27:47.925 ] 00:27:47.925 }' 00:27:47.925 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:47.925 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:47.925 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:47.925 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:47.925 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:27:47.925 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:27:47.925 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:27:47.925 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:27:47.925 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:27:47.925 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:27:47.925 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=825 
00:27:47.925 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:47.925 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:47.925 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:47.925 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:47.925 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:47.925 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:47.925 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.925 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:48.183 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:48.183 "name": "raid_bdev1", 00:27:48.183 "uuid": "ef9815f4-970f-48c8-908c-7b4d734867b0", 00:27:48.183 "strip_size_kb": 0, 00:27:48.183 "state": "online", 00:27:48.183 "raid_level": "raid1", 00:27:48.183 "superblock": true, 00:27:48.183 "num_base_bdevs": 2, 00:27:48.183 "num_base_bdevs_discovered": 2, 00:27:48.183 "num_base_bdevs_operational": 2, 00:27:48.183 "process": { 00:27:48.183 "type": "rebuild", 00:27:48.183 "target": "spare", 00:27:48.183 "progress": { 00:27:48.183 "blocks": 30720, 00:27:48.183 "percent": 48 00:27:48.183 } 00:27:48.183 }, 00:27:48.183 "base_bdevs_list": [ 00:27:48.183 { 00:27:48.183 "name": "spare", 00:27:48.183 "uuid": "63cc054f-d64b-5fbd-97c5-162ecda68825", 00:27:48.183 "is_configured": true, 00:27:48.183 "data_offset": 2048, 00:27:48.183 "data_size": 63488 00:27:48.183 }, 00:27:48.183 { 00:27:48.183 "name": "BaseBdev2", 00:27:48.183 "uuid": "5ae1a83d-d701-5bbd-918c-1598ab2ac4b7", 00:27:48.183 "is_configured": true, 00:27:48.183 "data_offset": 2048, 00:27:48.183 "data_size": 63488 00:27:48.183 } 00:27:48.183 ] 00:27:48.183 }' 00:27:48.183 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:48.183 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:48.183 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:48.183 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:48.183 18:54:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:49.557 18:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:49.557 18:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:49.557 18:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:49.557 18:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:49.557 18:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:49.557 18:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:49.557 18:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:27:49.557 18:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:49.557 18:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:49.557 "name": "raid_bdev1", 00:27:49.557 "uuid": "ef9815f4-970f-48c8-908c-7b4d734867b0", 00:27:49.557 "strip_size_kb": 0, 00:27:49.557 "state": "online", 00:27:49.557 "raid_level": "raid1", 00:27:49.557 "superblock": true, 00:27:49.557 "num_base_bdevs": 2, 00:27:49.557 "num_base_bdevs_discovered": 2, 00:27:49.557 "num_base_bdevs_operational": 2, 00:27:49.557 "process": { 00:27:49.557 "type": "rebuild", 00:27:49.557 "target": "spare", 00:27:49.557 "progress": { 00:27:49.557 "blocks": 57344, 00:27:49.557 "percent": 90 00:27:49.557 } 00:27:49.557 }, 00:27:49.557 "base_bdevs_list": [ 00:27:49.557 { 00:27:49.557 "name": "spare", 00:27:49.557 "uuid": "63cc054f-d64b-5fbd-97c5-162ecda68825", 00:27:49.557 "is_configured": true, 00:27:49.557 "data_offset": 2048, 00:27:49.557 "data_size": 63488 00:27:49.557 }, 00:27:49.557 { 00:27:49.557 "name": "BaseBdev2", 00:27:49.557 "uuid": "5ae1a83d-d701-5bbd-918c-1598ab2ac4b7", 00:27:49.557 "is_configured": true, 00:27:49.558 "data_offset": 2048, 00:27:49.558 "data_size": 63488 00:27:49.558 } 00:27:49.558 ] 00:27:49.558 }' 00:27:49.558 18:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:49.558 18:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:49.558 18:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:49.558 18:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:49.558 18:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:49.814 [2024-07-25 18:54:50.216809] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:49.814 [2024-07-25 18:54:50.217000] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:49.814 [2024-07-25 18:54:50.217298] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:50.806 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:50.806 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:50.806 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:50.806 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:50.806 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:50.806 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:50.806 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:50.806 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.806 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:50.806 "name": "raid_bdev1", 00:27:50.806 "uuid": "ef9815f4-970f-48c8-908c-7b4d734867b0", 00:27:50.806 "strip_size_kb": 0, 00:27:50.806 "state": "online", 00:27:50.806 "raid_level": "raid1", 00:27:50.806 "superblock": true, 00:27:50.806 
"num_base_bdevs": 2, 00:27:50.806 "num_base_bdevs_discovered": 2, 00:27:50.806 "num_base_bdevs_operational": 2, 00:27:50.806 "base_bdevs_list": [ 00:27:50.806 { 00:27:50.806 "name": "spare", 00:27:50.806 "uuid": "63cc054f-d64b-5fbd-97c5-162ecda68825", 00:27:50.806 "is_configured": true, 00:27:50.806 "data_offset": 2048, 00:27:50.806 "data_size": 63488 00:27:50.806 }, 00:27:50.806 { 00:27:50.806 "name": "BaseBdev2", 00:27:50.806 "uuid": "5ae1a83d-d701-5bbd-918c-1598ab2ac4b7", 00:27:50.806 "is_configured": true, 00:27:50.806 "data_offset": 2048, 00:27:50.806 "data_size": 63488 00:27:50.806 } 00:27:50.806 ] 00:27:50.806 }' 00:27:50.806 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:50.806 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:50.806 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:51.065 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:51.065 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:27:51.065 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:51.065 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:51.065 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:51.065 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:51.065 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:51.065 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:51.065 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:51.323 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:51.323 "name": "raid_bdev1", 00:27:51.323 "uuid": "ef9815f4-970f-48c8-908c-7b4d734867b0", 00:27:51.323 "strip_size_kb": 0, 00:27:51.323 "state": "online", 00:27:51.323 "raid_level": "raid1", 00:27:51.323 "superblock": true, 00:27:51.323 "num_base_bdevs": 2, 00:27:51.323 "num_base_bdevs_discovered": 2, 00:27:51.323 "num_base_bdevs_operational": 2, 00:27:51.323 "base_bdevs_list": [ 00:27:51.323 { 00:27:51.323 "name": "spare", 00:27:51.323 "uuid": "63cc054f-d64b-5fbd-97c5-162ecda68825", 00:27:51.323 "is_configured": true, 00:27:51.323 "data_offset": 2048, 00:27:51.323 "data_size": 63488 00:27:51.323 }, 00:27:51.323 { 00:27:51.323 "name": "BaseBdev2", 00:27:51.323 "uuid": "5ae1a83d-d701-5bbd-918c-1598ab2ac4b7", 00:27:51.323 "is_configured": true, 00:27:51.323 "data_offset": 2048, 00:27:51.323 "data_size": 63488 00:27:51.323 } 00:27:51.323 ] 00:27:51.323 }' 00:27:51.323 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:51.323 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:51.323 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:51.323 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:51.323 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:27:51.323 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:51.323 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:51.323 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:51.323 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:51.323 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:51.323 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:51.323 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:51.323 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:51.323 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:51.323 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:51.323 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:51.581 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:51.581 "name": "raid_bdev1", 00:27:51.581 "uuid": "ef9815f4-970f-48c8-908c-7b4d734867b0", 00:27:51.581 "strip_size_kb": 0, 00:27:51.581 "state": "online", 00:27:51.581 "raid_level": "raid1", 00:27:51.581 "superblock": true, 00:27:51.581 "num_base_bdevs": 2, 00:27:51.581 "num_base_bdevs_discovered": 2, 00:27:51.581 "num_base_bdevs_operational": 2, 00:27:51.581 "base_bdevs_list": [ 00:27:51.581 { 00:27:51.581 "name": "spare", 00:27:51.581 "uuid": "63cc054f-d64b-5fbd-97c5-162ecda68825", 00:27:51.581 "is_configured": true, 00:27:51.581 "data_offset": 2048, 00:27:51.581 "data_size": 63488 00:27:51.581 }, 00:27:51.581 { 00:27:51.581 "name": "BaseBdev2", 00:27:51.581 "uuid": "5ae1a83d-d701-5bbd-918c-1598ab2ac4b7", 00:27:51.581 "is_configured": true, 00:27:51.581 "data_offset": 2048, 00:27:51.581 "data_size": 63488 00:27:51.581 } 00:27:51.581 ] 00:27:51.581 }' 00:27:51.581 18:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:51.581 18:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:52.147 18:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:52.404 [2024-07-25 18:54:52.784281] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:52.405 [2024-07-25 18:54:52.784545] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:52.405 [2024-07-25 18:54:52.784790] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:52.405 [2024-07-25 18:54:52.784956] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:52.405 [2024-07-25 18:54:52.785070] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:27:52.405 18:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:27:52.405 18:54:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:52.662 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:27:52.662 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:27:52.662 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:27:52.662 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:52.662 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:52.662 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:52.662 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:52.662 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:52.662 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:52.662 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:27:52.662 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:52.662 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:52.662 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:52.920 /dev/nbd0 00:27:52.920 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:52.920 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:52.920 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:52.920 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:27:52.920 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:52.920 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:52.920 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:52.920 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:27:52.920 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:52.920 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:52.920 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:52.920 1+0 records in 00:27:52.920 1+0 records out 00:27:52.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039354 s, 10.4 MB/s 00:27:52.920 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:52.920 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:27:52.920 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:52.920 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:52.920 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:27:52.920 18:54:53 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:52.920 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:52.920 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:53.178 /dev/nbd1 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:53.178 1+0 records in 00:27:53.178 1+0 records out 00:27:53.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391929 s, 10.5 MB/s 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:53.178 18:54:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:53.743 18:54:54 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:53.743 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:53.743 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:53.743 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:53.743 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:53.743 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:53.743 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:53.743 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:53.743 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:53.743 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:53.743 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:53.743 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:53.743 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:53.743 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:53.743 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:53.743 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:54.000 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:54.000 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:54.000 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:27:54.000 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:54.000 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:54.258 [2024-07-25 18:54:54.724486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:54.258 [2024-07-25 18:54:54.724785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:54.258 [2024-07-25 18:54:54.724883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:27:54.258 [2024-07-25 18:54:54.724990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:54.258 [2024-07-25 18:54:54.727835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:54.258 [2024-07-25 18:54:54.728008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:54.258 [2024-07-25 18:54:54.728277] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:54.258 [2024-07-25 18:54:54.728438] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:54.258 [2024-07-25 18:54:54.728715] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:54.258 spare 00:27:54.258 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:54.258 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:54.258 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:54.258 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:54.258 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:54.258 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:54.258 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:54.258 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:54.258 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:54.258 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:54.258 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.258 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:54.258 [2024-07-25 18:54:54.828958] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012d80 00:27:54.258 [2024-07-25 18:54:54.829194] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:54.258 [2024-07-25 18:54:54.829473] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:27:54.258 [2024-07-25 18:54:54.830046] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012d80 00:27:54.258 [2024-07-25 18:54:54.830150] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012d80 00:27:54.258 [2024-07-25 18:54:54.830405] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:54.516 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:54.516 "name": "raid_bdev1", 00:27:54.516 "uuid": "ef9815f4-970f-48c8-908c-7b4d734867b0", 00:27:54.516 "strip_size_kb": 0, 00:27:54.516 "state": "online", 00:27:54.516 "raid_level": "raid1", 00:27:54.516 "superblock": true, 00:27:54.516 "num_base_bdevs": 2, 00:27:54.516 "num_base_bdevs_discovered": 2, 00:27:54.516 "num_base_bdevs_operational": 2, 00:27:54.516 "base_bdevs_list": [ 00:27:54.516 { 00:27:54.516 "name": "spare", 00:27:54.516 "uuid": "63cc054f-d64b-5fbd-97c5-162ecda68825", 00:27:54.516 "is_configured": true, 00:27:54.516 "data_offset": 2048, 00:27:54.516 "data_size": 63488 00:27:54.516 }, 00:27:54.516 { 00:27:54.516 "name": "BaseBdev2", 00:27:54.516 "uuid": "5ae1a83d-d701-5bbd-918c-1598ab2ac4b7", 00:27:54.516 "is_configured": true, 00:27:54.516 "data_offset": 2048, 00:27:54.516 "data_size": 63488 00:27:54.516 } 00:27:54.516 ] 00:27:54.516 }' 00:27:54.516 18:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:54.516 18:54:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:55.084 18:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:55.084 18:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:55.084 18:54:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:55.084 18:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:55.084 18:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:55.084 18:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:55.084 18:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.342 18:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:55.342 "name": "raid_bdev1", 00:27:55.342 "uuid": "ef9815f4-970f-48c8-908c-7b4d734867b0", 00:27:55.342 "strip_size_kb": 0, 00:27:55.342 "state": "online", 00:27:55.342 "raid_level": "raid1", 00:27:55.342 "superblock": true, 00:27:55.342 "num_base_bdevs": 2, 00:27:55.342 "num_base_bdevs_discovered": 2, 00:27:55.342 "num_base_bdevs_operational": 2, 00:27:55.342 "base_bdevs_list": [ 00:27:55.342 { 00:27:55.342 "name": "spare", 00:27:55.342 "uuid": "63cc054f-d64b-5fbd-97c5-162ecda68825", 00:27:55.342 "is_configured": true, 00:27:55.342 "data_offset": 2048, 00:27:55.343 "data_size": 63488 00:27:55.343 }, 00:27:55.343 { 00:27:55.343 "name": "BaseBdev2", 00:27:55.343 "uuid": "5ae1a83d-d701-5bbd-918c-1598ab2ac4b7", 00:27:55.343 "is_configured": true, 00:27:55.343 "data_offset": 2048, 00:27:55.343 "data_size": 63488 00:27:55.343 } 00:27:55.343 ] 00:27:55.343 }' 00:27:55.343 18:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:55.343 18:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:55.343 18:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:55.343 18:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:55.343 18:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.343 18:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:55.601 18:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:27:55.601 18:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:55.860 [2024-07-25 18:54:56.229152] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:55.860 18:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:55.860 18:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:55.860 18:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:55.860 18:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:55.860 18:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:55.860 18:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:55.860 18:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:55.860 18:54:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:55.860 18:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:55.860 18:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:55.860 18:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.860 18:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:55.860 18:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:55.860 "name": "raid_bdev1", 00:27:55.860 "uuid": "ef9815f4-970f-48c8-908c-7b4d734867b0", 00:27:55.860 "strip_size_kb": 0, 00:27:55.860 "state": "online", 00:27:55.860 "raid_level": "raid1", 00:27:55.860 "superblock": true, 00:27:55.860 "num_base_bdevs": 2, 00:27:55.860 "num_base_bdevs_discovered": 1, 00:27:55.860 "num_base_bdevs_operational": 1, 00:27:55.860 "base_bdevs_list": [ 00:27:55.860 { 00:27:55.860 "name": null, 00:27:55.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:55.860 "is_configured": false, 00:27:55.860 "data_offset": 2048, 00:27:55.860 "data_size": 63488 00:27:55.860 }, 00:27:55.860 { 00:27:55.860 "name": "BaseBdev2", 00:27:55.860 "uuid": "5ae1a83d-d701-5bbd-918c-1598ab2ac4b7", 00:27:55.860 "is_configured": true, 00:27:55.860 "data_offset": 2048, 00:27:55.860 "data_size": 63488 00:27:55.860 } 00:27:55.860 ] 00:27:55.860 }' 00:27:55.860 18:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:55.860 18:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:56.428 18:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:56.687 [2024-07-25 18:54:57.121333] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:56.687 [2024-07-25 18:54:57.121734] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:56.687 [2024-07-25 18:54:57.121873] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:27:56.687 [2024-07-25 18:54:57.121967] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:56.687 [2024-07-25 18:54:57.141425] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:27:56.687 [2024-07-25 18:54:57.143851] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:56.687 18:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:27:57.625 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:57.625 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:57.625 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:57.625 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:57.625 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:57.625 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.625 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.884 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:57.884 "name": "raid_bdev1", 00:27:57.884 "uuid": "ef9815f4-970f-48c8-908c-7b4d734867b0", 00:27:57.884 "strip_size_kb": 0, 00:27:57.884 "state": "online", 00:27:57.884 "raid_level": "raid1", 00:27:57.884 "superblock": true, 00:27:57.884 "num_base_bdevs": 2, 00:27:57.884 "num_base_bdevs_discovered": 2, 00:27:57.884 "num_base_bdevs_operational": 2, 00:27:57.884 "process": { 00:27:57.884 "type": "rebuild", 00:27:57.884 "target": "spare", 00:27:57.884 "progress": { 00:27:57.884 "blocks": 24576, 00:27:57.884 "percent": 38 00:27:57.884 } 00:27:57.884 }, 00:27:57.884 "base_bdevs_list": [ 00:27:57.884 { 00:27:57.884 "name": "spare", 00:27:57.884 "uuid": "63cc054f-d64b-5fbd-97c5-162ecda68825", 00:27:57.884 "is_configured": true, 00:27:57.884 "data_offset": 2048, 00:27:57.884 "data_size": 63488 00:27:57.884 }, 00:27:57.884 { 00:27:57.884 "name": "BaseBdev2", 00:27:57.884 "uuid": "5ae1a83d-d701-5bbd-918c-1598ab2ac4b7", 00:27:57.884 "is_configured": true, 00:27:57.884 "data_offset": 2048, 00:27:57.884 "data_size": 63488 00:27:57.884 } 00:27:57.884 ] 00:27:57.884 }' 00:27:57.884 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:57.884 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:57.884 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:57.884 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:57.884 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:58.143 [2024-07-25 18:54:58.617468] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:58.143 [2024-07-25 18:54:58.655661] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:58.143 [2024-07-25 18:54:58.655887] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:58.143 
[2024-07-25 18:54:58.655936] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:58.143 [2024-07-25 18:54:58.656011] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:58.143 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:58.143 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:58.143 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:58.143 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:58.143 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:58.143 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:58.143 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:58.143 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:58.143 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:58.143 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:58.143 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:58.143 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.402 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:58.402 "name": "raid_bdev1", 00:27:58.402 "uuid": "ef9815f4-970f-48c8-908c-7b4d734867b0", 00:27:58.402 "strip_size_kb": 0, 00:27:58.402 "state": "online", 00:27:58.402 "raid_level": "raid1", 00:27:58.402 "superblock": true, 00:27:58.403 "num_base_bdevs": 2, 00:27:58.403 "num_base_bdevs_discovered": 1, 00:27:58.403 "num_base_bdevs_operational": 1, 00:27:58.403 "base_bdevs_list": [ 00:27:58.403 { 00:27:58.403 "name": null, 00:27:58.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.403 "is_configured": false, 00:27:58.403 "data_offset": 2048, 00:27:58.403 "data_size": 63488 00:27:58.403 }, 00:27:58.403 { 00:27:58.403 "name": "BaseBdev2", 00:27:58.403 "uuid": "5ae1a83d-d701-5bbd-918c-1598ab2ac4b7", 00:27:58.403 "is_configured": true, 00:27:58.403 "data_offset": 2048, 00:27:58.403 "data_size": 63488 00:27:58.403 } 00:27:58.403 ] 00:27:58.403 }' 00:27:58.403 18:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:58.403 18:54:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:58.970 18:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:59.229 [2024-07-25 18:54:59.632083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:59.229 [2024-07-25 18:54:59.632340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:59.229 [2024-07-25 18:54:59.632410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:27:59.229 [2024-07-25 18:54:59.632514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:59.229 [2024-07-25 18:54:59.633206] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:59.229 [2024-07-25 18:54:59.633362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:59.229 [2024-07-25 18:54:59.633604] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:59.229 [2024-07-25 18:54:59.633704] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:59.229 [2024-07-25 18:54:59.633812] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:59.229 [2024-07-25 18:54:59.633945] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:59.229 [2024-07-25 18:54:59.653418] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2090 00:27:59.229 spare 00:27:59.229 [2024-07-25 18:54:59.655812] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:59.229 18:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:28:00.166 18:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:00.166 18:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:00.166 18:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:00.166 18:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:00.166 18:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:00.166 18:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.166 18:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:00.425 18:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:00.425 "name": "raid_bdev1", 00:28:00.425 "uuid": "ef9815f4-970f-48c8-908c-7b4d734867b0", 00:28:00.425 "strip_size_kb": 0, 00:28:00.425 "state": "online", 00:28:00.425 "raid_level": "raid1", 00:28:00.425 "superblock": true, 00:28:00.425 "num_base_bdevs": 2, 00:28:00.425 "num_base_bdevs_discovered": 2, 00:28:00.425 "num_base_bdevs_operational": 2, 00:28:00.425 "process": { 00:28:00.425 "type": "rebuild", 00:28:00.425 "target": "spare", 00:28:00.425 "progress": { 00:28:00.425 "blocks": 24576, 00:28:00.425 "percent": 38 00:28:00.425 } 00:28:00.425 }, 00:28:00.425 "base_bdevs_list": [ 00:28:00.425 { 00:28:00.425 "name": "spare", 00:28:00.425 "uuid": "63cc054f-d64b-5fbd-97c5-162ecda68825", 00:28:00.425 "is_configured": true, 00:28:00.425 "data_offset": 2048, 00:28:00.425 "data_size": 63488 00:28:00.425 }, 00:28:00.425 { 00:28:00.425 "name": "BaseBdev2", 00:28:00.425 "uuid": "5ae1a83d-d701-5bbd-918c-1598ab2ac4b7", 00:28:00.425 "is_configured": true, 00:28:00.425 "data_offset": 2048, 00:28:00.425 "data_size": 63488 00:28:00.425 } 00:28:00.425 ] 00:28:00.425 }' 00:28:00.425 18:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:00.684 18:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:00.684 18:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:00.684 
18:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:00.684 18:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:00.943 [2024-07-25 18:55:01.284950] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:00.943 [2024-07-25 18:55:01.368434] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:00.943 [2024-07-25 18:55:01.368638] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:00.943 [2024-07-25 18:55:01.368703] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:00.943 [2024-07-25 18:55:01.368779] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:00.943 18:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:00.943 18:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:00.943 18:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:00.943 18:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:00.943 18:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:00.943 18:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:00.943 18:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:00.943 18:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:00.943 18:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:00.943 18:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:00.943 18:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.943 18:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:01.206 18:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:01.207 "name": "raid_bdev1", 00:28:01.207 "uuid": "ef9815f4-970f-48c8-908c-7b4d734867b0", 00:28:01.207 "strip_size_kb": 0, 00:28:01.207 "state": "online", 00:28:01.207 "raid_level": "raid1", 00:28:01.207 "superblock": true, 00:28:01.207 "num_base_bdevs": 2, 00:28:01.207 "num_base_bdevs_discovered": 1, 00:28:01.207 "num_base_bdevs_operational": 1, 00:28:01.207 "base_bdevs_list": [ 00:28:01.207 { 00:28:01.207 "name": null, 00:28:01.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:01.207 "is_configured": false, 00:28:01.207 "data_offset": 2048, 00:28:01.207 "data_size": 63488 00:28:01.207 }, 00:28:01.207 { 00:28:01.207 "name": "BaseBdev2", 00:28:01.207 "uuid": "5ae1a83d-d701-5bbd-918c-1598ab2ac4b7", 00:28:01.207 "is_configured": true, 00:28:01.207 "data_offset": 2048, 00:28:01.207 "data_size": 63488 00:28:01.207 } 00:28:01.207 ] 00:28:01.207 }' 00:28:01.207 18:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:01.207 18:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:01.773 18:55:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:01.773 18:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:01.773 18:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:01.773 18:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:01.773 18:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:01.773 18:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:01.773 18:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:02.031 18:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:02.031 "name": "raid_bdev1", 00:28:02.031 "uuid": "ef9815f4-970f-48c8-908c-7b4d734867b0", 00:28:02.031 "strip_size_kb": 0, 00:28:02.031 "state": "online", 00:28:02.031 "raid_level": "raid1", 00:28:02.031 "superblock": true, 00:28:02.031 "num_base_bdevs": 2, 00:28:02.031 "num_base_bdevs_discovered": 1, 00:28:02.031 "num_base_bdevs_operational": 1, 00:28:02.031 "base_bdevs_list": [ 00:28:02.031 { 00:28:02.031 "name": null, 00:28:02.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.031 "is_configured": false, 00:28:02.031 "data_offset": 2048, 00:28:02.031 "data_size": 63488 00:28:02.031 }, 00:28:02.031 { 00:28:02.031 "name": "BaseBdev2", 00:28:02.031 "uuid": "5ae1a83d-d701-5bbd-918c-1598ab2ac4b7", 00:28:02.031 "is_configured": true, 00:28:02.031 "data_offset": 2048, 00:28:02.031 "data_size": 63488 00:28:02.031 } 00:28:02.031 ] 00:28:02.031 }' 00:28:02.032 18:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:02.032 18:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:02.032 18:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:02.032 18:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:02.032 18:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:28:02.290 18:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:02.549 [2024-07-25 18:55:02.976402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:02.549 [2024-07-25 18:55:02.976726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:02.550 [2024-07-25 18:55:02.976803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:28:02.550 [2024-07-25 18:55:02.976899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:02.550 [2024-07-25 18:55:02.977455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:02.550 [2024-07-25 18:55:02.977591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:02.550 [2024-07-25 18:55:02.977824] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:02.550 [2024-07-25 18:55:02.977910] 
bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:02.550 [2024-07-25 18:55:02.977977] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:02.550 BaseBdev1 00:28:02.550 18:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:28:03.484 18:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:03.484 18:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:03.484 18:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:03.484 18:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:03.484 18:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:03.484 18:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:03.484 18:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:03.484 18:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:03.484 18:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:03.484 18:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:03.484 18:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.484 18:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:03.743 18:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:03.743 "name": "raid_bdev1", 00:28:03.743 "uuid": "ef9815f4-970f-48c8-908c-7b4d734867b0", 00:28:03.743 "strip_size_kb": 0, 00:28:03.743 "state": "online", 00:28:03.743 "raid_level": "raid1", 00:28:03.743 "superblock": true, 00:28:03.743 "num_base_bdevs": 2, 00:28:03.743 "num_base_bdevs_discovered": 1, 00:28:03.743 "num_base_bdevs_operational": 1, 00:28:03.743 "base_bdevs_list": [ 00:28:03.743 { 00:28:03.743 "name": null, 00:28:03.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.743 "is_configured": false, 00:28:03.743 "data_offset": 2048, 00:28:03.743 "data_size": 63488 00:28:03.743 }, 00:28:03.743 { 00:28:03.743 "name": "BaseBdev2", 00:28:03.743 "uuid": "5ae1a83d-d701-5bbd-918c-1598ab2ac4b7", 00:28:03.743 "is_configured": true, 00:28:03.743 "data_offset": 2048, 00:28:03.743 "data_size": 63488 00:28:03.743 } 00:28:03.743 ] 00:28:03.743 }' 00:28:03.743 18:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:03.743 18:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.677 18:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:04.677 18:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:04.677 18:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:04.677 18:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:04.677 18:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:04.677 18:55:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.677 18:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:04.677 18:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:04.677 "name": "raid_bdev1", 00:28:04.677 "uuid": "ef9815f4-970f-48c8-908c-7b4d734867b0", 00:28:04.677 "strip_size_kb": 0, 00:28:04.677 "state": "online", 00:28:04.677 "raid_level": "raid1", 00:28:04.677 "superblock": true, 00:28:04.677 "num_base_bdevs": 2, 00:28:04.677 "num_base_bdevs_discovered": 1, 00:28:04.677 "num_base_bdevs_operational": 1, 00:28:04.677 "base_bdevs_list": [ 00:28:04.677 { 00:28:04.677 "name": null, 00:28:04.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:04.677 "is_configured": false, 00:28:04.677 "data_offset": 2048, 00:28:04.677 "data_size": 63488 00:28:04.677 }, 00:28:04.677 { 00:28:04.677 "name": "BaseBdev2", 00:28:04.677 "uuid": "5ae1a83d-d701-5bbd-918c-1598ab2ac4b7", 00:28:04.677 "is_configured": true, 00:28:04.677 "data_offset": 2048, 00:28:04.677 "data_size": 63488 00:28:04.677 } 00:28:04.677 ] 00:28:04.677 }' 00:28:04.677 18:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:04.677 18:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:04.677 18:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:04.936 18:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:04.936 18:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:04.936 18:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:28:04.936 18:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:04.936 18:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:04.936 18:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:04.936 18:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:04.936 18:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:04.937 18:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:04.937 18:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:04.937 18:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:04.937 18:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:04.937 18:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:05.196 [2024-07-25 18:55:05.520440] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:05.196 [2024-07-25 18:55:05.520908] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:05.196 [2024-07-25 18:55:05.521014] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:05.196 request: 00:28:05.196 { 00:28:05.196 "base_bdev": "BaseBdev1", 00:28:05.196 "raid_bdev": "raid_bdev1", 00:28:05.196 "method": "bdev_raid_add_base_bdev", 00:28:05.196 "req_id": 1 00:28:05.196 } 00:28:05.196 Got JSON-RPC error response 00:28:05.196 response: 00:28:05.196 { 00:28:05.196 "code": -22, 00:28:05.196 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:05.196 } 00:28:05.196 18:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:28:05.196 18:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:05.196 18:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:05.196 18:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:05.196 18:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:28:06.136 18:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:06.136 18:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:06.136 18:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:06.136 18:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:06.136 18:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:06.136 18:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:06.136 18:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:06.136 18:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:06.136 18:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:06.136 18:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:06.136 18:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.136 18:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:06.395 18:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:06.395 "name": "raid_bdev1", 00:28:06.395 "uuid": "ef9815f4-970f-48c8-908c-7b4d734867b0", 00:28:06.395 "strip_size_kb": 0, 00:28:06.395 "state": "online", 00:28:06.395 "raid_level": "raid1", 00:28:06.395 "superblock": true, 00:28:06.395 "num_base_bdevs": 2, 00:28:06.395 "num_base_bdevs_discovered": 1, 00:28:06.395 "num_base_bdevs_operational": 1, 00:28:06.395 "base_bdevs_list": [ 00:28:06.395 { 00:28:06.395 "name": null, 00:28:06.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:06.395 "is_configured": false, 00:28:06.395 "data_offset": 2048, 00:28:06.395 "data_size": 63488 00:28:06.395 }, 00:28:06.395 { 00:28:06.395 "name": "BaseBdev2", 00:28:06.395 "uuid": "5ae1a83d-d701-5bbd-918c-1598ab2ac4b7", 
00:28:06.395 "is_configured": true, 00:28:06.395 "data_offset": 2048, 00:28:06.395 "data_size": 63488 00:28:06.395 } 00:28:06.395 ] 00:28:06.395 }' 00:28:06.395 18:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:06.395 18:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:06.963 18:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:06.963 18:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:06.963 18:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:06.963 18:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:06.963 18:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:06.963 18:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.963 18:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:07.222 18:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:07.222 "name": "raid_bdev1", 00:28:07.222 "uuid": "ef9815f4-970f-48c8-908c-7b4d734867b0", 00:28:07.222 "strip_size_kb": 0, 00:28:07.222 "state": "online", 00:28:07.222 "raid_level": "raid1", 00:28:07.222 "superblock": true, 00:28:07.222 "num_base_bdevs": 2, 00:28:07.222 "num_base_bdevs_discovered": 1, 00:28:07.222 "num_base_bdevs_operational": 1, 00:28:07.222 "base_bdevs_list": [ 00:28:07.222 { 00:28:07.222 "name": null, 00:28:07.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.222 "is_configured": false, 00:28:07.222 "data_offset": 2048, 00:28:07.222 "data_size": 63488 00:28:07.222 }, 00:28:07.222 { 00:28:07.222 "name": "BaseBdev2", 00:28:07.222 "uuid": "5ae1a83d-d701-5bbd-918c-1598ab2ac4b7", 00:28:07.222 "is_configured": true, 00:28:07.222 "data_offset": 2048, 00:28:07.222 "data_size": 63488 00:28:07.222 } 00:28:07.222 ] 00:28:07.222 }' 00:28:07.222 18:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:07.222 18:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:07.222 18:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:07.222 18:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:07.222 18:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 143907 00:28:07.222 18:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 143907 ']' 00:28:07.222 18:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 143907 00:28:07.222 18:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:28:07.222 18:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:07.222 18:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 143907 00:28:07.222 18:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:07.222 18:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:07.222 18:55:07 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 143907' 00:28:07.222 killing process with pid 143907 00:28:07.222 18:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 143907 00:28:07.222 Received shutdown signal, test time was about 60.000000 seconds 00:28:07.222 00:28:07.222 Latency(us) 00:28:07.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:07.222 =================================================================================================================== 00:28:07.222 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:07.222 18:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 143907 00:28:07.222 [2024-07-25 18:55:07.692237] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:07.222 [2024-07-25 18:55:07.692389] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:07.222 [2024-07-25 18:55:07.692445] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:07.222 [2024-07-25 18:55:07.692541] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state offline 00:28:07.480 [2024-07-25 18:55:08.016105] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:08.944 18:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:28:08.944 00:28:08.944 real 0m35.953s 00:28:08.944 user 0m51.593s 00:28:08.944 sys 0m6.328s 00:28:08.944 18:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:08.944 18:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:08.944 ************************************ 00:28:08.944 END TEST raid_rebuild_test_sb 00:28:08.944 ************************************ 00:28:09.204 18:55:09 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:28:09.204 18:55:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:28:09.204 18:55:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:09.204 18:55:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:09.204 ************************************ 00:28:09.204 START TEST raid_rebuild_test_io 00:28:09.204 ************************************ 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # echo BaseBdev1 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 
-- # (( i <= num_base_bdevs )) 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # echo BaseBdev2 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # raid_pid=144842 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 144842 /var/tmp/spdk-raid.sock 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 144842 ']' 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:09.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:09.204 18:55:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:09.204 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:09.204 Zero copy mechanism will not be used. 00:28:09.204 [2024-07-25 18:55:09.656012] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:28:09.204 [2024-07-25 18:55:09.656199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144842 ] 00:28:09.463 [2024-07-25 18:55:09.819069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.722 [2024-07-25 18:55:10.070173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.982 [2024-07-25 18:55:10.325101] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:09.982 18:55:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:09.982 18:55:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:28:09.982 18:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:28:09.982 18:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:10.241 BaseBdev1_malloc 00:28:10.241 18:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:10.500 [2024-07-25 18:55:10.943317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:10.500 [2024-07-25 18:55:10.943457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:10.500 [2024-07-25 18:55:10.943499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:28:10.500 [2024-07-25 18:55:10.943527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:10.500 [2024-07-25 18:55:10.946294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:10.500 [2024-07-25 18:55:10.946350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:10.500 BaseBdev1 00:28:10.500 18:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:28:10.500 18:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:10.759 BaseBdev2_malloc 00:28:10.759 18:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:11.019 [2024-07-25 18:55:11.423844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:11.019 [2024-07-25 18:55:11.423993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:11.019 [2024-07-25 18:55:11.424046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:28:11.019 [2024-07-25 18:55:11.424068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:11.019 [2024-07-25 18:55:11.426757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:11.019 [2024-07-25 18:55:11.426805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:11.019 BaseBdev2 00:28:11.019 18:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:11.278 spare_malloc 00:28:11.278 18:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:11.278 spare_delay 00:28:11.538 18:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:11.538 [2024-07-25 18:55:12.099711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:11.538 [2024-07-25 18:55:12.099846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:11.538 [2024-07-25 18:55:12.099889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:11.538 [2024-07-25 18:55:12.099917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:11.538 [2024-07-25 18:55:12.102687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:11.538 [2024-07-25 18:55:12.102743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:11.538 spare 00:28:11.797 18:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:28:11.797 [2024-07-25 18:55:12.283816] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:11.797 [2024-07-25 18:55:12.286130] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:11.797 [2024-07-25 18:55:12.286253] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:28:11.797 [2024-07-25 18:55:12.286263] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:11.797 [2024-07-25 18:55:12.286427] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:28:11.797 [2024-07-25 18:55:12.286785] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:28:11.797 [2024-07-25 18:55:12.286795] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:28:11.797 [2024-07-25 18:55:12.286989] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:11.797 18:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:11.797 18:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:11.797 18:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:11.797 18:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:11.797 18:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:11.797 18:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:11.797 18:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:11.797 18:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:11.797 18:55:12 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:11.797 18:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:11.797 18:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:11.797 18:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:12.056 18:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:12.056 "name": "raid_bdev1", 00:28:12.056 "uuid": "567ffc93-c1c9-490f-83fc-398583b2046f", 00:28:12.056 "strip_size_kb": 0, 00:28:12.056 "state": "online", 00:28:12.056 "raid_level": "raid1", 00:28:12.056 "superblock": false, 00:28:12.056 "num_base_bdevs": 2, 00:28:12.056 "num_base_bdevs_discovered": 2, 00:28:12.056 "num_base_bdevs_operational": 2, 00:28:12.056 "base_bdevs_list": [ 00:28:12.056 { 00:28:12.056 "name": "BaseBdev1", 00:28:12.056 "uuid": "33454aed-4fcb-5447-a05f-8a0050fdb6e7", 00:28:12.056 "is_configured": true, 00:28:12.056 "data_offset": 0, 00:28:12.056 "data_size": 65536 00:28:12.056 }, 00:28:12.056 { 00:28:12.056 "name": "BaseBdev2", 00:28:12.056 "uuid": "e4a405a2-e2fa-5a25-aae8-960fad654ef1", 00:28:12.056 "is_configured": true, 00:28:12.056 "data_offset": 0, 00:28:12.056 "data_size": 65536 00:28:12.056 } 00:28:12.056 ] 00:28:12.056 }' 00:28:12.056 18:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:12.056 18:55:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:12.623 18:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:28:12.623 18:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:12.882 [2024-07-25 18:55:13.220150] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:12.882 18:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:28:12.882 18:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:12.882 18:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:12.882 18:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:28:12.882 18:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:28:12.882 18:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:12.882 18:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@638 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:28:13.138 [2024-07-25 18:55:13.518625] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:13.138 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:13.138 Zero copy mechanism will not be used. 00:28:13.138 Running I/O for 60 seconds... 
00:28:13.138 [2024-07-25 18:55:13.610415] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:13.138 [2024-07-25 18:55:13.616024] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:28:13.138 18:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:13.138 18:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:13.138 18:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:13.138 18:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:13.138 18:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:13.138 18:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:13.138 18:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:13.138 18:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:13.138 18:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:13.138 18:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:13.138 18:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.139 18:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:13.396 18:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:13.396 "name": "raid_bdev1", 00:28:13.396 "uuid": "567ffc93-c1c9-490f-83fc-398583b2046f", 00:28:13.396 "strip_size_kb": 0, 00:28:13.396 "state": "online", 00:28:13.396 "raid_level": "raid1", 00:28:13.396 "superblock": false, 00:28:13.396 "num_base_bdevs": 2, 00:28:13.396 "num_base_bdevs_discovered": 1, 00:28:13.396 "num_base_bdevs_operational": 1, 00:28:13.396 "base_bdevs_list": [ 00:28:13.396 { 00:28:13.396 "name": null, 00:28:13.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:13.396 "is_configured": false, 00:28:13.396 "data_offset": 0, 00:28:13.396 "data_size": 65536 00:28:13.396 }, 00:28:13.396 { 00:28:13.396 "name": "BaseBdev2", 00:28:13.396 "uuid": "e4a405a2-e2fa-5a25-aae8-960fad654ef1", 00:28:13.396 "is_configured": true, 00:28:13.396 "data_offset": 0, 00:28:13.396 "data_size": 65536 00:28:13.396 } 00:28:13.396 ] 00:28:13.396 }' 00:28:13.396 18:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:13.396 18:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:13.961 18:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:14.220 [2024-07-25 18:55:14.639241] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:14.220 18:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:14.220 [2024-07-25 18:55:14.708544] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:28:14.220 [2024-07-25 18:55:14.710871] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:14.478 [2024-07-25 18:55:14.829932] bdev_raid.c: 
852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:14.478 [2024-07-25 18:55:14.958094] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:14.478 [2024-07-25 18:55:14.958471] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:14.737 [2024-07-25 18:55:15.185315] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:14.737 [2024-07-25 18:55:15.186002] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:14.995 [2024-07-25 18:55:15.317499] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:15.255 [2024-07-25 18:55:15.663841] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:15.255 18:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:15.255 18:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:15.255 18:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:15.255 18:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:15.255 18:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:15.255 18:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:15.255 18:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.255 [2024-07-25 18:55:15.802483] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:15.514 18:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:15.514 "name": "raid_bdev1", 00:28:15.514 "uuid": "567ffc93-c1c9-490f-83fc-398583b2046f", 00:28:15.514 "strip_size_kb": 0, 00:28:15.514 "state": "online", 00:28:15.514 "raid_level": "raid1", 00:28:15.514 "superblock": false, 00:28:15.514 "num_base_bdevs": 2, 00:28:15.514 "num_base_bdevs_discovered": 2, 00:28:15.514 "num_base_bdevs_operational": 2, 00:28:15.514 "process": { 00:28:15.514 "type": "rebuild", 00:28:15.514 "target": "spare", 00:28:15.514 "progress": { 00:28:15.514 "blocks": 18432, 00:28:15.514 "percent": 28 00:28:15.514 } 00:28:15.514 }, 00:28:15.514 "base_bdevs_list": [ 00:28:15.514 { 00:28:15.514 "name": "spare", 00:28:15.514 "uuid": "8294ec28-0e90-5467-87da-fd424c8c9f82", 00:28:15.514 "is_configured": true, 00:28:15.514 "data_offset": 0, 00:28:15.514 "data_size": 65536 00:28:15.514 }, 00:28:15.514 { 00:28:15.514 "name": "BaseBdev2", 00:28:15.514 "uuid": "e4a405a2-e2fa-5a25-aae8-960fad654ef1", 00:28:15.514 "is_configured": true, 00:28:15.514 "data_offset": 0, 00:28:15.514 "data_size": 65536 00:28:15.514 } 00:28:15.514 ] 00:28:15.514 }' 00:28:15.514 18:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:15.514 18:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:15.514 18:55:16 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:15.514 18:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:15.514 18:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:15.773 [2024-07-25 18:55:16.157383] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:28:15.773 [2024-07-25 18:55:16.231216] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:15.773 [2024-07-25 18:55:16.280512] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:28:16.032 [2024-07-25 18:55:16.387938] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:16.032 [2024-07-25 18:55:16.404494] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:16.032 [2024-07-25 18:55:16.404796] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:16.032 [2024-07-25 18:55:16.404842] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:16.032 [2024-07-25 18:55:16.444169] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:28:16.032 18:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:16.032 18:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:16.032 18:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:16.032 18:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:16.032 18:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:16.032 18:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:16.032 18:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:16.032 18:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:16.032 18:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:16.032 18:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:16.032 18:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.032 18:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.291 18:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:16.291 "name": "raid_bdev1", 00:28:16.291 "uuid": "567ffc93-c1c9-490f-83fc-398583b2046f", 00:28:16.291 "strip_size_kb": 0, 00:28:16.291 "state": "online", 00:28:16.291 "raid_level": "raid1", 00:28:16.291 "superblock": false, 00:28:16.291 "num_base_bdevs": 2, 00:28:16.291 "num_base_bdevs_discovered": 1, 00:28:16.291 "num_base_bdevs_operational": 1, 00:28:16.291 "base_bdevs_list": [ 00:28:16.291 { 00:28:16.291 "name": null, 00:28:16.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.291 "is_configured": false, 00:28:16.291 "data_offset": 0, 
00:28:16.291 "data_size": 65536 00:28:16.291 }, 00:28:16.291 { 00:28:16.291 "name": "BaseBdev2", 00:28:16.291 "uuid": "e4a405a2-e2fa-5a25-aae8-960fad654ef1", 00:28:16.291 "is_configured": true, 00:28:16.291 "data_offset": 0, 00:28:16.291 "data_size": 65536 00:28:16.291 } 00:28:16.291 ] 00:28:16.291 }' 00:28:16.291 18:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:16.291 18:55:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:16.858 18:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:16.858 18:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:16.858 18:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:16.858 18:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:16.858 18:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:16.858 18:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.858 18:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:17.118 18:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:17.118 "name": "raid_bdev1", 00:28:17.118 "uuid": "567ffc93-c1c9-490f-83fc-398583b2046f", 00:28:17.118 "strip_size_kb": 0, 00:28:17.118 "state": "online", 00:28:17.118 "raid_level": "raid1", 00:28:17.118 "superblock": false, 00:28:17.118 "num_base_bdevs": 2, 00:28:17.118 "num_base_bdevs_discovered": 1, 00:28:17.118 "num_base_bdevs_operational": 1, 00:28:17.118 "base_bdevs_list": [ 00:28:17.118 { 00:28:17.118 "name": null, 00:28:17.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.118 "is_configured": false, 00:28:17.118 "data_offset": 0, 00:28:17.118 "data_size": 65536 00:28:17.118 }, 00:28:17.118 { 00:28:17.118 "name": "BaseBdev2", 00:28:17.118 "uuid": "e4a405a2-e2fa-5a25-aae8-960fad654ef1", 00:28:17.118 "is_configured": true, 00:28:17.118 "data_offset": 0, 00:28:17.118 "data_size": 65536 00:28:17.118 } 00:28:17.118 ] 00:28:17.118 }' 00:28:17.118 18:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:17.118 18:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:17.118 18:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:17.377 18:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:17.377 18:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:17.377 [2024-07-25 18:55:17.945401] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:17.636 18:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:28:17.636 [2024-07-25 18:55:18.009631] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:28:17.636 [2024-07-25 18:55:18.011841] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:17.636 [2024-07-25 18:55:18.118536] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:17.636 [2024-07-25 18:55:18.119334] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:17.895 [2024-07-25 18:55:18.336096] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:17.895 [2024-07-25 18:55:18.336619] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:18.462 [2024-07-25 18:55:18.793682] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:18.462 [2024-07-25 18:55:18.794352] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:18.462 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:18.462 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:18.462 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:18.462 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:18.463 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:18.463 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:18.463 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:18.722 [2024-07-25 18:55:19.152470] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:18.722 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:18.722 "name": "raid_bdev1", 00:28:18.722 "uuid": "567ffc93-c1c9-490f-83fc-398583b2046f", 00:28:18.722 "strip_size_kb": 0, 00:28:18.722 "state": "online", 00:28:18.722 "raid_level": "raid1", 00:28:18.722 "superblock": false, 00:28:18.722 "num_base_bdevs": 2, 00:28:18.722 "num_base_bdevs_discovered": 2, 00:28:18.722 "num_base_bdevs_operational": 2, 00:28:18.722 "process": { 00:28:18.722 "type": "rebuild", 00:28:18.722 "target": "spare", 00:28:18.722 "progress": { 00:28:18.722 "blocks": 14336, 00:28:18.722 "percent": 21 00:28:18.722 } 00:28:18.722 }, 00:28:18.722 "base_bdevs_list": [ 00:28:18.722 { 00:28:18.722 "name": "spare", 00:28:18.722 "uuid": "8294ec28-0e90-5467-87da-fd424c8c9f82", 00:28:18.722 "is_configured": true, 00:28:18.722 "data_offset": 0, 00:28:18.722 "data_size": 65536 00:28:18.722 }, 00:28:18.722 { 00:28:18.722 "name": "BaseBdev2", 00:28:18.722 "uuid": "e4a405a2-e2fa-5a25-aae8-960fad654ef1", 00:28:18.722 "is_configured": true, 00:28:18.722 "data_offset": 0, 00:28:18.722 "data_size": 65536 00:28:18.722 } 00:28:18.722 ] 00:28:18.722 }' 00:28:18.722 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:18.981 [2024-07-25 18:55:19.303807] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:18.981 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:18.981 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:28:18.981 [2024-07-25 18:55:19.310083] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:18.981 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:18.981 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:28:18.981 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:28:18.981 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:28:18.981 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:28:18.981 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # local timeout=856 00:28:18.981 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:18.981 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:18.981 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:18.981 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:18.981 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:18.981 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:18.981 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:18.981 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.240 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:19.240 "name": "raid_bdev1", 00:28:19.240 "uuid": "567ffc93-c1c9-490f-83fc-398583b2046f", 00:28:19.240 "strip_size_kb": 0, 00:28:19.240 "state": "online", 00:28:19.240 "raid_level": "raid1", 00:28:19.240 "superblock": false, 00:28:19.240 "num_base_bdevs": 2, 00:28:19.240 "num_base_bdevs_discovered": 2, 00:28:19.240 "num_base_bdevs_operational": 2, 00:28:19.240 "process": { 00:28:19.240 "type": "rebuild", 00:28:19.240 "target": "spare", 00:28:19.240 "progress": { 00:28:19.240 "blocks": 18432, 00:28:19.240 "percent": 28 00:28:19.240 } 00:28:19.240 }, 00:28:19.240 "base_bdevs_list": [ 00:28:19.240 { 00:28:19.240 "name": "spare", 00:28:19.240 "uuid": "8294ec28-0e90-5467-87da-fd424c8c9f82", 00:28:19.240 "is_configured": true, 00:28:19.240 "data_offset": 0, 00:28:19.240 "data_size": 65536 00:28:19.240 }, 00:28:19.240 { 00:28:19.240 "name": "BaseBdev2", 00:28:19.240 "uuid": "e4a405a2-e2fa-5a25-aae8-960fad654ef1", 00:28:19.240 "is_configured": true, 00:28:19.240 "data_offset": 0, 00:28:19.240 "data_size": 65536 00:28:19.240 } 00:28:19.240 ] 00:28:19.240 }' 00:28:19.240 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:19.240 [2024-07-25 18:55:19.659498] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:28:19.240 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:19.240 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:19.240 18:55:19 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:19.240 18:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:28:19.498 [2024-07-25 18:55:19.883572] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:28:19.756 [2024-07-25 18:55:20.111398] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:28:20.322 18:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:20.322 18:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:20.322 18:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:20.322 18:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:20.322 18:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:20.322 18:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:20.322 18:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:20.322 18:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:20.580 18:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:20.580 "name": "raid_bdev1", 00:28:20.580 "uuid": "567ffc93-c1c9-490f-83fc-398583b2046f", 00:28:20.580 "strip_size_kb": 0, 00:28:20.580 "state": "online", 00:28:20.580 "raid_level": "raid1", 00:28:20.580 "superblock": false, 00:28:20.580 "num_base_bdevs": 2, 00:28:20.580 "num_base_bdevs_discovered": 2, 00:28:20.580 "num_base_bdevs_operational": 2, 00:28:20.580 "process": { 00:28:20.580 "type": "rebuild", 00:28:20.580 "target": "spare", 00:28:20.580 "progress": { 00:28:20.580 "blocks": 40960, 00:28:20.580 "percent": 62 00:28:20.580 } 00:28:20.580 }, 00:28:20.580 "base_bdevs_list": [ 00:28:20.580 { 00:28:20.580 "name": "spare", 00:28:20.580 "uuid": "8294ec28-0e90-5467-87da-fd424c8c9f82", 00:28:20.580 "is_configured": true, 00:28:20.580 "data_offset": 0, 00:28:20.580 "data_size": 65536 00:28:20.580 }, 00:28:20.580 { 00:28:20.580 "name": "BaseBdev2", 00:28:20.580 "uuid": "e4a405a2-e2fa-5a25-aae8-960fad654ef1", 00:28:20.580 "is_configured": true, 00:28:20.580 "data_offset": 0, 00:28:20.580 "data_size": 65536 00:28:20.580 } 00:28:20.580 ] 00:28:20.580 }' 00:28:20.580 18:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:20.580 18:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:20.580 18:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:20.580 18:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:20.580 18:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:28:21.147 [2024-07-25 18:55:21.530615] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:28:21.713 18:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:21.713 18:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:28:21.713 18:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:21.713 18:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:21.713 18:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:21.713 18:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:21.713 18:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:21.713 18:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:21.972 [2024-07-25 18:55:22.305831] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:21.972 18:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:21.972 "name": "raid_bdev1", 00:28:21.972 "uuid": "567ffc93-c1c9-490f-83fc-398583b2046f", 00:28:21.972 "strip_size_kb": 0, 00:28:21.972 "state": "online", 00:28:21.972 "raid_level": "raid1", 00:28:21.972 "superblock": false, 00:28:21.972 "num_base_bdevs": 2, 00:28:21.972 "num_base_bdevs_discovered": 2, 00:28:21.972 "num_base_bdevs_operational": 2, 00:28:21.972 "process": { 00:28:21.972 "type": "rebuild", 00:28:21.972 "target": "spare", 00:28:21.972 "progress": { 00:28:21.972 "blocks": 65536, 00:28:21.972 "percent": 100 00:28:21.972 } 00:28:21.972 }, 00:28:21.972 "base_bdevs_list": [ 00:28:21.972 { 00:28:21.972 "name": "spare", 00:28:21.972 "uuid": "8294ec28-0e90-5467-87da-fd424c8c9f82", 00:28:21.972 "is_configured": true, 00:28:21.972 "data_offset": 0, 00:28:21.972 "data_size": 65536 00:28:21.972 }, 00:28:21.972 { 00:28:21.972 "name": "BaseBdev2", 00:28:21.972 "uuid": "e4a405a2-e2fa-5a25-aae8-960fad654ef1", 00:28:21.972 "is_configured": true, 00:28:21.972 "data_offset": 0, 00:28:21.972 "data_size": 65536 00:28:21.972 } 00:28:21.972 ] 00:28:21.972 }' 00:28:21.972 18:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:21.972 [2024-07-25 18:55:22.405876] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:21.972 18:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:21.972 18:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:21.972 [2024-07-25 18:55:22.416262] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:21.972 18:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:21.972 18:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:28:22.906 18:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:22.906 18:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:22.906 18:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:22.906 18:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:22.906 18:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:22.906 18:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:22.906 
18:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.906 18:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:23.171 18:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:23.171 "name": "raid_bdev1", 00:28:23.171 "uuid": "567ffc93-c1c9-490f-83fc-398583b2046f", 00:28:23.171 "strip_size_kb": 0, 00:28:23.171 "state": "online", 00:28:23.171 "raid_level": "raid1", 00:28:23.171 "superblock": false, 00:28:23.171 "num_base_bdevs": 2, 00:28:23.171 "num_base_bdevs_discovered": 2, 00:28:23.171 "num_base_bdevs_operational": 2, 00:28:23.171 "base_bdevs_list": [ 00:28:23.171 { 00:28:23.171 "name": "spare", 00:28:23.171 "uuid": "8294ec28-0e90-5467-87da-fd424c8c9f82", 00:28:23.171 "is_configured": true, 00:28:23.171 "data_offset": 0, 00:28:23.171 "data_size": 65536 00:28:23.171 }, 00:28:23.171 { 00:28:23.171 "name": "BaseBdev2", 00:28:23.171 "uuid": "e4a405a2-e2fa-5a25-aae8-960fad654ef1", 00:28:23.171 "is_configured": true, 00:28:23.171 "data_offset": 0, 00:28:23.171 "data_size": 65536 00:28:23.171 } 00:28:23.171 ] 00:28:23.171 }' 00:28:23.171 18:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:23.429 18:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:23.429 18:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:23.429 18:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:23.429 18:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # break 00:28:23.429 18:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:23.429 18:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:23.429 18:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:23.429 18:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:23.429 18:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:23.430 18:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:23.430 18:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:23.687 18:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:23.687 "name": "raid_bdev1", 00:28:23.687 "uuid": "567ffc93-c1c9-490f-83fc-398583b2046f", 00:28:23.687 "strip_size_kb": 0, 00:28:23.687 "state": "online", 00:28:23.687 "raid_level": "raid1", 00:28:23.687 "superblock": false, 00:28:23.687 "num_base_bdevs": 2, 00:28:23.687 "num_base_bdevs_discovered": 2, 00:28:23.687 "num_base_bdevs_operational": 2, 00:28:23.687 "base_bdevs_list": [ 00:28:23.687 { 00:28:23.687 "name": "spare", 00:28:23.687 "uuid": "8294ec28-0e90-5467-87da-fd424c8c9f82", 00:28:23.687 "is_configured": true, 00:28:23.688 "data_offset": 0, 00:28:23.688 "data_size": 65536 00:28:23.688 }, 00:28:23.688 { 00:28:23.688 "name": "BaseBdev2", 00:28:23.688 "uuid": "e4a405a2-e2fa-5a25-aae8-960fad654ef1", 00:28:23.688 "is_configured": true, 00:28:23.688 "data_offset": 0, 
00:28:23.688 "data_size": 65536 00:28:23.688 } 00:28:23.688 ] 00:28:23.688 }' 00:28:23.688 18:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:23.688 18:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:23.688 18:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:23.688 18:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:23.688 18:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:23.688 18:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:23.688 18:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:23.688 18:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:23.688 18:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:23.688 18:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:23.688 18:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:23.688 18:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:23.688 18:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:23.688 18:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:23.688 18:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:23.688 18:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:23.946 18:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:23.946 "name": "raid_bdev1", 00:28:23.946 "uuid": "567ffc93-c1c9-490f-83fc-398583b2046f", 00:28:23.946 "strip_size_kb": 0, 00:28:23.946 "state": "online", 00:28:23.946 "raid_level": "raid1", 00:28:23.946 "superblock": false, 00:28:23.946 "num_base_bdevs": 2, 00:28:23.946 "num_base_bdevs_discovered": 2, 00:28:23.946 "num_base_bdevs_operational": 2, 00:28:23.946 "base_bdevs_list": [ 00:28:23.946 { 00:28:23.946 "name": "spare", 00:28:23.946 "uuid": "8294ec28-0e90-5467-87da-fd424c8c9f82", 00:28:23.946 "is_configured": true, 00:28:23.946 "data_offset": 0, 00:28:23.946 "data_size": 65536 00:28:23.946 }, 00:28:23.946 { 00:28:23.946 "name": "BaseBdev2", 00:28:23.946 "uuid": "e4a405a2-e2fa-5a25-aae8-960fad654ef1", 00:28:23.946 "is_configured": true, 00:28:23.946 "data_offset": 0, 00:28:23.946 "data_size": 65536 00:28:23.946 } 00:28:23.946 ] 00:28:23.946 }' 00:28:23.946 18:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:23.946 18:55:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:24.513 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:24.772 [2024-07-25 18:55:25.265468] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:24.772 [2024-07-25 18:55:25.265796] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:25.031 00:28:25.031 
Latency(us) 00:28:25.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.031 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:28:25.031 raid_bdev1 : 11.84 110.76 332.29 0.00 0.00 12915.62 310.13 111348.78 00:28:25.031 =================================================================================================================== 00:28:25.031 Total : 110.76 332.29 0.00 0.00 12915.62 310.13 111348.78 00:28:25.031 [2024-07-25 18:55:25.380800] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:25.031 [2024-07-25 18:55:25.380971] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:25.031 [2024-07-25 18:55:25.381090] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:25.031 [2024-07-25 18:55:25.381172] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:28:25.031 0 00:28:25.031 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:25.031 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # jq length 00:28:25.031 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:28:25.031 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:28:25.031 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:28:25.031 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:28:25.031 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:25.031 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:28:25.031 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:25.031 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:25.031 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:25.031 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:28:25.031 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:25.031 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:25.031 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:28:25.290 /dev/nbd0 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:28:25.550 18:55:25 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:25.550 1+0 records in 00:28:25.550 1+0 records out 00:28:25.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429568 s, 9.5 MB/s 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev2 ']' 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:25.550 18:55:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:28:25.809 /dev/nbd1 00:28:25.809 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:25.809 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:25.809 18:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:28:25.809 18:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:28:25.809 18:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:25.809 18:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:25.809 18:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:28:25.809 18:55:26 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:28:25.809 18:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:25.809 18:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:25.809 18:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:25.809 1+0 records in 00:28:25.809 1+0 records out 00:28:25.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329979 s, 12.4 MB/s 00:28:25.809 18:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:25.809 18:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:28:25.809 18:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:25.809 18:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:25.809 18:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:28:25.809 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:25.809 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:25.809 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:28:26.068 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:28:26.068 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:26.068 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:28:26.068 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:26.068 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:28:26.068 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:26.068 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:26.327 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:26.327 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:26.327 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:26.327 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:26.327 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:26.327 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:26.327 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:28:26.327 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:26.327 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:26.327 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:26.327 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:26.327 18:55:26 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:26.327 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:28:26.327 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:26.327 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:26.586 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:26.586 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:26.586 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:26.586 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:26.586 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:26.586 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:26.586 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:28:26.586 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:26.586 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:28:26.586 18:55:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@798 -- # killprocess 144842 00:28:26.586 18:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 144842 ']' 00:28:26.586 18:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 144842 00:28:26.586 18:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:28:26.586 18:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:26.586 18:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 144842 00:28:26.586 18:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:26.586 18:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:26.586 18:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 144842' 00:28:26.586 killing process with pid 144842 00:28:26.586 18:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 144842 00:28:26.586 Received shutdown signal, test time was about 13.440702 seconds 00:28:26.586 00:28:26.586 Latency(us) 00:28:26.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.586 =================================================================================================================== 00:28:26.586 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:26.586 [2024-07-25 18:55:26.962006] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:26.586 18:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 144842 00:28:26.845 [2024-07-25 18:55:27.223295] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:28.222 18:55:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@800 -- # return 0 00:28:28.222 00:28:28.222 real 0m19.183s 00:28:28.222 user 0m27.940s 00:28:28.222 sys 0m2.738s 00:28:28.222 18:55:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:28.222 18:55:28 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:28.222 ************************************ 00:28:28.222 END TEST raid_rebuild_test_io 00:28:28.222 ************************************ 00:28:28.481 18:55:28 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:28:28.481 18:55:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:28:28.481 18:55:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:28.481 18:55:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:28.481 ************************************ 00:28:28.481 START TEST raid_rebuild_test_sb_io 00:28:28.481 ************************************ 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # echo BaseBdev1 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # echo BaseBdev2 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:28:28.481 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:28:28.482 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # raid_pid=145333 00:28:28.482 
18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 145333 /var/tmp/spdk-raid.sock 00:28:28.482 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:28.482 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 145333 ']' 00:28:28.482 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:28.482 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:28.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:28.482 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:28.482 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:28.482 18:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:28.482 [2024-07-25 18:55:28.936242] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:28.482 [2024-07-25 18:55:28.936489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145333 ] 00:28:28.482 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:28.482 Zero copy mechanism will not be used. 
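The trace above shows the sb_io variant starting its own bdevperf app against a fresh RPC socket and then blocking until that socket answers. A minimal sketch of that launch-and-wait step, assuming the same binary path, flags and socket seen in the log (the backgrounding and $! capture here are illustrative, not the script's verbatim lines):

    # launch bdevperf as the RAID I/O generator; -z keeps it idle until configured over RPC
    rpc_server=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r "$rpc_server" -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # waitforlisten (from autotest_common.sh) polls until the UNIX socket accepts RPCs
    waitforlisten "$raid_pid" "$rpc_server"

Only after waitforlisten returns does the test begin issuing the bdev_malloc_create/bdev_passthru_create RPCs that appear next in the log.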
00:28:28.740 [2024-07-25 18:55:29.119806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.998 [2024-07-25 18:55:29.385156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.256 [2024-07-25 18:55:29.658219] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:29.515 18:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:29.515 18:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:28:29.515 18:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:28:29.515 18:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:29.515 BaseBdev1_malloc 00:28:29.774 18:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:29.774 [2024-07-25 18:55:30.267632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:29.774 [2024-07-25 18:55:30.267776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:29.774 [2024-07-25 18:55:30.267828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:28:29.774 [2024-07-25 18:55:30.267850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:29.774 [2024-07-25 18:55:30.270637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:29.774 [2024-07-25 18:55:30.270692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:29.774 BaseBdev1 00:28:29.774 18:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:28:29.774 18:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:30.033 BaseBdev2_malloc 00:28:30.033 18:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:30.292 [2024-07-25 18:55:30.686805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:30.292 [2024-07-25 18:55:30.686950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:30.292 [2024-07-25 18:55:30.687011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:28:30.292 [2024-07-25 18:55:30.687033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:30.292 [2024-07-25 18:55:30.689704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:30.292 [2024-07-25 18:55:30.689786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:30.292 BaseBdev2 00:28:30.292 18:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:30.551 spare_malloc 00:28:30.551 18:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:30.810 spare_delay 00:28:30.810 18:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:30.810 [2024-07-25 18:55:31.326498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:30.810 [2024-07-25 18:55:31.326633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:30.810 [2024-07-25 18:55:31.326677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:30.810 [2024-07-25 18:55:31.326706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:30.810 [2024-07-25 18:55:31.329424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:30.810 [2024-07-25 18:55:31.329482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:30.810 spare 00:28:30.810 18:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:28:31.069 [2024-07-25 18:55:31.502599] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:31.069 [2024-07-25 18:55:31.504969] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:31.069 [2024-07-25 18:55:31.505168] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:28:31.069 [2024-07-25 18:55:31.505179] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:31.069 [2024-07-25 18:55:31.505336] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:28:31.069 [2024-07-25 18:55:31.505700] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:28:31.069 [2024-07-25 18:55:31.505719] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:28:31.069 [2024-07-25 18:55:31.505909] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:31.069 18:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:31.069 18:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:31.069 18:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:31.070 18:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:31.070 18:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:31.070 18:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:31.070 18:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:31.070 18:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:31.070 18:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:31.070 18:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:31.070 18:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.070 18:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:31.329 18:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:31.329 "name": "raid_bdev1", 00:28:31.329 "uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:31.329 "strip_size_kb": 0, 00:28:31.329 "state": "online", 00:28:31.329 "raid_level": "raid1", 00:28:31.329 "superblock": true, 00:28:31.329 "num_base_bdevs": 2, 00:28:31.329 "num_base_bdevs_discovered": 2, 00:28:31.329 "num_base_bdevs_operational": 2, 00:28:31.329 "base_bdevs_list": [ 00:28:31.329 { 00:28:31.329 "name": "BaseBdev1", 00:28:31.329 "uuid": "7a089ab2-4357-56d6-8938-8b1344ec03f0", 00:28:31.329 "is_configured": true, 00:28:31.329 "data_offset": 2048, 00:28:31.329 "data_size": 63488 00:28:31.329 }, 00:28:31.329 { 00:28:31.329 "name": "BaseBdev2", 00:28:31.329 "uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:31.329 "is_configured": true, 00:28:31.329 "data_offset": 2048, 00:28:31.329 "data_size": 63488 00:28:31.329 } 00:28:31.329 ] 00:28:31.329 }' 00:28:31.329 18:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:31.329 18:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:31.895 18:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:31.895 18:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:28:31.895 [2024-07-25 18:55:32.422959] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:31.895 18:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:28:31.895 18:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.895 18:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:32.154 18:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:28:32.154 18:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:28:32.154 18:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:32.154 18:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@638 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:28:32.154 [2024-07-25 18:55:32.713678] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:32.154 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:32.154 Zero copy mechanism will not be used. 00:28:32.154 Running I/O for 60 seconds... 
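Everything in this run follows one pattern: query the raid bdev over the UNIX-socket RPC, filter the JSON with jq, and act on the result. A short sketch of the exact queries this passage performs before kicking off background I/O, using only the RPC names and jq filters visible in the trace (the shell variables are illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # size of the assembled raid1 bdev and the superblock data offset of its first base bdev
    raid_bdev_size=$("$rpc" -s "$sock" bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks')
    data_offset=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset')

    # degrade the array, then drive the 60-second randrw workload against it
    "$rpc" -s "$sock" bdev_raid_remove_base_bdev BaseBdev1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests

With the superblock enabled, data_offset comes back as 2048 blocks, which is why the later JSON dumps report "data_offset": 2048 and "data_size": 63488 instead of the 0/65536 pair seen in the non-superblock test earlier in the log.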
00:28:32.413 [2024-07-25 18:55:32.796472] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:32.413 [2024-07-25 18:55:32.802121] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:28:32.413 18:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:32.413 18:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:32.413 18:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:32.413 18:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:32.413 18:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:32.413 18:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:32.413 18:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:32.413 18:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:32.413 18:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:32.413 18:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:32.413 18:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:32.413 18:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:32.672 18:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:32.672 "name": "raid_bdev1", 00:28:32.672 "uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:32.672 "strip_size_kb": 0, 00:28:32.672 "state": "online", 00:28:32.672 "raid_level": "raid1", 00:28:32.672 "superblock": true, 00:28:32.672 "num_base_bdevs": 2, 00:28:32.672 "num_base_bdevs_discovered": 1, 00:28:32.672 "num_base_bdevs_operational": 1, 00:28:32.672 "base_bdevs_list": [ 00:28:32.672 { 00:28:32.672 "name": null, 00:28:32.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:32.672 "is_configured": false, 00:28:32.672 "data_offset": 2048, 00:28:32.672 "data_size": 63488 00:28:32.672 }, 00:28:32.672 { 00:28:32.672 "name": "BaseBdev2", 00:28:32.672 "uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:32.672 "is_configured": true, 00:28:32.672 "data_offset": 2048, 00:28:32.672 "data_size": 63488 00:28:32.672 } 00:28:32.672 ] 00:28:32.672 }' 00:28:32.672 18:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:32.672 18:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:33.243 18:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:33.502 [2024-07-25 18:55:33.957952] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:33.502 18:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:33.502 [2024-07-25 18:55:34.027456] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:28:33.502 [2024-07-25 18:55:34.029637] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:33.760 
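From this point the log interleaves the raid_bdev_submit_rw_request split/process_offset debug lines with the same polling pattern seen earlier: fetch the raid bdev once per second and keep waiting only while .process still reports a rebuild targeting the spare. A compact sketch of that loop, modeled on the @721-@726 and @182-@190 trace lines (the variable names and the 60-second budget are illustrative assumptions):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    timeout=$((SECONDS + 60))
    while (( SECONDS < timeout )); do
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        # once the rebuild finishes, .process disappears and both filters fall back to "none"
        [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
        [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]] || break
        sleep 1
    done

The progress.blocks/percent values in the JSON dumps are what such a loop sees on each pass; when the rebuild completes, the process object is dropped from the RPC output and the loop exits through the first break.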
[2024-07-25 18:55:34.144401] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:33.760 [2024-07-25 18:55:34.144902] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:34.017 [2024-07-25 18:55:34.354729] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:34.017 [2024-07-25 18:55:34.355104] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:34.274 [2024-07-25 18:55:34.796189] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:34.274 [2024-07-25 18:55:34.796583] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:34.533 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:34.533 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:34.533 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:34.533 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:34.533 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:34.533 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:34.533 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:34.791 [2024-07-25 18:55:35.175224] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:34.791 [2024-07-25 18:55:35.175579] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:34.791 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:34.791 "name": "raid_bdev1", 00:28:34.791 "uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:34.791 "strip_size_kb": 0, 00:28:34.791 "state": "online", 00:28:34.791 "raid_level": "raid1", 00:28:34.791 "superblock": true, 00:28:34.791 "num_base_bdevs": 2, 00:28:34.791 "num_base_bdevs_discovered": 2, 00:28:34.791 "num_base_bdevs_operational": 2, 00:28:34.791 "process": { 00:28:34.791 "type": "rebuild", 00:28:34.791 "target": "spare", 00:28:34.791 "progress": { 00:28:34.791 "blocks": 16384, 00:28:34.791 "percent": 25 00:28:34.791 } 00:28:34.791 }, 00:28:34.791 "base_bdevs_list": [ 00:28:34.791 { 00:28:34.791 "name": "spare", 00:28:34.791 "uuid": "fcba03d2-68bc-57e7-b99d-6c00c545b49d", 00:28:34.791 "is_configured": true, 00:28:34.791 "data_offset": 2048, 00:28:34.791 "data_size": 63488 00:28:34.791 }, 00:28:34.791 { 00:28:34.791 "name": "BaseBdev2", 00:28:34.791 "uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:34.791 "is_configured": true, 00:28:34.791 "data_offset": 2048, 00:28:34.791 "data_size": 63488 00:28:34.791 } 00:28:34.791 ] 00:28:34.791 }' 00:28:34.791 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:34.791 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:28:34.791 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:35.049 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:35.049 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:35.049 [2024-07-25 18:55:35.429109] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:28:35.049 [2024-07-25 18:55:35.435353] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:28:35.049 [2024-07-25 18:55:35.558677] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:35.307 [2024-07-25 18:55:35.644111] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:28:35.307 [2024-07-25 18:55:35.644469] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:28:35.307 [2024-07-25 18:55:35.746848] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:35.307 [2024-07-25 18:55:35.749679] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:35.307 [2024-07-25 18:55:35.749715] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:35.307 [2024-07-25 18:55:35.749725] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:35.308 [2024-07-25 18:55:35.796701] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:28:35.308 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:35.308 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:35.308 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:35.308 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:35.308 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:35.308 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:35.308 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:35.308 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:35.308 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:35.308 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:35.308 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.308 18:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.565 18:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:35.565 "name": "raid_bdev1", 00:28:35.565 "uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:35.565 "strip_size_kb": 0, 
00:28:35.565 "state": "online", 00:28:35.565 "raid_level": "raid1", 00:28:35.565 "superblock": true, 00:28:35.565 "num_base_bdevs": 2, 00:28:35.565 "num_base_bdevs_discovered": 1, 00:28:35.565 "num_base_bdevs_operational": 1, 00:28:35.565 "base_bdevs_list": [ 00:28:35.565 { 00:28:35.565 "name": null, 00:28:35.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:35.565 "is_configured": false, 00:28:35.565 "data_offset": 2048, 00:28:35.565 "data_size": 63488 00:28:35.565 }, 00:28:35.565 { 00:28:35.565 "name": "BaseBdev2", 00:28:35.565 "uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:35.565 "is_configured": true, 00:28:35.565 "data_offset": 2048, 00:28:35.565 "data_size": 63488 00:28:35.565 } 00:28:35.565 ] 00:28:35.565 }' 00:28:35.565 18:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:35.566 18:55:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:36.131 18:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:36.131 18:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:36.131 18:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:36.131 18:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:36.131 18:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:36.132 18:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:36.132 18:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:36.390 18:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:36.390 "name": "raid_bdev1", 00:28:36.390 "uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:36.390 "strip_size_kb": 0, 00:28:36.390 "state": "online", 00:28:36.390 "raid_level": "raid1", 00:28:36.390 "superblock": true, 00:28:36.390 "num_base_bdevs": 2, 00:28:36.390 "num_base_bdevs_discovered": 1, 00:28:36.390 "num_base_bdevs_operational": 1, 00:28:36.390 "base_bdevs_list": [ 00:28:36.390 { 00:28:36.390 "name": null, 00:28:36.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:36.390 "is_configured": false, 00:28:36.390 "data_offset": 2048, 00:28:36.390 "data_size": 63488 00:28:36.390 }, 00:28:36.390 { 00:28:36.390 "name": "BaseBdev2", 00:28:36.390 "uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:36.390 "is_configured": true, 00:28:36.390 "data_offset": 2048, 00:28:36.390 "data_size": 63488 00:28:36.390 } 00:28:36.390 ] 00:28:36.390 }' 00:28:36.390 18:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:36.390 18:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:36.390 18:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:36.390 18:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:36.390 18:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:36.647 [2024-07-25 18:55:37.131831] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
spare is claimed 00:28:36.647 [2024-07-25 18:55:37.194159] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:28:36.648 [2024-07-25 18:55:37.196450] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:36.648 18:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:28:36.917 [2024-07-25 18:55:37.305611] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:36.917 [2024-07-25 18:55:37.306145] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:37.199 [2024-07-25 18:55:37.527623] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:37.199 [2024-07-25 18:55:37.527980] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:37.199 [2024-07-25 18:55:37.768022] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:37.457 [2024-07-25 18:55:37.983268] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:37.457 [2024-07-25 18:55:37.983546] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:37.715 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:37.715 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:37.715 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:37.715 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:37.715 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:37.715 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:37.715 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:37.715 [2024-07-25 18:55:38.219842] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:37.973 [2024-07-25 18:55:38.329709] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:37.973 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:37.973 "name": "raid_bdev1", 00:28:37.973 "uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:37.973 "strip_size_kb": 0, 00:28:37.973 "state": "online", 00:28:37.973 "raid_level": "raid1", 00:28:37.973 "superblock": true, 00:28:37.973 "num_base_bdevs": 2, 00:28:37.973 "num_base_bdevs_discovered": 2, 00:28:37.973 "num_base_bdevs_operational": 2, 00:28:37.973 "process": { 00:28:37.973 "type": "rebuild", 00:28:37.973 "target": "spare", 00:28:37.973 "progress": { 00:28:37.973 "blocks": 16384, 00:28:37.973 "percent": 25 00:28:37.973 } 00:28:37.973 }, 00:28:37.973 "base_bdevs_list": [ 00:28:37.973 { 00:28:37.973 "name": "spare", 00:28:37.973 "uuid": "fcba03d2-68bc-57e7-b99d-6c00c545b49d", 00:28:37.973 "is_configured": true, 
00:28:37.973 "data_offset": 2048, 00:28:37.973 "data_size": 63488 00:28:37.973 }, 00:28:37.973 { 00:28:37.973 "name": "BaseBdev2", 00:28:37.973 "uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:37.973 "is_configured": true, 00:28:37.973 "data_offset": 2048, 00:28:37.973 "data_size": 63488 00:28:37.973 } 00:28:37.973 ] 00:28:37.973 }' 00:28:37.973 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:37.973 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:37.973 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:38.232 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:38.232 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:28:38.232 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:28:38.232 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:28:38.232 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:28:38.232 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:28:38.232 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:28:38.232 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # local timeout=875 00:28:38.232 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:38.232 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:38.232 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:38.232 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:38.232 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:38.232 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:38.232 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.232 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.232 [2024-07-25 18:55:38.654297] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:28:38.490 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:38.490 "name": "raid_bdev1", 00:28:38.490 "uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:38.490 "strip_size_kb": 0, 00:28:38.490 "state": "online", 00:28:38.490 "raid_level": "raid1", 00:28:38.490 "superblock": true, 00:28:38.490 "num_base_bdevs": 2, 00:28:38.490 "num_base_bdevs_discovered": 2, 00:28:38.491 "num_base_bdevs_operational": 2, 00:28:38.491 "process": { 00:28:38.491 "type": "rebuild", 00:28:38.491 "target": "spare", 00:28:38.491 "progress": { 00:28:38.491 "blocks": 20480, 00:28:38.491 "percent": 32 00:28:38.491 } 00:28:38.491 }, 00:28:38.491 "base_bdevs_list": [ 00:28:38.491 { 00:28:38.491 "name": "spare", 00:28:38.491 "uuid": "fcba03d2-68bc-57e7-b99d-6c00c545b49d", 00:28:38.491 "is_configured": 
true, 00:28:38.491 "data_offset": 2048, 00:28:38.491 "data_size": 63488 00:28:38.491 }, 00:28:38.491 { 00:28:38.491 "name": "BaseBdev2", 00:28:38.491 "uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:38.491 "is_configured": true, 00:28:38.491 "data_offset": 2048, 00:28:38.491 "data_size": 63488 00:28:38.491 } 00:28:38.491 ] 00:28:38.491 }' 00:28:38.491 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:38.491 [2024-07-25 18:55:38.856226] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:28:38.491 [2024-07-25 18:55:38.856584] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:28:38.491 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:38.491 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:38.491 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:38.491 18:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:28:39.058 [2024-07-25 18:55:39.442861] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:28:39.626 18:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:39.627 18:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:39.627 18:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:39.627 18:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:39.627 18:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:39.627 18:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:39.627 18:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.627 18:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:39.627 [2024-07-25 18:55:40.111361] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:28:39.627 18:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:39.627 "name": "raid_bdev1", 00:28:39.627 "uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:39.627 "strip_size_kb": 0, 00:28:39.627 "state": "online", 00:28:39.627 "raid_level": "raid1", 00:28:39.627 "superblock": true, 00:28:39.627 "num_base_bdevs": 2, 00:28:39.627 "num_base_bdevs_discovered": 2, 00:28:39.627 "num_base_bdevs_operational": 2, 00:28:39.627 "process": { 00:28:39.627 "type": "rebuild", 00:28:39.627 "target": "spare", 00:28:39.627 "progress": { 00:28:39.627 "blocks": 45056, 00:28:39.627 "percent": 70 00:28:39.627 } 00:28:39.627 }, 00:28:39.627 "base_bdevs_list": [ 00:28:39.627 { 00:28:39.627 "name": "spare", 00:28:39.627 "uuid": "fcba03d2-68bc-57e7-b99d-6c00c545b49d", 00:28:39.627 "is_configured": true, 00:28:39.627 "data_offset": 2048, 00:28:39.627 "data_size": 63488 00:28:39.627 }, 00:28:39.627 { 00:28:39.627 "name": "BaseBdev2", 00:28:39.627 
"uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:39.627 "is_configured": true, 00:28:39.627 "data_offset": 2048, 00:28:39.627 "data_size": 63488 00:28:39.627 } 00:28:39.627 ] 00:28:39.627 }' 00:28:39.627 18:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:39.885 18:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:39.885 18:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:39.885 18:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:39.885 18:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:28:40.450 [2024-07-25 18:55:40.881112] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:28:40.708 [2024-07-25 18:55:41.095380] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:28:40.708 18:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:40.708 18:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:40.708 18:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:40.708 18:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:40.708 18:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:40.708 18:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:40.708 18:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.708 18:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:40.967 [2024-07-25 18:55:41.425969] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:40.967 18:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:40.967 "name": "raid_bdev1", 00:28:40.967 "uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:40.967 "strip_size_kb": 0, 00:28:40.967 "state": "online", 00:28:40.967 "raid_level": "raid1", 00:28:40.967 "superblock": true, 00:28:40.967 "num_base_bdevs": 2, 00:28:40.967 "num_base_bdevs_discovered": 2, 00:28:40.967 "num_base_bdevs_operational": 2, 00:28:40.967 "process": { 00:28:40.967 "type": "rebuild", 00:28:40.967 "target": "spare", 00:28:40.967 "progress": { 00:28:40.967 "blocks": 63488, 00:28:40.967 "percent": 100 00:28:40.967 } 00:28:40.967 }, 00:28:40.967 "base_bdevs_list": [ 00:28:40.967 { 00:28:40.967 "name": "spare", 00:28:40.967 "uuid": "fcba03d2-68bc-57e7-b99d-6c00c545b49d", 00:28:40.967 "is_configured": true, 00:28:40.967 "data_offset": 2048, 00:28:40.967 "data_size": 63488 00:28:40.967 }, 00:28:40.967 { 00:28:40.967 "name": "BaseBdev2", 00:28:40.967 "uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:40.967 "is_configured": true, 00:28:40.967 "data_offset": 2048, 00:28:40.967 "data_size": 63488 00:28:40.967 } 00:28:40.967 ] 00:28:40.967 }' 00:28:40.967 [2024-07-25 18:55:41.525936] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:40.967 
18:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:40.967 [2024-07-25 18:55:41.528670] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:41.226 18:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:41.226 18:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:41.226 18:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:41.226 18:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:28:42.163 18:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:42.163 18:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:42.163 18:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:42.163 18:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:42.163 18:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:42.163 18:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:42.163 18:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.163 18:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:42.422 18:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:42.422 "name": "raid_bdev1", 00:28:42.422 "uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:42.422 "strip_size_kb": 0, 00:28:42.422 "state": "online", 00:28:42.422 "raid_level": "raid1", 00:28:42.422 "superblock": true, 00:28:42.422 "num_base_bdevs": 2, 00:28:42.422 "num_base_bdevs_discovered": 2, 00:28:42.422 "num_base_bdevs_operational": 2, 00:28:42.422 "base_bdevs_list": [ 00:28:42.422 { 00:28:42.422 "name": "spare", 00:28:42.422 "uuid": "fcba03d2-68bc-57e7-b99d-6c00c545b49d", 00:28:42.422 "is_configured": true, 00:28:42.422 "data_offset": 2048, 00:28:42.422 "data_size": 63488 00:28:42.422 }, 00:28:42.422 { 00:28:42.422 "name": "BaseBdev2", 00:28:42.422 "uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:42.422 "is_configured": true, 00:28:42.422 "data_offset": 2048, 00:28:42.422 "data_size": 63488 00:28:42.422 } 00:28:42.422 ] 00:28:42.422 }' 00:28:42.422 18:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:42.422 18:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:42.422 18:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:42.422 18:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:42.422 18:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # break 00:28:42.423 18:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:42.423 18:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:42.423 18:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 
00:28:42.423 18:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:42.423 18:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:42.423 18:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.423 18:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:42.682 18:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:42.682 "name": "raid_bdev1", 00:28:42.682 "uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:42.682 "strip_size_kb": 0, 00:28:42.682 "state": "online", 00:28:42.682 "raid_level": "raid1", 00:28:42.682 "superblock": true, 00:28:42.682 "num_base_bdevs": 2, 00:28:42.682 "num_base_bdevs_discovered": 2, 00:28:42.682 "num_base_bdevs_operational": 2, 00:28:42.682 "base_bdevs_list": [ 00:28:42.682 { 00:28:42.682 "name": "spare", 00:28:42.682 "uuid": "fcba03d2-68bc-57e7-b99d-6c00c545b49d", 00:28:42.682 "is_configured": true, 00:28:42.682 "data_offset": 2048, 00:28:42.682 "data_size": 63488 00:28:42.682 }, 00:28:42.682 { 00:28:42.682 "name": "BaseBdev2", 00:28:42.682 "uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:42.682 "is_configured": true, 00:28:42.682 "data_offset": 2048, 00:28:42.682 "data_size": 63488 00:28:42.682 } 00:28:42.682 ] 00:28:42.682 }' 00:28:42.682 18:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:42.682 18:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:42.682 18:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:42.682 18:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:42.682 18:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:42.682 18:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:42.682 18:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:42.682 18:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:42.682 18:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:42.682 18:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:42.682 18:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:42.682 18:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:42.682 18:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:42.682 18:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:42.682 18:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.682 18:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:42.942 18:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:42.942 "name": "raid_bdev1", 00:28:42.942 
"uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:42.942 "strip_size_kb": 0, 00:28:42.942 "state": "online", 00:28:42.942 "raid_level": "raid1", 00:28:42.942 "superblock": true, 00:28:42.942 "num_base_bdevs": 2, 00:28:42.942 "num_base_bdevs_discovered": 2, 00:28:42.942 "num_base_bdevs_operational": 2, 00:28:42.942 "base_bdevs_list": [ 00:28:42.942 { 00:28:42.942 "name": "spare", 00:28:42.942 "uuid": "fcba03d2-68bc-57e7-b99d-6c00c545b49d", 00:28:42.942 "is_configured": true, 00:28:42.942 "data_offset": 2048, 00:28:42.942 "data_size": 63488 00:28:42.942 }, 00:28:42.942 { 00:28:42.942 "name": "BaseBdev2", 00:28:42.942 "uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:42.942 "is_configured": true, 00:28:42.942 "data_offset": 2048, 00:28:42.942 "data_size": 63488 00:28:42.942 } 00:28:42.942 ] 00:28:42.942 }' 00:28:42.942 18:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:42.942 18:55:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:43.510 18:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:43.510 [2024-07-25 18:55:44.084617] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:43.510 [2024-07-25 18:55:44.084659] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:43.769 00:28:43.769 Latency(us) 00:28:43.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.769 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:28:43.769 raid_bdev1 : 11.45 108.49 325.47 0.00 0.00 12950.36 296.47 114344.72 00:28:43.769 =================================================================================================================== 00:28:43.769 Total : 108.49 325.47 0.00 0.00 12950.36 296.47 114344.72 00:28:43.769 [2024-07-25 18:55:44.186795] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:43.769 [2024-07-25 18:55:44.186833] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:43.769 [2024-07-25 18:55:44.186916] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:43.769 [2024-07-25 18:55:44.186926] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:28:43.769 0 00:28:43.769 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:43.769 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # jq length 00:28:44.028 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:28:44.028 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:28:44.028 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:28:44.028 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:28:44.028 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:44.028 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:28:44.028 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:28:44.028 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:44.028 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:44.028 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:28:44.028 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:44.028 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:44.029 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:28:44.288 /dev/nbd0 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:44.288 1+0 records in 00:28:44.288 1+0 records out 00:28:44.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344945 s, 11.9 MB/s 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev2 ']' 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev2') 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:44.288 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:28:44.548 /dev/nbd1 00:28:44.548 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:44.548 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:44.548 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:28:44.548 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:28:44.548 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:44.548 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:44.548 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:28:44.548 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:28:44.548 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:44.548 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:44.548 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:44.548 1+0 records in 00:28:44.548 1+0 records out 00:28:44.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595947 s, 6.9 MB/s 00:28:44.548 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:44.548 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:28:44.548 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:44.548 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:44.548 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:28:44.548 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:44.548 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:44.548 18:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:44.548 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:28:44.548 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:44.548 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 
00:28:44.548 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:44.548 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:28:44.548 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:44.548 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:45.115 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:45.115 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:45.115 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:45.115 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:45.115 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:45.116 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:45.116 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:28:45.116 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:45.116 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:45.116 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:45.116 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:45.116 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:45.116 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:28:45.116 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:45.116 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:45.116 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:45.116 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:45.116 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:45.374 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:45.374 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:45.374 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:45.374 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:28:45.374 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:45.374 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:28:45.374 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:45.374 18:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:45.633 [2024-07-25 18:55:46.046343] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:45.633 [2024-07-25 18:55:46.046446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:45.633 [2024-07-25 18:55:46.046505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:28:45.633 [2024-07-25 18:55:46.046535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:45.633 [2024-07-25 18:55:46.049224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:45.633 [2024-07-25 18:55:46.049277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:45.633 [2024-07-25 18:55:46.049414] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:45.633 [2024-07-25 18:55:46.049472] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:45.633 [2024-07-25 18:55:46.049630] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:45.633 spare 00:28:45.633 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:45.633 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:45.633 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:45.633 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:45.633 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:45.633 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:45.633 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:45.633 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:45.633 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:45.633 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:45.633 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:45.633 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:45.633 [2024-07-25 18:55:46.149716] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012d80 00:28:45.633 [2024-07-25 18:55:46.149739] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:45.633 [2024-07-25 18:55:46.149914] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:28:45.633 [2024-07-25 18:55:46.150285] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012d80 00:28:45.633 [2024-07-25 18:55:46.150295] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012d80 00:28:45.633 [2024-07-25 18:55:46.150456] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:45.892 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:45.892 "name": "raid_bdev1", 00:28:45.892 "uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:45.892 "strip_size_kb": 0, 00:28:45.892 "state": "online", 
00:28:45.892 "raid_level": "raid1", 00:28:45.892 "superblock": true, 00:28:45.892 "num_base_bdevs": 2, 00:28:45.892 "num_base_bdevs_discovered": 2, 00:28:45.892 "num_base_bdevs_operational": 2, 00:28:45.892 "base_bdevs_list": [ 00:28:45.892 { 00:28:45.892 "name": "spare", 00:28:45.892 "uuid": "fcba03d2-68bc-57e7-b99d-6c00c545b49d", 00:28:45.892 "is_configured": true, 00:28:45.892 "data_offset": 2048, 00:28:45.892 "data_size": 63488 00:28:45.892 }, 00:28:45.892 { 00:28:45.892 "name": "BaseBdev2", 00:28:45.892 "uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:45.892 "is_configured": true, 00:28:45.892 "data_offset": 2048, 00:28:45.892 "data_size": 63488 00:28:45.892 } 00:28:45.892 ] 00:28:45.892 }' 00:28:45.892 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:45.892 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:46.460 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:46.460 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:46.460 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:46.460 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:46.460 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:46.460 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:46.460 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:46.460 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:46.460 "name": "raid_bdev1", 00:28:46.460 "uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:46.460 "strip_size_kb": 0, 00:28:46.460 "state": "online", 00:28:46.460 "raid_level": "raid1", 00:28:46.460 "superblock": true, 00:28:46.460 "num_base_bdevs": 2, 00:28:46.460 "num_base_bdevs_discovered": 2, 00:28:46.460 "num_base_bdevs_operational": 2, 00:28:46.460 "base_bdevs_list": [ 00:28:46.460 { 00:28:46.460 "name": "spare", 00:28:46.460 "uuid": "fcba03d2-68bc-57e7-b99d-6c00c545b49d", 00:28:46.460 "is_configured": true, 00:28:46.460 "data_offset": 2048, 00:28:46.460 "data_size": 63488 00:28:46.460 }, 00:28:46.460 { 00:28:46.460 "name": "BaseBdev2", 00:28:46.460 "uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:46.460 "is_configured": true, 00:28:46.460 "data_offset": 2048, 00:28:46.460 "data_size": 63488 00:28:46.460 } 00:28:46.460 ] 00:28:46.460 }' 00:28:46.460 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:46.460 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:46.461 18:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:46.461 18:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:46.461 18:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:46.461 18:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:46.720 18:55:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:28:46.720 18:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:46.980 [2024-07-25 18:55:47.466902] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:46.980 18:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:46.980 18:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:46.980 18:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:46.980 18:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:46.980 18:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:46.980 18:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:46.980 18:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:46.980 18:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:46.980 18:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:46.980 18:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:46.980 18:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:46.980 18:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:47.240 18:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:47.240 "name": "raid_bdev1", 00:28:47.240 "uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:47.240 "strip_size_kb": 0, 00:28:47.240 "state": "online", 00:28:47.240 "raid_level": "raid1", 00:28:47.240 "superblock": true, 00:28:47.240 "num_base_bdevs": 2, 00:28:47.240 "num_base_bdevs_discovered": 1, 00:28:47.240 "num_base_bdevs_operational": 1, 00:28:47.240 "base_bdevs_list": [ 00:28:47.240 { 00:28:47.240 "name": null, 00:28:47.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:47.240 "is_configured": false, 00:28:47.240 "data_offset": 2048, 00:28:47.240 "data_size": 63488 00:28:47.240 }, 00:28:47.240 { 00:28:47.240 "name": "BaseBdev2", 00:28:47.240 "uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:47.240 "is_configured": true, 00:28:47.240 "data_offset": 2048, 00:28:47.240 "data_size": 63488 00:28:47.240 } 00:28:47.240 ] 00:28:47.240 }' 00:28:47.240 18:55:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:47.240 18:55:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:47.809 18:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:48.069 [2024-07-25 18:55:48.451250] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:48.069 [2024-07-25 18:55:48.451499] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:48.069 [2024-07-25 18:55:48.451513] 
bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:28:48.069 [2024-07-25 18:55:48.451595] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:48.069 [2024-07-25 18:55:48.471330] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b340 00:28:48.069 [2024-07-25 18:55:48.473601] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:48.069 18:55:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # sleep 1 00:28:49.023 18:55:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:49.023 18:55:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:49.023 18:55:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:49.023 18:55:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:49.023 18:55:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:49.023 18:55:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:49.023 18:55:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:49.284 18:55:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:49.284 "name": "raid_bdev1", 00:28:49.284 "uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:49.284 "strip_size_kb": 0, 00:28:49.284 "state": "online", 00:28:49.284 "raid_level": "raid1", 00:28:49.284 "superblock": true, 00:28:49.284 "num_base_bdevs": 2, 00:28:49.284 "num_base_bdevs_discovered": 2, 00:28:49.284 "num_base_bdevs_operational": 2, 00:28:49.284 "process": { 00:28:49.284 "type": "rebuild", 00:28:49.284 "target": "spare", 00:28:49.284 "progress": { 00:28:49.284 "blocks": 24576, 00:28:49.284 "percent": 38 00:28:49.284 } 00:28:49.284 }, 00:28:49.284 "base_bdevs_list": [ 00:28:49.284 { 00:28:49.284 "name": "spare", 00:28:49.284 "uuid": "fcba03d2-68bc-57e7-b99d-6c00c545b49d", 00:28:49.284 "is_configured": true, 00:28:49.284 "data_offset": 2048, 00:28:49.284 "data_size": 63488 00:28:49.284 }, 00:28:49.284 { 00:28:49.284 "name": "BaseBdev2", 00:28:49.284 "uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:49.284 "is_configured": true, 00:28:49.284 "data_offset": 2048, 00:28:49.284 "data_size": 63488 00:28:49.284 } 00:28:49.284 ] 00:28:49.284 }' 00:28:49.284 18:55:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:49.284 18:55:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:49.284 18:55:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:49.284 18:55:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:49.284 18:55:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:49.546 [2024-07-25 18:55:50.047443] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:49.546 [2024-07-25 18:55:50.087283] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such 
device 00:28:49.546 [2024-07-25 18:55:50.087370] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:49.546 [2024-07-25 18:55:50.087387] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:49.546 [2024-07-25 18:55:50.087394] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:49.884 18:55:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:49.884 18:55:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:49.884 18:55:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:49.884 18:55:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:49.884 18:55:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:49.884 18:55:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:49.884 18:55:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:49.884 18:55:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:49.884 18:55:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:49.884 18:55:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:49.884 18:55:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:49.884 18:55:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:49.884 18:55:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:49.884 "name": "raid_bdev1", 00:28:49.884 "uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:49.884 "strip_size_kb": 0, 00:28:49.884 "state": "online", 00:28:49.884 "raid_level": "raid1", 00:28:49.884 "superblock": true, 00:28:49.884 "num_base_bdevs": 2, 00:28:49.884 "num_base_bdevs_discovered": 1, 00:28:49.884 "num_base_bdevs_operational": 1, 00:28:49.884 "base_bdevs_list": [ 00:28:49.884 { 00:28:49.884 "name": null, 00:28:49.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:49.884 "is_configured": false, 00:28:49.884 "data_offset": 2048, 00:28:49.884 "data_size": 63488 00:28:49.884 }, 00:28:49.884 { 00:28:49.884 "name": "BaseBdev2", 00:28:49.884 "uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:49.884 "is_configured": true, 00:28:49.884 "data_offset": 2048, 00:28:49.884 "data_size": 63488 00:28:49.884 } 00:28:49.884 ] 00:28:49.884 }' 00:28:49.884 18:55:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:49.884 18:55:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:50.452 18:55:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:50.712 [2024-07-25 18:55:51.173684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:50.712 [2024-07-25 18:55:51.173814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:50.712 [2024-07-25 18:55:51.173856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000a880 00:28:50.712 [2024-07-25 18:55:51.173886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:50.712 [2024-07-25 18:55:51.174493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:50.712 [2024-07-25 18:55:51.174542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:50.712 [2024-07-25 18:55:51.174669] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:50.712 [2024-07-25 18:55:51.174684] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:50.712 [2024-07-25 18:55:51.174693] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:28:50.712 [2024-07-25 18:55:51.174731] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:50.712 [2024-07-25 18:55:51.194207] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:28:50.712 spare 00:28:50.712 [2024-07-25 18:55:51.196454] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:50.712 18:55:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # sleep 1 00:28:51.650 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:51.650 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:51.650 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:51.650 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:51.650 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:51.650 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:51.650 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:51.909 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:51.909 "name": "raid_bdev1", 00:28:51.909 "uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:51.909 "strip_size_kb": 0, 00:28:51.909 "state": "online", 00:28:51.909 "raid_level": "raid1", 00:28:51.909 "superblock": true, 00:28:51.909 "num_base_bdevs": 2, 00:28:51.909 "num_base_bdevs_discovered": 2, 00:28:51.909 "num_base_bdevs_operational": 2, 00:28:51.909 "process": { 00:28:51.909 "type": "rebuild", 00:28:51.909 "target": "spare", 00:28:51.909 "progress": { 00:28:51.909 "blocks": 24576, 00:28:51.909 "percent": 38 00:28:51.909 } 00:28:51.909 }, 00:28:51.909 "base_bdevs_list": [ 00:28:51.909 { 00:28:51.909 "name": "spare", 00:28:51.909 "uuid": "fcba03d2-68bc-57e7-b99d-6c00c545b49d", 00:28:51.909 "is_configured": true, 00:28:51.909 "data_offset": 2048, 00:28:51.909 "data_size": 63488 00:28:51.909 }, 00:28:51.909 { 00:28:51.909 "name": "BaseBdev2", 00:28:51.909 "uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:51.909 "is_configured": true, 00:28:51.909 "data_offset": 2048, 00:28:51.909 "data_size": 63488 00:28:51.909 } 00:28:51.909 ] 00:28:51.909 }' 00:28:51.909 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:52.168 18:55:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:52.168 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:52.168 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:52.168 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:52.168 [2024-07-25 18:55:52.725712] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:52.428 [2024-07-25 18:55:52.808526] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:52.428 [2024-07-25 18:55:52.808635] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:52.428 [2024-07-25 18:55:52.808652] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:52.428 [2024-07-25 18:55:52.808661] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:52.428 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:52.428 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:52.428 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:52.428 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:52.428 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:52.428 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:52.428 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:52.428 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:52.428 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:52.428 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:52.428 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:52.428 18:55:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:52.687 18:55:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:52.688 "name": "raid_bdev1", 00:28:52.688 "uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:52.688 "strip_size_kb": 0, 00:28:52.688 "state": "online", 00:28:52.688 "raid_level": "raid1", 00:28:52.688 "superblock": true, 00:28:52.688 "num_base_bdevs": 2, 00:28:52.688 "num_base_bdevs_discovered": 1, 00:28:52.688 "num_base_bdevs_operational": 1, 00:28:52.688 "base_bdevs_list": [ 00:28:52.688 { 00:28:52.688 "name": null, 00:28:52.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:52.688 "is_configured": false, 00:28:52.688 "data_offset": 2048, 00:28:52.688 "data_size": 63488 00:28:52.688 }, 00:28:52.688 { 00:28:52.688 "name": "BaseBdev2", 00:28:52.688 "uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:52.688 "is_configured": true, 00:28:52.688 "data_offset": 2048, 00:28:52.688 "data_size": 63488 00:28:52.688 } 00:28:52.688 ] 00:28:52.688 
}' 00:28:52.688 18:55:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:52.688 18:55:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:53.256 18:55:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:53.256 18:55:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:53.256 18:55:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:53.256 18:55:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:53.256 18:55:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:53.256 18:55:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:53.256 18:55:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.514 18:55:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:53.514 "name": "raid_bdev1", 00:28:53.514 "uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:53.514 "strip_size_kb": 0, 00:28:53.514 "state": "online", 00:28:53.514 "raid_level": "raid1", 00:28:53.514 "superblock": true, 00:28:53.514 "num_base_bdevs": 2, 00:28:53.514 "num_base_bdevs_discovered": 1, 00:28:53.514 "num_base_bdevs_operational": 1, 00:28:53.514 "base_bdevs_list": [ 00:28:53.514 { 00:28:53.514 "name": null, 00:28:53.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:53.514 "is_configured": false, 00:28:53.514 "data_offset": 2048, 00:28:53.514 "data_size": 63488 00:28:53.514 }, 00:28:53.514 { 00:28:53.514 "name": "BaseBdev2", 00:28:53.514 "uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:53.514 "is_configured": true, 00:28:53.514 "data_offset": 2048, 00:28:53.514 "data_size": 63488 00:28:53.514 } 00:28:53.514 ] 00:28:53.514 }' 00:28:53.514 18:55:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:53.514 18:55:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:53.515 18:55:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:53.515 18:55:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:53.515 18:55:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:28:53.773 18:55:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:54.033 [2024-07-25 18:55:54.467675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:54.033 [2024-07-25 18:55:54.467807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:54.033 [2024-07-25 18:55:54.467859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:28:54.033 [2024-07-25 18:55:54.467883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:54.033 [2024-07-25 18:55:54.468420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:54.033 [2024-07-25 
18:55:54.468459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:54.033 [2024-07-25 18:55:54.468628] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:54.033 [2024-07-25 18:55:54.468641] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:54.033 [2024-07-25 18:55:54.468649] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:54.033 BaseBdev1 00:28:54.033 18:55:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@789 -- # sleep 1 00:28:54.969 18:55:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:54.969 18:55:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:54.969 18:55:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:54.969 18:55:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:54.969 18:55:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:54.969 18:55:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:54.969 18:55:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:54.969 18:55:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:54.969 18:55:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:54.969 18:55:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:54.969 18:55:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:54.969 18:55:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:55.227 18:55:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:55.227 "name": "raid_bdev1", 00:28:55.227 "uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:55.227 "strip_size_kb": 0, 00:28:55.227 "state": "online", 00:28:55.227 "raid_level": "raid1", 00:28:55.227 "superblock": true, 00:28:55.227 "num_base_bdevs": 2, 00:28:55.227 "num_base_bdevs_discovered": 1, 00:28:55.227 "num_base_bdevs_operational": 1, 00:28:55.227 "base_bdevs_list": [ 00:28:55.227 { 00:28:55.227 "name": null, 00:28:55.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:55.227 "is_configured": false, 00:28:55.227 "data_offset": 2048, 00:28:55.227 "data_size": 63488 00:28:55.227 }, 00:28:55.227 { 00:28:55.227 "name": "BaseBdev2", 00:28:55.227 "uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:55.227 "is_configured": true, 00:28:55.227 "data_offset": 2048, 00:28:55.227 "data_size": 63488 00:28:55.227 } 00:28:55.227 ] 00:28:55.227 }' 00:28:55.227 18:55:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:55.227 18:55:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:55.794 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:55.794 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
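Note: the verify_raid_bdev_state / verify_raid_bdev_process checks traced above reduce to a single bdev_raid_get_bdevs call filtered with jq. A minimal sketch of that pattern, assuming an SPDK application is already serving RPC on /var/tmp/spdk-raid.sock and the array under test is named raid_bdev1 (the variable names below are illustrative, not the helper's own):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Fetch all raid bdevs and keep only the one under test.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # State plus rebuild-process fields; '// "none"' supplies a default when no process is running.
    state=$(jq -r '.state' <<< "$info")
    ptype=$(jq -r '.process.type // "none"' <<< "$info")
    ptarget=$(jq -r '.process.target // "none"' <<< "$info")
    [[ $state == online && $ptype == none && $ptarget == none ]] ||
        echo "unexpected raid_bdev1 state: $state/$ptype/$ptarget" >&2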
00:28:55.794 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:55.794 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:55.794 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:55.794 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:55.794 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.053 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:56.053 "name": "raid_bdev1", 00:28:56.053 "uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:56.053 "strip_size_kb": 0, 00:28:56.053 "state": "online", 00:28:56.053 "raid_level": "raid1", 00:28:56.053 "superblock": true, 00:28:56.053 "num_base_bdevs": 2, 00:28:56.053 "num_base_bdevs_discovered": 1, 00:28:56.053 "num_base_bdevs_operational": 1, 00:28:56.053 "base_bdevs_list": [ 00:28:56.053 { 00:28:56.053 "name": null, 00:28:56.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:56.053 "is_configured": false, 00:28:56.053 "data_offset": 2048, 00:28:56.053 "data_size": 63488 00:28:56.053 }, 00:28:56.053 { 00:28:56.053 "name": "BaseBdev2", 00:28:56.053 "uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:56.053 "is_configured": true, 00:28:56.053 "data_offset": 2048, 00:28:56.053 "data_size": 63488 00:28:56.053 } 00:28:56.053 ] 00:28:56.053 }' 00:28:56.053 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:56.313 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:56.313 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:56.313 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:56.313 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:56.313 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:28:56.313 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:56.313 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:56.313 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:56.313 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:56.313 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:56.313 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:56.313 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:56.313 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 
00:28:56.313 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:56.313 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:56.573 [2024-07-25 18:55:56.942601] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:56.573 [2024-07-25 18:55:56.942820] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:56.573 [2024-07-25 18:55:56.942833] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:56.573 request: 00:28:56.573 { 00:28:56.573 "base_bdev": "BaseBdev1", 00:28:56.573 "raid_bdev": "raid_bdev1", 00:28:56.573 "method": "bdev_raid_add_base_bdev", 00:28:56.573 "req_id": 1 00:28:56.573 } 00:28:56.573 Got JSON-RPC error response 00:28:56.573 response: 00:28:56.573 { 00:28:56.573 "code": -22, 00:28:56.573 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:56.573 } 00:28:56.573 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:28:56.573 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:56.573 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:56.573 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:56.573 18:55:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@793 -- # sleep 1 00:28:57.512 18:55:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:57.512 18:55:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:57.512 18:55:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:57.512 18:55:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:57.512 18:55:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:57.512 18:55:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:57.512 18:55:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:57.512 18:55:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:57.512 18:55:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:57.512 18:55:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:57.512 18:55:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:57.512 18:55:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:57.772 18:55:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:57.772 "name": "raid_bdev1", 00:28:57.772 "uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:57.772 "strip_size_kb": 0, 00:28:57.772 "state": "online", 00:28:57.772 "raid_level": "raid1", 00:28:57.772 "superblock": true, 00:28:57.772 "num_base_bdevs": 2, 
00:28:57.772 "num_base_bdevs_discovered": 1, 00:28:57.772 "num_base_bdevs_operational": 1, 00:28:57.772 "base_bdevs_list": [ 00:28:57.772 { 00:28:57.772 "name": null, 00:28:57.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:57.772 "is_configured": false, 00:28:57.772 "data_offset": 2048, 00:28:57.772 "data_size": 63488 00:28:57.772 }, 00:28:57.772 { 00:28:57.772 "name": "BaseBdev2", 00:28:57.772 "uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:57.772 "is_configured": true, 00:28:57.772 "data_offset": 2048, 00:28:57.772 "data_size": 63488 00:28:57.772 } 00:28:57.772 ] 00:28:57.772 }' 00:28:57.772 18:55:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:57.772 18:55:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:58.341 18:55:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:58.341 18:55:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:58.341 18:55:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:58.341 18:55:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:58.341 18:55:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:58.341 18:55:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:58.341 18:55:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:58.601 18:55:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:58.601 "name": "raid_bdev1", 00:28:58.601 "uuid": "fa2c08a2-1c8d-439f-a8d0-19ef91cead80", 00:28:58.601 "strip_size_kb": 0, 00:28:58.601 "state": "online", 00:28:58.601 "raid_level": "raid1", 00:28:58.601 "superblock": true, 00:28:58.601 "num_base_bdevs": 2, 00:28:58.601 "num_base_bdevs_discovered": 1, 00:28:58.601 "num_base_bdevs_operational": 1, 00:28:58.602 "base_bdevs_list": [ 00:28:58.602 { 00:28:58.602 "name": null, 00:28:58.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:58.602 "is_configured": false, 00:28:58.602 "data_offset": 2048, 00:28:58.602 "data_size": 63488 00:28:58.602 }, 00:28:58.602 { 00:28:58.602 "name": "BaseBdev2", 00:28:58.602 "uuid": "b6c88333-9bad-53e0-a8f0-b3f710c1cff2", 00:28:58.602 "is_configured": true, 00:28:58.602 "data_offset": 2048, 00:28:58.602 "data_size": 63488 00:28:58.602 } 00:28:58.602 ] 00:28:58.602 }' 00:28:58.602 18:55:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:58.602 18:55:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:58.602 18:55:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:58.602 18:55:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:58.602 18:55:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@798 -- # killprocess 145333 00:28:58.602 18:55:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 145333 ']' 00:28:58.602 18:55:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 145333 00:28:58.602 18:55:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 
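Note: the teardown traced around here follows the usual killprocess pattern: confirm the pid is still alive and still belongs to the expected process before killing and reaping it. A minimal sketch under those assumptions (the function name is illustrative; the harness uses its own killprocess helper):

    kill_bdevperf() {
        local pid=$1
        kill -0 "$pid" || return 1                      # still running?
        if [[ $(uname) == Linux ]]; then
            # Guard against a recycled pid: the command name must not be an unrelated/sudo process.
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [[ $name != sudo ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                     # reap bdevperf and propagate its exit status
    }
    kill_bdevperf 145333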
00:28:58.602 18:55:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:58.602 18:55:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 145333 00:28:58.602 killing process with pid 145333 00:28:58.602 Received shutdown signal, test time was about 26.431617 seconds 00:28:58.602 00:28:58.602 Latency(us) 00:28:58.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.602 =================================================================================================================== 00:28:58.602 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:58.602 18:55:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:58.602 18:55:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:58.602 18:55:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 145333' 00:28:58.602 18:55:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 145333 00:28:58.602 18:55:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 145333 00:28:58.602 [2024-07-25 18:55:59.148286] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:58.602 [2024-07-25 18:55:59.148467] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:58.602 [2024-07-25 18:55:59.148537] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:58.602 [2024-07-25 18:55:59.148551] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state offline 00:28:58.861 [2024-07-25 18:55:59.407066] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:00.765 ************************************ 00:29:00.765 END TEST raid_rebuild_test_sb_io 00:29:00.765 ************************************ 00:29:00.765 18:56:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@800 -- # return 0 00:29:00.765 00:29:00.765 real 0m32.162s 00:29:00.765 user 0m48.611s 00:29:00.765 sys 0m4.454s 00:29:00.765 18:56:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:00.765 18:56:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:00.765 18:56:01 bdev_raid -- bdev/bdev_raid.sh@956 -- # for n in 2 4 00:29:00.765 18:56:01 bdev_raid -- bdev/bdev_raid.sh@957 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:29:00.765 18:56:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:29:00.765 18:56:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:00.765 18:56:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:00.765 ************************************ 00:29:00.765 START TEST raid_rebuild_test 00:29:00.765 ************************************ 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@587 -- # local 
background_io=false 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # echo BaseBdev1 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # echo BaseBdev2 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # echo BaseBdev3 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # echo BaseBdev4 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=146210 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 146210 /var/tmp/spdk-raid.sock 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 146210 ']' 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
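Note: the rebuild tests drive everything through a bdevperf instance that doubles as the RPC server. A minimal sketch of launching that target and waiting for its socket before issuing any rpc.py call (the polling loop is illustrative; the harness uses its own waitforlisten helper):

    sock=/var/tmp/spdk-raid.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # -z makes bdevperf wait for an explicit RPC before running I/O;
    # -L bdev_raid enables the *DEBUG* bdev_raid log flag seen throughout this output.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # Poll until the UNIX socket answers a harmless RPC, then proceed with bdev setup.
    for _ in $(seq 1 100); do
        "$rpc" -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done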
00:29:00.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:00.765 18:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:00.765 [2024-07-25 18:56:01.157554] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:00.765 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:00.765 Zero copy mechanism will not be used. 00:29:00.765 [2024-07-25 18:56:01.157723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146210 ] 00:29:00.765 [2024-07-25 18:56:01.320366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.024 [2024-07-25 18:56:01.585125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.591 [2024-07-25 18:56:01.862546] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:01.591 18:56:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:01.591 18:56:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:29:01.591 18:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:01.591 18:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:02.158 BaseBdev1_malloc 00:29:02.158 18:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:02.158 [2024-07-25 18:56:02.615864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:02.158 [2024-07-25 18:56:02.616004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:02.158 [2024-07-25 18:56:02.616054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:29:02.158 [2024-07-25 18:56:02.616078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:02.158 [2024-07-25 18:56:02.618828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:02.158 [2024-07-25 18:56:02.618881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:02.158 BaseBdev1 00:29:02.158 18:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:02.158 18:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:02.416 BaseBdev2_malloc 00:29:02.416 18:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:02.675 [2024-07-25 18:56:03.212286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:02.675 [2024-07-25 18:56:03.212439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:02.675 [2024-07-25 18:56:03.212483] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:29:02.675 [2024-07-25 18:56:03.212507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:02.675 [2024-07-25 18:56:03.215218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:02.675 [2024-07-25 18:56:03.215272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:02.675 BaseBdev2 00:29:02.675 18:56:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:02.675 18:56:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:02.936 BaseBdev3_malloc 00:29:02.936 18:56:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:03.196 [2024-07-25 18:56:03.655230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:03.196 [2024-07-25 18:56:03.655369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:03.196 [2024-07-25 18:56:03.655412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:29:03.196 [2024-07-25 18:56:03.655441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:03.196 [2024-07-25 18:56:03.658119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:03.196 [2024-07-25 18:56:03.658177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:03.196 BaseBdev3 00:29:03.196 18:56:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:03.197 18:56:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:03.455 BaseBdev4_malloc 00:29:03.455 18:56:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:03.713 [2024-07-25 18:56:04.149830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:03.713 [2024-07-25 18:56:04.149962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:03.713 [2024-07-25 18:56:04.150007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:29:03.713 [2024-07-25 18:56:04.150035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:03.713 [2024-07-25 18:56:04.152699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:03.713 [2024-07-25 18:56:04.152755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:03.713 BaseBdev4 00:29:03.713 18:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:03.972 spare_malloc 00:29:03.972 18:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:04.229 spare_delay 
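Note: each base device in this test is a 32 MiB malloc bdev (512-byte blocks) wrapped in a passthru bdev, and the spare additionally gets a delay bdev in the stack so its writes are slowed and the rebuild can be observed in flight. A minimal sketch of that stacking with the same RPCs, assuming the raid socket used above:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Regular base bdevs: malloc backing device behind a passthru.
    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
        $rpc bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done
    # Spare: malloc -> delay -> passthru; the non-zero write latencies throttle rebuild writes.
    $rpc bdev_malloc_create 32 512 -b spare_malloc
    $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc bdev_passthru_create -b spare_delay -p spare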
00:29:04.229 18:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:04.488 [2024-07-25 18:56:04.830566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:04.488 [2024-07-25 18:56:04.830698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:04.488 [2024-07-25 18:56:04.830737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:29:04.488 [2024-07-25 18:56:04.830773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:04.488 [2024-07-25 18:56:04.833542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:04.488 [2024-07-25 18:56:04.833600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:04.488 spare 00:29:04.488 18:56:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:04.488 [2024-07-25 18:56:05.018619] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:04.488 [2024-07-25 18:56:05.020886] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:04.488 [2024-07-25 18:56:05.020957] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:04.488 [2024-07-25 18:56:05.021007] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:04.488 [2024-07-25 18:56:05.021104] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:29:04.488 [2024-07-25 18:56:05.021113] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:29:04.488 [2024-07-25 18:56:05.021262] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:04.488 [2024-07-25 18:56:05.021633] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:29:04.488 [2024-07-25 18:56:05.021651] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:29:04.488 [2024-07-25 18:56:05.021862] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:04.488 18:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:04.488 18:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:04.488 18:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:04.488 18:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:04.488 18:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:04.488 18:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:04.488 18:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:04.488 18:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:04.488 18:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:04.488 18:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:04.488 18:56:05 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:04.488 18:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:04.746 18:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:04.746 "name": "raid_bdev1", 00:29:04.746 "uuid": "5508ecaf-73b1-4a1c-afed-b4f93c22e871", 00:29:04.746 "strip_size_kb": 0, 00:29:04.746 "state": "online", 00:29:04.746 "raid_level": "raid1", 00:29:04.746 "superblock": false, 00:29:04.746 "num_base_bdevs": 4, 00:29:04.746 "num_base_bdevs_discovered": 4, 00:29:04.746 "num_base_bdevs_operational": 4, 00:29:04.746 "base_bdevs_list": [ 00:29:04.746 { 00:29:04.746 "name": "BaseBdev1", 00:29:04.746 "uuid": "71a1c35f-081c-5ac1-a250-fd8e65ed22e5", 00:29:04.746 "is_configured": true, 00:29:04.746 "data_offset": 0, 00:29:04.746 "data_size": 65536 00:29:04.746 }, 00:29:04.746 { 00:29:04.746 "name": "BaseBdev2", 00:29:04.746 "uuid": "b707424a-4aa3-5ed4-a928-1c44d6031998", 00:29:04.746 "is_configured": true, 00:29:04.746 "data_offset": 0, 00:29:04.747 "data_size": 65536 00:29:04.747 }, 00:29:04.747 { 00:29:04.747 "name": "BaseBdev3", 00:29:04.747 "uuid": "e7b8f955-0316-50ac-bd2c-511cdb0d906d", 00:29:04.747 "is_configured": true, 00:29:04.747 "data_offset": 0, 00:29:04.747 "data_size": 65536 00:29:04.747 }, 00:29:04.747 { 00:29:04.747 "name": "BaseBdev4", 00:29:04.747 "uuid": "402156d8-f537-57f1-b5bf-dcec472d73bb", 00:29:04.747 "is_configured": true, 00:29:04.747 "data_offset": 0, 00:29:04.747 "data_size": 65536 00:29:04.747 } 00:29:04.747 ] 00:29:04.747 }' 00:29:04.747 18:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:04.747 18:56:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:05.314 18:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:29:05.314 18:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:05.572 [2024-07-25 18:56:05.971034] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:05.572 18:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:29:05.572 18:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:05.572 18:56:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:05.831 18:56:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:29:05.831 18:56:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:29:05.831 18:56:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:29:05.831 18:56:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:29:05.831 18:56:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:29:05.831 18:56:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:05.831 18:56:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:05.831 18:56:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:29:05.831 18:56:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:05.831 18:56:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:05.831 18:56:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:05.831 18:56:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:05.831 18:56:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:05.831 18:56:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:05.831 [2024-07-25 18:56:06.394936] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:29:06.089 /dev/nbd0 00:29:06.089 18:56:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:06.089 18:56:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:06.089 18:56:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:06.089 18:56:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:29:06.089 18:56:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:06.089 18:56:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:06.090 18:56:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:06.090 18:56:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:29:06.090 18:56:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:06.090 18:56:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:06.090 18:56:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:06.090 1+0 records in 00:29:06.090 1+0 records out 00:29:06.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246097 s, 16.6 MB/s 00:29:06.090 18:56:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:06.090 18:56:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:29:06.090 18:56:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:06.090 18:56:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:06.090 18:56:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:29:06.090 18:56:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:06.090 18:56:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:06.090 18:56:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:29:06.090 18:56:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:29:06.090 18:56:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:29:12.670 65536+0 records in 00:29:12.670 65536+0 records out 00:29:12.670 33554432 bytes (34 MB, 32 MiB) copied, 5.46398 s, 6.1 MB/s 00:29:12.670 18:56:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:12.670 18:56:11 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:12.670 18:56:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:12.670 18:56:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:12.670 18:56:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:12.670 18:56:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:12.670 18:56:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:12.670 [2024-07-25 18:56:12.222540] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:12.670 [2024-07-25 18:56:12.438318] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:12.670 "name": "raid_bdev1", 00:29:12.670 "uuid": "5508ecaf-73b1-4a1c-afed-b4f93c22e871", 00:29:12.670 "strip_size_kb": 0, 00:29:12.670 "state": "online", 00:29:12.670 "raid_level": "raid1", 00:29:12.670 "superblock": false, 
00:29:12.670 "num_base_bdevs": 4, 00:29:12.670 "num_base_bdevs_discovered": 3, 00:29:12.670 "num_base_bdevs_operational": 3, 00:29:12.670 "base_bdevs_list": [ 00:29:12.670 { 00:29:12.670 "name": null, 00:29:12.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:12.670 "is_configured": false, 00:29:12.670 "data_offset": 0, 00:29:12.670 "data_size": 65536 00:29:12.670 }, 00:29:12.670 { 00:29:12.670 "name": "BaseBdev2", 00:29:12.670 "uuid": "b707424a-4aa3-5ed4-a928-1c44d6031998", 00:29:12.670 "is_configured": true, 00:29:12.670 "data_offset": 0, 00:29:12.670 "data_size": 65536 00:29:12.670 }, 00:29:12.670 { 00:29:12.670 "name": "BaseBdev3", 00:29:12.670 "uuid": "e7b8f955-0316-50ac-bd2c-511cdb0d906d", 00:29:12.670 "is_configured": true, 00:29:12.670 "data_offset": 0, 00:29:12.670 "data_size": 65536 00:29:12.670 }, 00:29:12.670 { 00:29:12.670 "name": "BaseBdev4", 00:29:12.670 "uuid": "402156d8-f537-57f1-b5bf-dcec472d73bb", 00:29:12.670 "is_configured": true, 00:29:12.670 "data_offset": 0, 00:29:12.670 "data_size": 65536 00:29:12.670 } 00:29:12.670 ] 00:29:12.670 }' 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:12.670 18:56:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.961 18:56:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:12.961 [2024-07-25 18:56:13.499706] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:12.961 [2024-07-25 18:56:13.517345] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:29:12.961 [2024-07-25 18:56:13.519680] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:12.961 18:56:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:14.334 18:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:14.334 18:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:14.334 18:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:14.334 18:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:14.334 18:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:14.334 18:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:14.334 18:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:14.334 18:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:14.334 "name": "raid_bdev1", 00:29:14.334 "uuid": "5508ecaf-73b1-4a1c-afed-b4f93c22e871", 00:29:14.334 "strip_size_kb": 0, 00:29:14.334 "state": "online", 00:29:14.334 "raid_level": "raid1", 00:29:14.334 "superblock": false, 00:29:14.334 "num_base_bdevs": 4, 00:29:14.334 "num_base_bdevs_discovered": 4, 00:29:14.334 "num_base_bdevs_operational": 4, 00:29:14.334 "process": { 00:29:14.334 "type": "rebuild", 00:29:14.334 "target": "spare", 00:29:14.334 "progress": { 00:29:14.334 "blocks": 22528, 00:29:14.334 "percent": 34 00:29:14.334 } 00:29:14.334 }, 00:29:14.334 "base_bdevs_list": [ 00:29:14.334 { 00:29:14.334 "name": "spare", 00:29:14.334 "uuid": 
"55015c86-3056-53e9-853e-2424f50c3192", 00:29:14.334 "is_configured": true, 00:29:14.334 "data_offset": 0, 00:29:14.334 "data_size": 65536 00:29:14.334 }, 00:29:14.334 { 00:29:14.334 "name": "BaseBdev2", 00:29:14.334 "uuid": "b707424a-4aa3-5ed4-a928-1c44d6031998", 00:29:14.334 "is_configured": true, 00:29:14.334 "data_offset": 0, 00:29:14.334 "data_size": 65536 00:29:14.334 }, 00:29:14.334 { 00:29:14.334 "name": "BaseBdev3", 00:29:14.334 "uuid": "e7b8f955-0316-50ac-bd2c-511cdb0d906d", 00:29:14.334 "is_configured": true, 00:29:14.334 "data_offset": 0, 00:29:14.334 "data_size": 65536 00:29:14.334 }, 00:29:14.334 { 00:29:14.334 "name": "BaseBdev4", 00:29:14.334 "uuid": "402156d8-f537-57f1-b5bf-dcec472d73bb", 00:29:14.334 "is_configured": true, 00:29:14.334 "data_offset": 0, 00:29:14.334 "data_size": 65536 00:29:14.334 } 00:29:14.334 ] 00:29:14.334 }' 00:29:14.334 18:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:14.334 18:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:14.334 18:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:14.334 18:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:14.334 18:56:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:14.592 [2024-07-25 18:56:15.045300] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:14.592 [2024-07-25 18:56:15.131372] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:14.592 [2024-07-25 18:56:15.131491] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:14.592 [2024-07-25 18:56:15.131512] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:14.592 [2024-07-25 18:56:15.131520] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:14.850 18:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:14.850 18:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:14.850 18:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:14.850 18:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:14.850 18:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:14.850 18:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:14.850 18:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:14.850 18:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:14.850 18:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:14.850 18:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:14.850 18:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:14.850 18:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:15.108 18:56:15 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:15.108 "name": "raid_bdev1", 00:29:15.108 "uuid": "5508ecaf-73b1-4a1c-afed-b4f93c22e871", 00:29:15.108 "strip_size_kb": 0, 00:29:15.108 "state": "online", 00:29:15.108 "raid_level": "raid1", 00:29:15.108 "superblock": false, 00:29:15.108 "num_base_bdevs": 4, 00:29:15.108 "num_base_bdevs_discovered": 3, 00:29:15.108 "num_base_bdevs_operational": 3, 00:29:15.108 "base_bdevs_list": [ 00:29:15.108 { 00:29:15.108 "name": null, 00:29:15.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:15.108 "is_configured": false, 00:29:15.108 "data_offset": 0, 00:29:15.108 "data_size": 65536 00:29:15.108 }, 00:29:15.108 { 00:29:15.108 "name": "BaseBdev2", 00:29:15.108 "uuid": "b707424a-4aa3-5ed4-a928-1c44d6031998", 00:29:15.108 "is_configured": true, 00:29:15.108 "data_offset": 0, 00:29:15.108 "data_size": 65536 00:29:15.108 }, 00:29:15.108 { 00:29:15.108 "name": "BaseBdev3", 00:29:15.108 "uuid": "e7b8f955-0316-50ac-bd2c-511cdb0d906d", 00:29:15.108 "is_configured": true, 00:29:15.108 "data_offset": 0, 00:29:15.108 "data_size": 65536 00:29:15.108 }, 00:29:15.108 { 00:29:15.108 "name": "BaseBdev4", 00:29:15.108 "uuid": "402156d8-f537-57f1-b5bf-dcec472d73bb", 00:29:15.108 "is_configured": true, 00:29:15.108 "data_offset": 0, 00:29:15.108 "data_size": 65536 00:29:15.108 } 00:29:15.108 ] 00:29:15.108 }' 00:29:15.108 18:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:15.108 18:56:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.367 18:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:15.367 18:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:15.367 18:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:15.367 18:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:15.367 18:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:15.367 18:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:15.367 18:56:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:15.625 18:56:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:15.625 "name": "raid_bdev1", 00:29:15.625 "uuid": "5508ecaf-73b1-4a1c-afed-b4f93c22e871", 00:29:15.625 "strip_size_kb": 0, 00:29:15.625 "state": "online", 00:29:15.625 "raid_level": "raid1", 00:29:15.625 "superblock": false, 00:29:15.625 "num_base_bdevs": 4, 00:29:15.625 "num_base_bdevs_discovered": 3, 00:29:15.625 "num_base_bdevs_operational": 3, 00:29:15.625 "base_bdevs_list": [ 00:29:15.625 { 00:29:15.625 "name": null, 00:29:15.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:15.625 "is_configured": false, 00:29:15.625 "data_offset": 0, 00:29:15.625 "data_size": 65536 00:29:15.625 }, 00:29:15.625 { 00:29:15.625 "name": "BaseBdev2", 00:29:15.625 "uuid": "b707424a-4aa3-5ed4-a928-1c44d6031998", 00:29:15.625 "is_configured": true, 00:29:15.625 "data_offset": 0, 00:29:15.625 "data_size": 65536 00:29:15.625 }, 00:29:15.625 { 00:29:15.625 "name": "BaseBdev3", 00:29:15.625 "uuid": "e7b8f955-0316-50ac-bd2c-511cdb0d906d", 00:29:15.625 "is_configured": true, 00:29:15.625 "data_offset": 0, 00:29:15.625 "data_size": 65536 00:29:15.625 }, 
00:29:15.625 { 00:29:15.625 "name": "BaseBdev4", 00:29:15.625 "uuid": "402156d8-f537-57f1-b5bf-dcec472d73bb", 00:29:15.625 "is_configured": true, 00:29:15.625 "data_offset": 0, 00:29:15.625 "data_size": 65536 00:29:15.625 } 00:29:15.625 ] 00:29:15.625 }' 00:29:15.625 18:56:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:15.883 18:56:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:15.883 18:56:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:15.883 18:56:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:15.883 18:56:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:15.883 [2024-07-25 18:56:16.425633] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:15.883 [2024-07-25 18:56:16.442317] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:29:15.883 [2024-07-25 18:56:16.444648] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:15.883 18:56:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:29:17.258 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:17.258 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:17.258 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:17.258 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:17.258 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:17.258 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:17.258 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:17.258 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:17.258 "name": "raid_bdev1", 00:29:17.258 "uuid": "5508ecaf-73b1-4a1c-afed-b4f93c22e871", 00:29:17.258 "strip_size_kb": 0, 00:29:17.258 "state": "online", 00:29:17.258 "raid_level": "raid1", 00:29:17.258 "superblock": false, 00:29:17.258 "num_base_bdevs": 4, 00:29:17.258 "num_base_bdevs_discovered": 4, 00:29:17.258 "num_base_bdevs_operational": 4, 00:29:17.258 "process": { 00:29:17.258 "type": "rebuild", 00:29:17.258 "target": "spare", 00:29:17.258 "progress": { 00:29:17.258 "blocks": 22528, 00:29:17.258 "percent": 34 00:29:17.258 } 00:29:17.258 }, 00:29:17.258 "base_bdevs_list": [ 00:29:17.258 { 00:29:17.258 "name": "spare", 00:29:17.258 "uuid": "55015c86-3056-53e9-853e-2424f50c3192", 00:29:17.258 "is_configured": true, 00:29:17.258 "data_offset": 0, 00:29:17.258 "data_size": 65536 00:29:17.258 }, 00:29:17.258 { 00:29:17.258 "name": "BaseBdev2", 00:29:17.258 "uuid": "b707424a-4aa3-5ed4-a928-1c44d6031998", 00:29:17.258 "is_configured": true, 00:29:17.258 "data_offset": 0, 00:29:17.259 "data_size": 65536 00:29:17.259 }, 00:29:17.259 { 00:29:17.259 "name": "BaseBdev3", 00:29:17.259 "uuid": "e7b8f955-0316-50ac-bd2c-511cdb0d906d", 00:29:17.259 "is_configured": true, 00:29:17.259 "data_offset": 0, 00:29:17.259 "data_size": 65536 
00:29:17.259 }, 00:29:17.259 { 00:29:17.259 "name": "BaseBdev4", 00:29:17.259 "uuid": "402156d8-f537-57f1-b5bf-dcec472d73bb", 00:29:17.259 "is_configured": true, 00:29:17.259 "data_offset": 0, 00:29:17.259 "data_size": 65536 00:29:17.259 } 00:29:17.259 ] 00:29:17.259 }' 00:29:17.259 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:17.259 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:17.259 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:17.259 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:17.259 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:29:17.259 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:29:17.259 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:29:17.259 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:29:17.259 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:29:17.517 [2024-07-25 18:56:17.891075] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:17.517 [2024-07-25 18:56:17.956527] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:29:17.517 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:29:17.517 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:29:17.517 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:17.518 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:17.518 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:17.518 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:17.518 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:17.518 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:17.518 18:56:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:17.776 18:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:17.776 "name": "raid_bdev1", 00:29:17.776 "uuid": "5508ecaf-73b1-4a1c-afed-b4f93c22e871", 00:29:17.776 "strip_size_kb": 0, 00:29:17.776 "state": "online", 00:29:17.776 "raid_level": "raid1", 00:29:17.776 "superblock": false, 00:29:17.776 "num_base_bdevs": 4, 00:29:17.776 "num_base_bdevs_discovered": 3, 00:29:17.776 "num_base_bdevs_operational": 3, 00:29:17.776 "process": { 00:29:17.776 "type": "rebuild", 00:29:17.776 "target": "spare", 00:29:17.776 "progress": { 00:29:17.776 "blocks": 32768, 00:29:17.776 "percent": 50 00:29:17.776 } 00:29:17.776 }, 00:29:17.776 "base_bdevs_list": [ 00:29:17.776 { 00:29:17.776 "name": "spare", 00:29:17.776 "uuid": "55015c86-3056-53e9-853e-2424f50c3192", 00:29:17.776 "is_configured": true, 00:29:17.776 "data_offset": 0, 00:29:17.776 "data_size": 65536 00:29:17.776 }, 00:29:17.776 { 
00:29:17.776 "name": null, 00:29:17.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:17.776 "is_configured": false, 00:29:17.776 "data_offset": 0, 00:29:17.776 "data_size": 65536 00:29:17.776 }, 00:29:17.776 { 00:29:17.776 "name": "BaseBdev3", 00:29:17.776 "uuid": "e7b8f955-0316-50ac-bd2c-511cdb0d906d", 00:29:17.776 "is_configured": true, 00:29:17.776 "data_offset": 0, 00:29:17.776 "data_size": 65536 00:29:17.776 }, 00:29:17.776 { 00:29:17.776 "name": "BaseBdev4", 00:29:17.776 "uuid": "402156d8-f537-57f1-b5bf-dcec472d73bb", 00:29:17.776 "is_configured": true, 00:29:17.776 "data_offset": 0, 00:29:17.776 "data_size": 65536 00:29:17.776 } 00:29:17.776 ] 00:29:17.776 }' 00:29:17.776 18:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:17.776 18:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:17.776 18:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:17.776 18:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:17.776 18:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=915 00:29:17.776 18:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:17.776 18:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:17.776 18:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:17.776 18:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:17.776 18:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:17.776 18:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:17.776 18:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:17.776 18:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:18.034 18:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:18.034 "name": "raid_bdev1", 00:29:18.034 "uuid": "5508ecaf-73b1-4a1c-afed-b4f93c22e871", 00:29:18.034 "strip_size_kb": 0, 00:29:18.034 "state": "online", 00:29:18.034 "raid_level": "raid1", 00:29:18.034 "superblock": false, 00:29:18.034 "num_base_bdevs": 4, 00:29:18.034 "num_base_bdevs_discovered": 3, 00:29:18.034 "num_base_bdevs_operational": 3, 00:29:18.034 "process": { 00:29:18.034 "type": "rebuild", 00:29:18.034 "target": "spare", 00:29:18.035 "progress": { 00:29:18.035 "blocks": 40960, 00:29:18.035 "percent": 62 00:29:18.035 } 00:29:18.035 }, 00:29:18.035 "base_bdevs_list": [ 00:29:18.035 { 00:29:18.035 "name": "spare", 00:29:18.035 "uuid": "55015c86-3056-53e9-853e-2424f50c3192", 00:29:18.035 "is_configured": true, 00:29:18.035 "data_offset": 0, 00:29:18.035 "data_size": 65536 00:29:18.035 }, 00:29:18.035 { 00:29:18.035 "name": null, 00:29:18.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:18.035 "is_configured": false, 00:29:18.035 "data_offset": 0, 00:29:18.035 "data_size": 65536 00:29:18.035 }, 00:29:18.035 { 00:29:18.035 "name": "BaseBdev3", 00:29:18.035 "uuid": "e7b8f955-0316-50ac-bd2c-511cdb0d906d", 00:29:18.035 "is_configured": true, 00:29:18.035 "data_offset": 0, 00:29:18.035 "data_size": 65536 00:29:18.035 }, 00:29:18.035 { 
00:29:18.035 "name": "BaseBdev4", 00:29:18.035 "uuid": "402156d8-f537-57f1-b5bf-dcec472d73bb", 00:29:18.035 "is_configured": true, 00:29:18.035 "data_offset": 0, 00:29:18.035 "data_size": 65536 00:29:18.035 } 00:29:18.035 ] 00:29:18.035 }' 00:29:18.035 18:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:18.035 18:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:18.035 18:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:18.035 18:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:18.035 18:56:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:29:19.409 18:56:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:19.409 18:56:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:19.409 18:56:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:19.409 18:56:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:19.409 18:56:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:19.409 18:56:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:19.409 18:56:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:19.409 18:56:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:19.409 [2024-07-25 18:56:19.668528] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:19.409 [2024-07-25 18:56:19.668606] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:19.409 [2024-07-25 18:56:19.668690] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:19.409 18:56:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:19.409 "name": "raid_bdev1", 00:29:19.409 "uuid": "5508ecaf-73b1-4a1c-afed-b4f93c22e871", 00:29:19.409 "strip_size_kb": 0, 00:29:19.409 "state": "online", 00:29:19.409 "raid_level": "raid1", 00:29:19.409 "superblock": false, 00:29:19.409 "num_base_bdevs": 4, 00:29:19.410 "num_base_bdevs_discovered": 3, 00:29:19.410 "num_base_bdevs_operational": 3, 00:29:19.410 "base_bdevs_list": [ 00:29:19.410 { 00:29:19.410 "name": "spare", 00:29:19.410 "uuid": "55015c86-3056-53e9-853e-2424f50c3192", 00:29:19.410 "is_configured": true, 00:29:19.410 "data_offset": 0, 00:29:19.410 "data_size": 65536 00:29:19.410 }, 00:29:19.410 { 00:29:19.410 "name": null, 00:29:19.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:19.410 "is_configured": false, 00:29:19.410 "data_offset": 0, 00:29:19.410 "data_size": 65536 00:29:19.410 }, 00:29:19.410 { 00:29:19.410 "name": "BaseBdev3", 00:29:19.410 "uuid": "e7b8f955-0316-50ac-bd2c-511cdb0d906d", 00:29:19.410 "is_configured": true, 00:29:19.410 "data_offset": 0, 00:29:19.410 "data_size": 65536 00:29:19.410 }, 00:29:19.410 { 00:29:19.410 "name": "BaseBdev4", 00:29:19.410 "uuid": "402156d8-f537-57f1-b5bf-dcec472d73bb", 00:29:19.410 "is_configured": true, 00:29:19.410 "data_offset": 0, 00:29:19.410 "data_size": 65536 00:29:19.410 } 00:29:19.410 ] 00:29:19.410 }' 00:29:19.410 18:56:19 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:19.410 18:56:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:19.410 18:56:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:19.410 18:56:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:19.410 18:56:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:29:19.410 18:56:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:19.410 18:56:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:19.410 18:56:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:19.410 18:56:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:19.410 18:56:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:19.410 18:56:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:19.410 18:56:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:19.668 18:56:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:19.668 "name": "raid_bdev1", 00:29:19.668 "uuid": "5508ecaf-73b1-4a1c-afed-b4f93c22e871", 00:29:19.668 "strip_size_kb": 0, 00:29:19.668 "state": "online", 00:29:19.668 "raid_level": "raid1", 00:29:19.668 "superblock": false, 00:29:19.668 "num_base_bdevs": 4, 00:29:19.668 "num_base_bdevs_discovered": 3, 00:29:19.668 "num_base_bdevs_operational": 3, 00:29:19.668 "base_bdevs_list": [ 00:29:19.668 { 00:29:19.668 "name": "spare", 00:29:19.668 "uuid": "55015c86-3056-53e9-853e-2424f50c3192", 00:29:19.668 "is_configured": true, 00:29:19.668 "data_offset": 0, 00:29:19.668 "data_size": 65536 00:29:19.668 }, 00:29:19.668 { 00:29:19.668 "name": null, 00:29:19.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:19.668 "is_configured": false, 00:29:19.668 "data_offset": 0, 00:29:19.668 "data_size": 65536 00:29:19.668 }, 00:29:19.668 { 00:29:19.668 "name": "BaseBdev3", 00:29:19.668 "uuid": "e7b8f955-0316-50ac-bd2c-511cdb0d906d", 00:29:19.668 "is_configured": true, 00:29:19.668 "data_offset": 0, 00:29:19.668 "data_size": 65536 00:29:19.668 }, 00:29:19.668 { 00:29:19.668 "name": "BaseBdev4", 00:29:19.668 "uuid": "402156d8-f537-57f1-b5bf-dcec472d73bb", 00:29:19.668 "is_configured": true, 00:29:19.668 "data_offset": 0, 00:29:19.668 "data_size": 65536 00:29:19.668 } 00:29:19.668 ] 00:29:19.668 }' 00:29:19.668 18:56:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:19.668 18:56:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:19.668 18:56:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:19.668 18:56:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:19.668 18:56:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:19.668 18:56:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:19.668 18:56:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:19.668 
18:56:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:19.668 18:56:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:19.668 18:56:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:19.668 18:56:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:19.668 18:56:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:19.668 18:56:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:19.668 18:56:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:19.668 18:56:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:19.668 18:56:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:19.927 18:56:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:19.927 "name": "raid_bdev1", 00:29:19.927 "uuid": "5508ecaf-73b1-4a1c-afed-b4f93c22e871", 00:29:19.927 "strip_size_kb": 0, 00:29:19.927 "state": "online", 00:29:19.927 "raid_level": "raid1", 00:29:19.927 "superblock": false, 00:29:19.927 "num_base_bdevs": 4, 00:29:19.927 "num_base_bdevs_discovered": 3, 00:29:19.927 "num_base_bdevs_operational": 3, 00:29:19.927 "base_bdevs_list": [ 00:29:19.927 { 00:29:19.927 "name": "spare", 00:29:19.927 "uuid": "55015c86-3056-53e9-853e-2424f50c3192", 00:29:19.927 "is_configured": true, 00:29:19.927 "data_offset": 0, 00:29:19.927 "data_size": 65536 00:29:19.927 }, 00:29:19.927 { 00:29:19.927 "name": null, 00:29:19.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:19.927 "is_configured": false, 00:29:19.927 "data_offset": 0, 00:29:19.927 "data_size": 65536 00:29:19.927 }, 00:29:19.927 { 00:29:19.927 "name": "BaseBdev3", 00:29:19.927 "uuid": "e7b8f955-0316-50ac-bd2c-511cdb0d906d", 00:29:19.927 "is_configured": true, 00:29:19.927 "data_offset": 0, 00:29:19.927 "data_size": 65536 00:29:19.927 }, 00:29:19.927 { 00:29:19.927 "name": "BaseBdev4", 00:29:19.927 "uuid": "402156d8-f537-57f1-b5bf-dcec472d73bb", 00:29:19.927 "is_configured": true, 00:29:19.927 "data_offset": 0, 00:29:19.927 "data_size": 65536 00:29:19.927 } 00:29:19.927 ] 00:29:19.927 }' 00:29:19.927 18:56:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:19.927 18:56:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:20.494 18:56:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:20.752 [2024-07-25 18:56:21.314719] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:20.752 [2024-07-25 18:56:21.314762] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:20.752 [2024-07-25 18:56:21.314852] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:20.752 [2024-07-25 18:56:21.314968] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:20.752 [2024-07-25 18:56:21.314980] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:29:21.010 18:56:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:21.010 18:56:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:29:21.010 18:56:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:29:21.010 18:56:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:29:21.010 18:56:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:29:21.010 18:56:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:21.010 18:56:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:21.010 18:56:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:29:21.010 18:56:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:21.010 18:56:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:21.010 18:56:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:21.010 18:56:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:21.010 18:56:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:21.010 18:56:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:21.010 18:56:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:21.269 /dev/nbd0 00:29:21.269 18:56:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:21.269 18:56:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:21.269 18:56:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:21.269 18:56:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:29:21.269 18:56:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:21.269 18:56:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:21.270 18:56:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:21.270 18:56:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:29:21.270 18:56:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:21.270 18:56:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:21.270 18:56:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:21.270 1+0 records in 00:29:21.270 1+0 records out 00:29:21.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274383 s, 14.9 MB/s 00:29:21.270 18:56:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:21.270 18:56:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:29:21.270 18:56:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:21.270 18:56:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:21.270 18:56:21 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@889 -- # return 0 00:29:21.270 18:56:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:21.270 18:56:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:21.270 18:56:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:29:21.528 /dev/nbd1 00:29:21.528 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:21.528 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:21.528 18:56:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:29:21.528 18:56:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:29:21.529 18:56:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:21.529 18:56:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:21.529 18:56:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:29:21.529 18:56:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:29:21.529 18:56:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:21.529 18:56:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:21.529 18:56:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:21.529 1+0 records in 00:29:21.529 1+0 records out 00:29:21.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317257 s, 12.9 MB/s 00:29:21.529 18:56:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:21.529 18:56:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:29:21.529 18:56:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:21.529 18:56:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:21.529 18:56:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:29:21.529 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:21.529 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:21.529 18:56:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:21.787 18:56:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:29:21.787 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:21.787 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:21.787 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:21.787 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:21.787 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:21.787 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:22.046 18:56:22 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:22.046 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:22.046 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:22.046 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:22.046 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:22.046 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:22.046 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:22.046 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:22.046 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:22.046 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:22.305 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:22.305 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:22.305 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:22.305 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:22.305 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:22.305 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:22.305 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:22.305 18:56:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:22.305 18:56:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:29:22.305 18:56:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 146210 00:29:22.305 18:56:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 146210 ']' 00:29:22.305 18:56:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 146210 00:29:22.305 18:56:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:29:22.305 18:56:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:22.305 18:56:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 146210 00:29:22.305 18:56:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:22.305 18:56:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:22.305 18:56:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 146210' 00:29:22.305 killing process with pid 146210 00:29:22.305 18:56:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 146210 00:29:22.305 Received shutdown signal, test time was about 60.000000 seconds 00:29:22.305 00:29:22.305 Latency(us) 00:29:22.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.305 =================================================================================================================== 00:29:22.305 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:22.305 [2024-07-25 18:56:22.723469] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:22.305 18:56:22 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 146210 00:29:22.873 [2024-07-25 18:56:23.262032] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:24.246 18:56:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:29:24.246 00:29:24.246 real 0m23.649s 00:29:24.246 user 0m31.500s 00:29:24.246 sys 0m4.553s 00:29:24.246 18:56:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:24.246 ************************************ 00:29:24.246 END TEST raid_rebuild_test 00:29:24.246 ************************************ 00:29:24.246 18:56:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.246 18:56:24 bdev_raid -- bdev/bdev_raid.sh@958 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:29:24.246 18:56:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:29:24.246 18:56:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:24.246 18:56:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:24.246 ************************************ 00:29:24.246 START TEST raid_rebuild_test_sb 00:29:24.246 ************************************ 00:29:24.246 18:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:29:24.246 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:29:24.246 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:29:24.246 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:29:24.246 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:29:24.246 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:29:24.246 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:29:24.246 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:24.246 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # echo BaseBdev1 00:29:24.246 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:24.246 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:24.246 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # echo BaseBdev2 00:29:24.246 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:24.246 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:24.246 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # echo BaseBdev3 00:29:24.246 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:24.246 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:24.246 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # echo BaseBdev4 00:29:24.246 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:24.246 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:24.247 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:24.247 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:29:24.247 
18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:29:24.247 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:29:24.247 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:29:24.247 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:29:24.247 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:29:24.247 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:29:24.247 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:29:24.247 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:29:24.247 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:29:24.247 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=146772 00:29:24.247 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 146772 /var/tmp/spdk-raid.sock 00:29:24.247 18:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 146772 ']' 00:29:24.247 18:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:24.247 18:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:24.247 18:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:24.247 18:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:24.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:24.247 18:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:24.247 18:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:24.505 [2024-07-25 18:56:24.908504] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:24.505 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:24.505 Zero copy mechanism will not be used. 
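For reference, the RPC sequence that the raid_rebuild_test_sb fixture drives against the bdevperf instance started above can be reproduced by hand roughly as follows (socket path, binary paths and arguments are taken verbatim from the xtrace in this log; only BaseBdev1 is shown, and the same malloc/passthru pattern repeats for BaseBdev2-4 and the spare bdev later in the log):
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  # create a malloc base bdev and wrap it in a passthru bdev, as the test does
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
  # assemble the raid1 bdev with superblock (-s) over the four base bdevs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
  # query raid state, as verify_raid_bdev_state/verify_raid_bdev_process do
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'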
00:29:24.505 [2024-07-25 18:56:24.908724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146772 ] 00:29:24.763 [2024-07-25 18:56:25.093744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.022 [2024-07-25 18:56:25.355071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.329 [2024-07-25 18:56:25.627334] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:25.329 18:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:25.329 18:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:29:25.329 18:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:25.329 18:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:25.588 BaseBdev1_malloc 00:29:25.588 18:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:25.847 [2024-07-25 18:56:26.278850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:25.847 [2024-07-25 18:56:26.278977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:25.847 [2024-07-25 18:56:26.279017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:29:25.847 [2024-07-25 18:56:26.279046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:25.847 [2024-07-25 18:56:26.281808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:25.847 [2024-07-25 18:56:26.281857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:25.847 BaseBdev1 00:29:25.847 18:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:25.847 18:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:26.106 BaseBdev2_malloc 00:29:26.106 18:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:26.365 [2024-07-25 18:56:26.708000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:26.365 [2024-07-25 18:56:26.708126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:26.365 [2024-07-25 18:56:26.708168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:29:26.365 [2024-07-25 18:56:26.708191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:26.365 [2024-07-25 18:56:26.710872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:26.365 [2024-07-25 18:56:26.710934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:26.365 BaseBdev2 00:29:26.365 18:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # 
for bdev in "${base_bdevs[@]}" 00:29:26.365 18:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:26.365 BaseBdev3_malloc 00:29:26.624 18:56:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:26.624 [2024-07-25 18:56:27.107265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:26.624 [2024-07-25 18:56:27.107386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:26.624 [2024-07-25 18:56:27.107424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:29:26.624 [2024-07-25 18:56:27.107453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:26.624 [2024-07-25 18:56:27.110083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:26.624 [2024-07-25 18:56:27.110138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:26.624 BaseBdev3 00:29:26.624 18:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:26.624 18:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:26.883 BaseBdev4_malloc 00:29:26.883 18:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:27.141 [2024-07-25 18:56:27.511113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:27.141 [2024-07-25 18:56:27.511223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:27.142 [2024-07-25 18:56:27.511261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:29:27.142 [2024-07-25 18:56:27.511288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:27.142 [2024-07-25 18:56:27.513960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:27.142 [2024-07-25 18:56:27.514015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:27.142 BaseBdev4 00:29:27.142 18:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:27.400 spare_malloc 00:29:27.400 18:56:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:27.659 spare_delay 00:29:27.659 18:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:27.659 [2024-07-25 18:56:28.167538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:27.659 [2024-07-25 18:56:28.167662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:27.659 [2024-07-25 18:56:28.167698] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000a280 00:29:27.659 [2024-07-25 18:56:28.167735] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:27.659 [2024-07-25 18:56:28.170464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:27.659 [2024-07-25 18:56:28.170524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:27.659 spare 00:29:27.659 18:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:27.917 [2024-07-25 18:56:28.343628] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:27.917 [2024-07-25 18:56:28.345945] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:27.917 [2024-07-25 18:56:28.346010] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:27.917 [2024-07-25 18:56:28.346057] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:27.917 [2024-07-25 18:56:28.346234] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:29:27.917 [2024-07-25 18:56:28.346243] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:27.917 [2024-07-25 18:56:28.346410] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:27.917 [2024-07-25 18:56:28.346788] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:29:27.917 [2024-07-25 18:56:28.346806] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:29:27.917 [2024-07-25 18:56:28.346978] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:27.917 18:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:27.917 18:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:27.917 18:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:27.917 18:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:27.917 18:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:27.917 18:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:27.917 18:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:27.917 18:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:27.917 18:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:27.917 18:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:27.917 18:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:27.917 18:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:28.176 18:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:28.176 "name": "raid_bdev1", 00:29:28.176 "uuid": "c9112bf5-331a-4196-8cd2-eeaea89ebbd2", 
00:29:28.176 "strip_size_kb": 0, 00:29:28.176 "state": "online", 00:29:28.176 "raid_level": "raid1", 00:29:28.176 "superblock": true, 00:29:28.176 "num_base_bdevs": 4, 00:29:28.176 "num_base_bdevs_discovered": 4, 00:29:28.176 "num_base_bdevs_operational": 4, 00:29:28.176 "base_bdevs_list": [ 00:29:28.176 { 00:29:28.176 "name": "BaseBdev1", 00:29:28.176 "uuid": "1a1a41fb-ad31-5a74-8458-1498314f96e2", 00:29:28.176 "is_configured": true, 00:29:28.176 "data_offset": 2048, 00:29:28.176 "data_size": 63488 00:29:28.176 }, 00:29:28.176 { 00:29:28.176 "name": "BaseBdev2", 00:29:28.176 "uuid": "64c22f5b-2565-547b-b93f-edf4ed4d485c", 00:29:28.176 "is_configured": true, 00:29:28.176 "data_offset": 2048, 00:29:28.176 "data_size": 63488 00:29:28.176 }, 00:29:28.176 { 00:29:28.176 "name": "BaseBdev3", 00:29:28.176 "uuid": "6cac5b26-51da-5984-bfc7-1a6367a56386", 00:29:28.176 "is_configured": true, 00:29:28.176 "data_offset": 2048, 00:29:28.176 "data_size": 63488 00:29:28.176 }, 00:29:28.176 { 00:29:28.176 "name": "BaseBdev4", 00:29:28.176 "uuid": "3e2a08d0-8e81-5e9f-801a-4842f5c67911", 00:29:28.176 "is_configured": true, 00:29:28.176 "data_offset": 2048, 00:29:28.176 "data_size": 63488 00:29:28.176 } 00:29:28.176 ] 00:29:28.176 }' 00:29:28.176 18:56:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:28.176 18:56:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.435 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:28.435 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:29:28.694 [2024-07-25 18:56:29.268021] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:28.952 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:29:28.952 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:28.952 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:29.211 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:29:29.211 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:29:29.211 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:29:29.211 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:29:29.211 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:29:29.211 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:29.211 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:29.211 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:29.211 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:29.211 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:29.211 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:29:29.211 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:29.211 18:56:29 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:29.211 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:29.211 [2024-07-25 18:56:29.763925] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:29:29.211 /dev/nbd0 00:29:29.470 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:29.470 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:29.470 18:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:29.470 18:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:29:29.470 18:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:29.470 18:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:29.470 18:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:29.470 18:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:29:29.470 18:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:29.470 18:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:29.470 18:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:29.470 1+0 records in 00:29:29.470 1+0 records out 00:29:29.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298716 s, 13.7 MB/s 00:29:29.470 18:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:29.470 18:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:29:29.470 18:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:29.470 18:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:29.471 18:56:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:29:29.471 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:29.471 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:29.471 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:29:29.471 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:29:29.471 18:56:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:29:36.030 63488+0 records in 00:29:36.030 63488+0 records out 00:29:36.030 32505856 bytes (33 MB, 31 MiB) copied, 5.63113 s, 5.8 MB/s 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:36.030 18:56:35 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:36.030 [2024-07-25 18:56:35.726806] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:36.030 [2024-07-25 18:56:35.898565] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:36.030 18:56:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:36.030 18:56:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:36.030 "name": "raid_bdev1", 00:29:36.030 "uuid": "c9112bf5-331a-4196-8cd2-eeaea89ebbd2", 00:29:36.030 "strip_size_kb": 0, 00:29:36.030 "state": "online", 00:29:36.030 "raid_level": "raid1", 00:29:36.030 "superblock": true, 00:29:36.030 "num_base_bdevs": 4, 00:29:36.030 "num_base_bdevs_discovered": 3, 00:29:36.030 "num_base_bdevs_operational": 3, 00:29:36.030 "base_bdevs_list": [ 00:29:36.030 { 00:29:36.030 "name": null, 00:29:36.030 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:29:36.030 "is_configured": false, 00:29:36.030 "data_offset": 2048, 00:29:36.030 "data_size": 63488 00:29:36.030 }, 00:29:36.030 { 00:29:36.030 "name": "BaseBdev2", 00:29:36.030 "uuid": "64c22f5b-2565-547b-b93f-edf4ed4d485c", 00:29:36.030 "is_configured": true, 00:29:36.030 "data_offset": 2048, 00:29:36.030 "data_size": 63488 00:29:36.030 }, 00:29:36.030 { 00:29:36.030 "name": "BaseBdev3", 00:29:36.030 "uuid": "6cac5b26-51da-5984-bfc7-1a6367a56386", 00:29:36.030 "is_configured": true, 00:29:36.030 "data_offset": 2048, 00:29:36.030 "data_size": 63488 00:29:36.030 }, 00:29:36.030 { 00:29:36.030 "name": "BaseBdev4", 00:29:36.030 "uuid": "3e2a08d0-8e81-5e9f-801a-4842f5c67911", 00:29:36.030 "is_configured": true, 00:29:36.030 "data_offset": 2048, 00:29:36.030 "data_size": 63488 00:29:36.030 } 00:29:36.030 ] 00:29:36.030 }' 00:29:36.030 18:56:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:36.030 18:56:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:36.288 18:56:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:36.547 [2024-07-25 18:56:36.918752] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:36.547 [2024-07-25 18:56:36.935073] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:29:36.547 [2024-07-25 18:56:36.937382] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:36.547 18:56:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:37.482 18:56:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:37.482 18:56:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:37.482 18:56:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:37.482 18:56:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:37.482 18:56:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:37.482 18:56:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:37.482 18:56:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.741 18:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:37.741 "name": "raid_bdev1", 00:29:37.741 "uuid": "c9112bf5-331a-4196-8cd2-eeaea89ebbd2", 00:29:37.741 "strip_size_kb": 0, 00:29:37.741 "state": "online", 00:29:37.741 "raid_level": "raid1", 00:29:37.741 "superblock": true, 00:29:37.741 "num_base_bdevs": 4, 00:29:37.741 "num_base_bdevs_discovered": 4, 00:29:37.741 "num_base_bdevs_operational": 4, 00:29:37.741 "process": { 00:29:37.741 "type": "rebuild", 00:29:37.741 "target": "spare", 00:29:37.741 "progress": { 00:29:37.741 "blocks": 24576, 00:29:37.741 "percent": 38 00:29:37.741 } 00:29:37.741 }, 00:29:37.741 "base_bdevs_list": [ 00:29:37.741 { 00:29:37.741 "name": "spare", 00:29:37.741 "uuid": "aaa2b9d3-f64d-518b-bb7a-be15e4f83801", 00:29:37.741 "is_configured": true, 00:29:37.741 "data_offset": 2048, 00:29:37.741 "data_size": 63488 00:29:37.741 }, 00:29:37.741 { 
00:29:37.741 "name": "BaseBdev2", 00:29:37.741 "uuid": "64c22f5b-2565-547b-b93f-edf4ed4d485c", 00:29:37.741 "is_configured": true, 00:29:37.741 "data_offset": 2048, 00:29:37.741 "data_size": 63488 00:29:37.741 }, 00:29:37.741 { 00:29:37.741 "name": "BaseBdev3", 00:29:37.741 "uuid": "6cac5b26-51da-5984-bfc7-1a6367a56386", 00:29:37.741 "is_configured": true, 00:29:37.741 "data_offset": 2048, 00:29:37.741 "data_size": 63488 00:29:37.741 }, 00:29:37.741 { 00:29:37.741 "name": "BaseBdev4", 00:29:37.741 "uuid": "3e2a08d0-8e81-5e9f-801a-4842f5c67911", 00:29:37.741 "is_configured": true, 00:29:37.741 "data_offset": 2048, 00:29:37.741 "data_size": 63488 00:29:37.741 } 00:29:37.741 ] 00:29:37.741 }' 00:29:37.741 18:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:37.741 18:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:37.741 18:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:37.741 18:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:37.741 18:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:38.000 [2024-07-25 18:56:38.543586] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:38.000 [2024-07-25 18:56:38.548826] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:38.000 [2024-07-25 18:56:38.548924] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:38.000 [2024-07-25 18:56:38.548942] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:38.000 [2024-07-25 18:56:38.548949] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:38.259 18:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:38.259 18:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:38.259 18:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:38.259 18:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:38.259 18:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:38.259 18:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:38.259 18:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:38.259 18:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:38.259 18:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:38.259 18:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:38.259 18:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:38.259 18:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:38.259 18:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:38.259 "name": "raid_bdev1", 00:29:38.259 "uuid": 
"c9112bf5-331a-4196-8cd2-eeaea89ebbd2", 00:29:38.259 "strip_size_kb": 0, 00:29:38.259 "state": "online", 00:29:38.259 "raid_level": "raid1", 00:29:38.259 "superblock": true, 00:29:38.259 "num_base_bdevs": 4, 00:29:38.259 "num_base_bdevs_discovered": 3, 00:29:38.259 "num_base_bdevs_operational": 3, 00:29:38.259 "base_bdevs_list": [ 00:29:38.259 { 00:29:38.259 "name": null, 00:29:38.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:38.259 "is_configured": false, 00:29:38.259 "data_offset": 2048, 00:29:38.259 "data_size": 63488 00:29:38.259 }, 00:29:38.259 { 00:29:38.259 "name": "BaseBdev2", 00:29:38.259 "uuid": "64c22f5b-2565-547b-b93f-edf4ed4d485c", 00:29:38.259 "is_configured": true, 00:29:38.259 "data_offset": 2048, 00:29:38.259 "data_size": 63488 00:29:38.259 }, 00:29:38.259 { 00:29:38.259 "name": "BaseBdev3", 00:29:38.259 "uuid": "6cac5b26-51da-5984-bfc7-1a6367a56386", 00:29:38.259 "is_configured": true, 00:29:38.259 "data_offset": 2048, 00:29:38.259 "data_size": 63488 00:29:38.259 }, 00:29:38.259 { 00:29:38.259 "name": "BaseBdev4", 00:29:38.259 "uuid": "3e2a08d0-8e81-5e9f-801a-4842f5c67911", 00:29:38.259 "is_configured": true, 00:29:38.259 "data_offset": 2048, 00:29:38.259 "data_size": 63488 00:29:38.259 } 00:29:38.259 ] 00:29:38.259 }' 00:29:38.259 18:56:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:38.259 18:56:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:38.827 18:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:38.827 18:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:38.827 18:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:38.827 18:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:38.827 18:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:38.827 18:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:38.827 18:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:39.086 18:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:39.086 "name": "raid_bdev1", 00:29:39.086 "uuid": "c9112bf5-331a-4196-8cd2-eeaea89ebbd2", 00:29:39.086 "strip_size_kb": 0, 00:29:39.086 "state": "online", 00:29:39.086 "raid_level": "raid1", 00:29:39.086 "superblock": true, 00:29:39.086 "num_base_bdevs": 4, 00:29:39.086 "num_base_bdevs_discovered": 3, 00:29:39.086 "num_base_bdevs_operational": 3, 00:29:39.086 "base_bdevs_list": [ 00:29:39.086 { 00:29:39.086 "name": null, 00:29:39.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:39.086 "is_configured": false, 00:29:39.086 "data_offset": 2048, 00:29:39.086 "data_size": 63488 00:29:39.086 }, 00:29:39.086 { 00:29:39.086 "name": "BaseBdev2", 00:29:39.086 "uuid": "64c22f5b-2565-547b-b93f-edf4ed4d485c", 00:29:39.086 "is_configured": true, 00:29:39.086 "data_offset": 2048, 00:29:39.086 "data_size": 63488 00:29:39.086 }, 00:29:39.086 { 00:29:39.086 "name": "BaseBdev3", 00:29:39.086 "uuid": "6cac5b26-51da-5984-bfc7-1a6367a56386", 00:29:39.086 "is_configured": true, 00:29:39.086 "data_offset": 2048, 00:29:39.086 "data_size": 63488 00:29:39.086 }, 00:29:39.086 { 00:29:39.086 "name": "BaseBdev4", 00:29:39.086 
"uuid": "3e2a08d0-8e81-5e9f-801a-4842f5c67911", 00:29:39.086 "is_configured": true, 00:29:39.086 "data_offset": 2048, 00:29:39.086 "data_size": 63488 00:29:39.086 } 00:29:39.086 ] 00:29:39.086 }' 00:29:39.086 18:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:39.086 18:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:39.086 18:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:39.086 18:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:39.086 18:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:39.345 [2024-07-25 18:56:39.784680] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:39.345 [2024-07-25 18:56:39.799803] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:29:39.345 [2024-07-25 18:56:39.802056] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:39.345 18:56:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:29:40.281 18:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:40.281 18:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:40.281 18:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:40.281 18:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:40.281 18:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:40.281 18:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:40.281 18:56:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.540 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:40.540 "name": "raid_bdev1", 00:29:40.540 "uuid": "c9112bf5-331a-4196-8cd2-eeaea89ebbd2", 00:29:40.540 "strip_size_kb": 0, 00:29:40.540 "state": "online", 00:29:40.540 "raid_level": "raid1", 00:29:40.540 "superblock": true, 00:29:40.540 "num_base_bdevs": 4, 00:29:40.540 "num_base_bdevs_discovered": 4, 00:29:40.540 "num_base_bdevs_operational": 4, 00:29:40.540 "process": { 00:29:40.540 "type": "rebuild", 00:29:40.540 "target": "spare", 00:29:40.540 "progress": { 00:29:40.540 "blocks": 24576, 00:29:40.540 "percent": 38 00:29:40.540 } 00:29:40.540 }, 00:29:40.540 "base_bdevs_list": [ 00:29:40.540 { 00:29:40.540 "name": "spare", 00:29:40.540 "uuid": "aaa2b9d3-f64d-518b-bb7a-be15e4f83801", 00:29:40.540 "is_configured": true, 00:29:40.540 "data_offset": 2048, 00:29:40.540 "data_size": 63488 00:29:40.540 }, 00:29:40.540 { 00:29:40.540 "name": "BaseBdev2", 00:29:40.540 "uuid": "64c22f5b-2565-547b-b93f-edf4ed4d485c", 00:29:40.540 "is_configured": true, 00:29:40.540 "data_offset": 2048, 00:29:40.540 "data_size": 63488 00:29:40.540 }, 00:29:40.540 { 00:29:40.540 "name": "BaseBdev3", 00:29:40.540 "uuid": "6cac5b26-51da-5984-bfc7-1a6367a56386", 00:29:40.540 "is_configured": true, 00:29:40.540 "data_offset": 2048, 00:29:40.540 "data_size": 63488 00:29:40.540 
}, 00:29:40.540 { 00:29:40.540 "name": "BaseBdev4", 00:29:40.540 "uuid": "3e2a08d0-8e81-5e9f-801a-4842f5c67911", 00:29:40.540 "is_configured": true, 00:29:40.540 "data_offset": 2048, 00:29:40.540 "data_size": 63488 00:29:40.540 } 00:29:40.540 ] 00:29:40.540 }' 00:29:40.540 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:40.540 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:40.540 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:40.799 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:40.799 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:29:40.799 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:29:40.799 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:29:40.799 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:29:40.799 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:29:40.799 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:29:40.799 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:29:40.799 [2024-07-25 18:56:41.344272] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:41.058 [2024-07-25 18:56:41.514459] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:29:41.058 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:29:41.058 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:29:41.058 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:41.058 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:41.058 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:41.058 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:41.058 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:41.058 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:41.058 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:41.317 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:41.317 "name": "raid_bdev1", 00:29:41.317 "uuid": "c9112bf5-331a-4196-8cd2-eeaea89ebbd2", 00:29:41.317 "strip_size_kb": 0, 00:29:41.317 "state": "online", 00:29:41.317 "raid_level": "raid1", 00:29:41.317 "superblock": true, 00:29:41.317 "num_base_bdevs": 4, 00:29:41.317 "num_base_bdevs_discovered": 3, 00:29:41.317 "num_base_bdevs_operational": 3, 00:29:41.317 "process": { 00:29:41.317 "type": "rebuild", 00:29:41.317 "target": "spare", 00:29:41.317 "progress": { 00:29:41.317 "blocks": 34816, 00:29:41.317 "percent": 54 00:29:41.317 } 00:29:41.317 }, 00:29:41.317 
"base_bdevs_list": [ 00:29:41.317 { 00:29:41.317 "name": "spare", 00:29:41.317 "uuid": "aaa2b9d3-f64d-518b-bb7a-be15e4f83801", 00:29:41.317 "is_configured": true, 00:29:41.317 "data_offset": 2048, 00:29:41.317 "data_size": 63488 00:29:41.317 }, 00:29:41.317 { 00:29:41.317 "name": null, 00:29:41.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:41.317 "is_configured": false, 00:29:41.317 "data_offset": 2048, 00:29:41.318 "data_size": 63488 00:29:41.318 }, 00:29:41.318 { 00:29:41.318 "name": "BaseBdev3", 00:29:41.318 "uuid": "6cac5b26-51da-5984-bfc7-1a6367a56386", 00:29:41.318 "is_configured": true, 00:29:41.318 "data_offset": 2048, 00:29:41.318 "data_size": 63488 00:29:41.318 }, 00:29:41.318 { 00:29:41.318 "name": "BaseBdev4", 00:29:41.318 "uuid": "3e2a08d0-8e81-5e9f-801a-4842f5c67911", 00:29:41.318 "is_configured": true, 00:29:41.318 "data_offset": 2048, 00:29:41.318 "data_size": 63488 00:29:41.318 } 00:29:41.318 ] 00:29:41.318 }' 00:29:41.318 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:41.318 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:41.318 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:41.318 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:41.318 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=938 00:29:41.318 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:41.318 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:41.318 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:41.318 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:41.318 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:41.318 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:41.318 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:41.318 18:56:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:41.576 18:56:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:41.576 "name": "raid_bdev1", 00:29:41.576 "uuid": "c9112bf5-331a-4196-8cd2-eeaea89ebbd2", 00:29:41.576 "strip_size_kb": 0, 00:29:41.576 "state": "online", 00:29:41.576 "raid_level": "raid1", 00:29:41.577 "superblock": true, 00:29:41.577 "num_base_bdevs": 4, 00:29:41.577 "num_base_bdevs_discovered": 3, 00:29:41.577 "num_base_bdevs_operational": 3, 00:29:41.577 "process": { 00:29:41.577 "type": "rebuild", 00:29:41.577 "target": "spare", 00:29:41.577 "progress": { 00:29:41.577 "blocks": 43008, 00:29:41.577 "percent": 67 00:29:41.577 } 00:29:41.577 }, 00:29:41.577 "base_bdevs_list": [ 00:29:41.577 { 00:29:41.577 "name": "spare", 00:29:41.577 "uuid": "aaa2b9d3-f64d-518b-bb7a-be15e4f83801", 00:29:41.577 "is_configured": true, 00:29:41.577 "data_offset": 2048, 00:29:41.577 "data_size": 63488 00:29:41.577 }, 00:29:41.577 { 00:29:41.577 "name": null, 00:29:41.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:41.577 "is_configured": false, 
00:29:41.577 "data_offset": 2048, 00:29:41.577 "data_size": 63488 00:29:41.577 }, 00:29:41.577 { 00:29:41.577 "name": "BaseBdev3", 00:29:41.577 "uuid": "6cac5b26-51da-5984-bfc7-1a6367a56386", 00:29:41.577 "is_configured": true, 00:29:41.577 "data_offset": 2048, 00:29:41.577 "data_size": 63488 00:29:41.577 }, 00:29:41.577 { 00:29:41.577 "name": "BaseBdev4", 00:29:41.577 "uuid": "3e2a08d0-8e81-5e9f-801a-4842f5c67911", 00:29:41.577 "is_configured": true, 00:29:41.577 "data_offset": 2048, 00:29:41.577 "data_size": 63488 00:29:41.577 } 00:29:41.577 ] 00:29:41.577 }' 00:29:41.577 18:56:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:41.577 18:56:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:41.577 18:56:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:41.577 18:56:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:41.577 18:56:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:29:42.512 [2024-07-25 18:56:43.025044] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:42.512 [2024-07-25 18:56:43.025140] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:42.512 [2024-07-25 18:56:43.025296] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:42.771 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:42.771 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:42.771 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:42.771 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:42.771 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:42.771 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:42.771 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:42.771 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:43.030 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:43.030 "name": "raid_bdev1", 00:29:43.030 "uuid": "c9112bf5-331a-4196-8cd2-eeaea89ebbd2", 00:29:43.030 "strip_size_kb": 0, 00:29:43.030 "state": "online", 00:29:43.030 "raid_level": "raid1", 00:29:43.030 "superblock": true, 00:29:43.030 "num_base_bdevs": 4, 00:29:43.030 "num_base_bdevs_discovered": 3, 00:29:43.030 "num_base_bdevs_operational": 3, 00:29:43.030 "base_bdevs_list": [ 00:29:43.030 { 00:29:43.030 "name": "spare", 00:29:43.030 "uuid": "aaa2b9d3-f64d-518b-bb7a-be15e4f83801", 00:29:43.030 "is_configured": true, 00:29:43.030 "data_offset": 2048, 00:29:43.030 "data_size": 63488 00:29:43.030 }, 00:29:43.030 { 00:29:43.030 "name": null, 00:29:43.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:43.030 "is_configured": false, 00:29:43.030 "data_offset": 2048, 00:29:43.030 "data_size": 63488 00:29:43.030 }, 00:29:43.030 { 00:29:43.030 "name": "BaseBdev3", 00:29:43.030 "uuid": "6cac5b26-51da-5984-bfc7-1a6367a56386", 00:29:43.030 "is_configured": true, 
00:29:43.030 "data_offset": 2048, 00:29:43.030 "data_size": 63488 00:29:43.030 }, 00:29:43.030 { 00:29:43.030 "name": "BaseBdev4", 00:29:43.030 "uuid": "3e2a08d0-8e81-5e9f-801a-4842f5c67911", 00:29:43.030 "is_configured": true, 00:29:43.030 "data_offset": 2048, 00:29:43.030 "data_size": 63488 00:29:43.030 } 00:29:43.030 ] 00:29:43.030 }' 00:29:43.030 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:43.030 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:43.030 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:43.030 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:43.030 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:29:43.030 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:43.030 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:43.030 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:43.030 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:43.030 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:43.030 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:43.030 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:43.288 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:43.288 "name": "raid_bdev1", 00:29:43.288 "uuid": "c9112bf5-331a-4196-8cd2-eeaea89ebbd2", 00:29:43.288 "strip_size_kb": 0, 00:29:43.288 "state": "online", 00:29:43.288 "raid_level": "raid1", 00:29:43.288 "superblock": true, 00:29:43.288 "num_base_bdevs": 4, 00:29:43.288 "num_base_bdevs_discovered": 3, 00:29:43.288 "num_base_bdevs_operational": 3, 00:29:43.288 "base_bdevs_list": [ 00:29:43.288 { 00:29:43.288 "name": "spare", 00:29:43.288 "uuid": "aaa2b9d3-f64d-518b-bb7a-be15e4f83801", 00:29:43.288 "is_configured": true, 00:29:43.288 "data_offset": 2048, 00:29:43.288 "data_size": 63488 00:29:43.288 }, 00:29:43.288 { 00:29:43.288 "name": null, 00:29:43.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:43.288 "is_configured": false, 00:29:43.288 "data_offset": 2048, 00:29:43.288 "data_size": 63488 00:29:43.288 }, 00:29:43.288 { 00:29:43.288 "name": "BaseBdev3", 00:29:43.288 "uuid": "6cac5b26-51da-5984-bfc7-1a6367a56386", 00:29:43.288 "is_configured": true, 00:29:43.288 "data_offset": 2048, 00:29:43.288 "data_size": 63488 00:29:43.288 }, 00:29:43.288 { 00:29:43.288 "name": "BaseBdev4", 00:29:43.288 "uuid": "3e2a08d0-8e81-5e9f-801a-4842f5c67911", 00:29:43.288 "is_configured": true, 00:29:43.288 "data_offset": 2048, 00:29:43.288 "data_size": 63488 00:29:43.288 } 00:29:43.288 ] 00:29:43.288 }' 00:29:43.288 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:43.288 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:43.288 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:43.288 18:56:43 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:43.288 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:43.288 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:43.288 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:43.288 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:43.288 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:43.288 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:43.288 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:43.288 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:43.288 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:43.288 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:43.288 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:43.288 18:56:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:43.547 18:56:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:43.547 "name": "raid_bdev1", 00:29:43.547 "uuid": "c9112bf5-331a-4196-8cd2-eeaea89ebbd2", 00:29:43.547 "strip_size_kb": 0, 00:29:43.547 "state": "online", 00:29:43.547 "raid_level": "raid1", 00:29:43.547 "superblock": true, 00:29:43.547 "num_base_bdevs": 4, 00:29:43.547 "num_base_bdevs_discovered": 3, 00:29:43.547 "num_base_bdevs_operational": 3, 00:29:43.547 "base_bdevs_list": [ 00:29:43.547 { 00:29:43.547 "name": "spare", 00:29:43.547 "uuid": "aaa2b9d3-f64d-518b-bb7a-be15e4f83801", 00:29:43.547 "is_configured": true, 00:29:43.547 "data_offset": 2048, 00:29:43.547 "data_size": 63488 00:29:43.547 }, 00:29:43.547 { 00:29:43.547 "name": null, 00:29:43.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:43.547 "is_configured": false, 00:29:43.547 "data_offset": 2048, 00:29:43.547 "data_size": 63488 00:29:43.547 }, 00:29:43.547 { 00:29:43.547 "name": "BaseBdev3", 00:29:43.547 "uuid": "6cac5b26-51da-5984-bfc7-1a6367a56386", 00:29:43.547 "is_configured": true, 00:29:43.547 "data_offset": 2048, 00:29:43.547 "data_size": 63488 00:29:43.547 }, 00:29:43.547 { 00:29:43.547 "name": "BaseBdev4", 00:29:43.547 "uuid": "3e2a08d0-8e81-5e9f-801a-4842f5c67911", 00:29:43.547 "is_configured": true, 00:29:43.547 "data_offset": 2048, 00:29:43.547 "data_size": 63488 00:29:43.547 } 00:29:43.547 ] 00:29:43.547 }' 00:29:43.547 18:56:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:43.547 18:56:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:44.115 18:56:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:44.374 [2024-07-25 18:56:44.899303] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:44.374 [2024-07-25 18:56:44.899351] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:44.374 [2024-07-25 18:56:44.899463] 
bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:44.374 [2024-07-25 18:56:44.899568] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:44.374 [2024-07-25 18:56:44.899579] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:29:44.374 18:56:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:29:44.374 18:56:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:44.633 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:29:44.633 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:29:44.633 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:29:44.633 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:44.633 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:44.633 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:29:44.633 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:44.633 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:44.633 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:44.633 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:29:44.633 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:44.633 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:44.633 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:44.892 /dev/nbd0 00:29:44.892 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:44.892 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:44.892 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:44.892 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:29:44.892 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:44.892 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:44.892 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:44.892 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:29:44.892 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:44.892 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:44.892 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:44.892 1+0 records in 00:29:44.892 1+0 records out 00:29:44.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595687 s, 6.9 MB/s 00:29:44.892 18:56:45 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:44.892 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:29:44.892 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:44.892 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:44.892 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:29:44.892 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:44.892 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:44.892 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:29:45.151 /dev/nbd1 00:29:45.151 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:45.151 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:45.151 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:29:45.151 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:29:45.151 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:45.151 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:45.151 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:29:45.151 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:29:45.151 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:45.151 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:45.151 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:45.151 1+0 records in 00:29:45.151 1+0 records out 00:29:45.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000661074 s, 6.2 MB/s 00:29:45.151 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:45.151 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:29:45.151 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:45.151 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:45.151 18:56:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:29:45.151 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:45.151 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:45.151 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:45.409 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:29:45.409 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:45.409 18:56:45 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:45.409 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:45.409 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:29:45.409 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:45.409 18:56:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:45.669 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:45.669 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:45.669 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:45.669 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:45.669 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:45.669 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:45.669 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:45.669 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:45.669 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:45.669 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:45.927 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:45.927 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:45.927 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:45.927 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:45.927 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:45.927 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:45.927 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:45.927 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:45.927 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:29:45.927 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:46.186 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:46.186 [2024-07-25 18:56:46.702024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:46.186 [2024-07-25 18:56:46.702131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:46.186 [2024-07-25 18:56:46.702180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:29:46.186 [2024-07-25 18:56:46.702213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:46.186 [2024-07-25 18:56:46.704934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:46.186 
[2024-07-25 18:56:46.705006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:46.186 [2024-07-25 18:56:46.705126] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:46.186 [2024-07-25 18:56:46.705196] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:46.186 [2024-07-25 18:56:46.705352] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:46.186 [2024-07-25 18:56:46.705442] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:46.186 spare 00:29:46.186 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:46.186 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:46.186 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:46.186 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:46.186 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:46.186 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:46.186 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:46.186 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:46.186 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:46.186 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:46.186 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:46.186 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.444 [2024-07-25 18:56:46.805523] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012d80 00:29:46.444 [2024-07-25 18:56:46.805552] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:46.444 [2024-07-25 18:56:46.805765] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ef0 00:29:46.444 [2024-07-25 18:56:46.806197] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012d80 00:29:46.445 [2024-07-25 18:56:46.806218] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012d80 00:29:46.445 [2024-07-25 18:56:46.806405] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:46.445 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:46.445 "name": "raid_bdev1", 00:29:46.445 "uuid": "c9112bf5-331a-4196-8cd2-eeaea89ebbd2", 00:29:46.445 "strip_size_kb": 0, 00:29:46.445 "state": "online", 00:29:46.445 "raid_level": "raid1", 00:29:46.445 "superblock": true, 00:29:46.445 "num_base_bdevs": 4, 00:29:46.445 "num_base_bdevs_discovered": 3, 00:29:46.445 "num_base_bdevs_operational": 3, 00:29:46.445 "base_bdevs_list": [ 00:29:46.445 { 00:29:46.445 "name": "spare", 00:29:46.445 "uuid": "aaa2b9d3-f64d-518b-bb7a-be15e4f83801", 00:29:46.445 "is_configured": true, 00:29:46.445 "data_offset": 2048, 00:29:46.445 "data_size": 63488 00:29:46.445 }, 00:29:46.445 { 
00:29:46.445 "name": null, 00:29:46.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:46.445 "is_configured": false, 00:29:46.445 "data_offset": 2048, 00:29:46.445 "data_size": 63488 00:29:46.445 }, 00:29:46.445 { 00:29:46.445 "name": "BaseBdev3", 00:29:46.445 "uuid": "6cac5b26-51da-5984-bfc7-1a6367a56386", 00:29:46.445 "is_configured": true, 00:29:46.445 "data_offset": 2048, 00:29:46.445 "data_size": 63488 00:29:46.445 }, 00:29:46.445 { 00:29:46.445 "name": "BaseBdev4", 00:29:46.445 "uuid": "3e2a08d0-8e81-5e9f-801a-4842f5c67911", 00:29:46.445 "is_configured": true, 00:29:46.445 "data_offset": 2048, 00:29:46.445 "data_size": 63488 00:29:46.445 } 00:29:46.445 ] 00:29:46.445 }' 00:29:46.445 18:56:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:46.445 18:56:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:47.013 18:56:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:47.013 18:56:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:47.013 18:56:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:47.013 18:56:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:47.013 18:56:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:47.013 18:56:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:47.013 18:56:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:47.272 18:56:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:47.272 "name": "raid_bdev1", 00:29:47.272 "uuid": "c9112bf5-331a-4196-8cd2-eeaea89ebbd2", 00:29:47.272 "strip_size_kb": 0, 00:29:47.272 "state": "online", 00:29:47.272 "raid_level": "raid1", 00:29:47.272 "superblock": true, 00:29:47.272 "num_base_bdevs": 4, 00:29:47.272 "num_base_bdevs_discovered": 3, 00:29:47.272 "num_base_bdevs_operational": 3, 00:29:47.272 "base_bdevs_list": [ 00:29:47.272 { 00:29:47.272 "name": "spare", 00:29:47.272 "uuid": "aaa2b9d3-f64d-518b-bb7a-be15e4f83801", 00:29:47.272 "is_configured": true, 00:29:47.272 "data_offset": 2048, 00:29:47.272 "data_size": 63488 00:29:47.272 }, 00:29:47.272 { 00:29:47.272 "name": null, 00:29:47.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:47.272 "is_configured": false, 00:29:47.272 "data_offset": 2048, 00:29:47.272 "data_size": 63488 00:29:47.272 }, 00:29:47.272 { 00:29:47.272 "name": "BaseBdev3", 00:29:47.272 "uuid": "6cac5b26-51da-5984-bfc7-1a6367a56386", 00:29:47.272 "is_configured": true, 00:29:47.272 "data_offset": 2048, 00:29:47.272 "data_size": 63488 00:29:47.272 }, 00:29:47.272 { 00:29:47.272 "name": "BaseBdev4", 00:29:47.272 "uuid": "3e2a08d0-8e81-5e9f-801a-4842f5c67911", 00:29:47.272 "is_configured": true, 00:29:47.272 "data_offset": 2048, 00:29:47.272 "data_size": 63488 00:29:47.272 } 00:29:47.272 ] 00:29:47.272 }' 00:29:47.272 18:56:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:47.272 18:56:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:47.272 18:56:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:47.272 18:56:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:47.272 18:56:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:47.272 18:56:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:47.531 18:56:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:29:47.531 18:56:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:47.790 [2024-07-25 18:56:48.194693] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:47.790 18:56:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:47.790 18:56:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:47.790 18:56:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:47.790 18:56:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:47.790 18:56:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:47.790 18:56:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:47.790 18:56:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:47.790 18:56:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:47.790 18:56:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:47.790 18:56:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:47.790 18:56:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:47.790 18:56:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:48.048 18:56:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:48.049 "name": "raid_bdev1", 00:29:48.049 "uuid": "c9112bf5-331a-4196-8cd2-eeaea89ebbd2", 00:29:48.049 "strip_size_kb": 0, 00:29:48.049 "state": "online", 00:29:48.049 "raid_level": "raid1", 00:29:48.049 "superblock": true, 00:29:48.049 "num_base_bdevs": 4, 00:29:48.049 "num_base_bdevs_discovered": 2, 00:29:48.049 "num_base_bdevs_operational": 2, 00:29:48.049 "base_bdevs_list": [ 00:29:48.049 { 00:29:48.049 "name": null, 00:29:48.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.049 "is_configured": false, 00:29:48.049 "data_offset": 2048, 00:29:48.049 "data_size": 63488 00:29:48.049 }, 00:29:48.049 { 00:29:48.049 "name": null, 00:29:48.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.049 "is_configured": false, 00:29:48.049 "data_offset": 2048, 00:29:48.049 "data_size": 63488 00:29:48.049 }, 00:29:48.049 { 00:29:48.049 "name": "BaseBdev3", 00:29:48.049 "uuid": "6cac5b26-51da-5984-bfc7-1a6367a56386", 00:29:48.049 "is_configured": true, 00:29:48.049 "data_offset": 2048, 00:29:48.049 "data_size": 63488 00:29:48.049 }, 00:29:48.049 { 00:29:48.049 "name": "BaseBdev4", 00:29:48.049 "uuid": "3e2a08d0-8e81-5e9f-801a-4842f5c67911", 00:29:48.049 "is_configured": true, 00:29:48.049 "data_offset": 2048, 00:29:48.049 "data_size": 
63488 00:29:48.049 } 00:29:48.049 ] 00:29:48.049 }' 00:29:48.049 18:56:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:48.049 18:56:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:48.615 18:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:48.874 [2024-07-25 18:56:49.218914] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:48.874 [2024-07-25 18:56:49.219151] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:48.874 [2024-07-25 18:56:49.219164] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:48.874 [2024-07-25 18:56:49.219238] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:48.874 [2024-07-25 18:56:49.235663] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2090 00:29:48.874 [2024-07-25 18:56:49.237991] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:48.874 18:56:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:29:49.811 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:49.811 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:49.811 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:49.811 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:49.811 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:49.811 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:49.811 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:50.070 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:50.070 "name": "raid_bdev1", 00:29:50.070 "uuid": "c9112bf5-331a-4196-8cd2-eeaea89ebbd2", 00:29:50.070 "strip_size_kb": 0, 00:29:50.070 "state": "online", 00:29:50.070 "raid_level": "raid1", 00:29:50.070 "superblock": true, 00:29:50.070 "num_base_bdevs": 4, 00:29:50.070 "num_base_bdevs_discovered": 3, 00:29:50.070 "num_base_bdevs_operational": 3, 00:29:50.070 "process": { 00:29:50.070 "type": "rebuild", 00:29:50.070 "target": "spare", 00:29:50.070 "progress": { 00:29:50.070 "blocks": 22528, 00:29:50.070 "percent": 35 00:29:50.070 } 00:29:50.070 }, 00:29:50.070 "base_bdevs_list": [ 00:29:50.070 { 00:29:50.070 "name": "spare", 00:29:50.070 "uuid": "aaa2b9d3-f64d-518b-bb7a-be15e4f83801", 00:29:50.070 "is_configured": true, 00:29:50.070 "data_offset": 2048, 00:29:50.070 "data_size": 63488 00:29:50.070 }, 00:29:50.070 { 00:29:50.070 "name": null, 00:29:50.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:50.070 "is_configured": false, 00:29:50.070 "data_offset": 2048, 00:29:50.070 "data_size": 63488 00:29:50.070 }, 00:29:50.070 { 00:29:50.070 "name": "BaseBdev3", 00:29:50.070 "uuid": "6cac5b26-51da-5984-bfc7-1a6367a56386", 00:29:50.070 "is_configured": true, 00:29:50.070 "data_offset": 2048, 
00:29:50.070 "data_size": 63488 00:29:50.070 }, 00:29:50.070 { 00:29:50.070 "name": "BaseBdev4", 00:29:50.070 "uuid": "3e2a08d0-8e81-5e9f-801a-4842f5c67911", 00:29:50.070 "is_configured": true, 00:29:50.070 "data_offset": 2048, 00:29:50.070 "data_size": 63488 00:29:50.070 } 00:29:50.070 ] 00:29:50.070 }' 00:29:50.070 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:50.070 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:50.070 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:50.070 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:50.070 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:50.329 [2024-07-25 18:56:50.759737] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:50.329 [2024-07-25 18:56:50.849691] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:50.329 [2024-07-25 18:56:50.849797] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:50.329 [2024-07-25 18:56:50.849829] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:50.329 [2024-07-25 18:56:50.849837] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:50.329 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:50.329 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:50.329 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:50.329 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:50.329 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:50.329 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:50.329 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:50.329 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:50.329 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:50.329 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:50.329 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:50.329 18:56:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:50.588 18:56:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:50.588 "name": "raid_bdev1", 00:29:50.588 "uuid": "c9112bf5-331a-4196-8cd2-eeaea89ebbd2", 00:29:50.588 "strip_size_kb": 0, 00:29:50.588 "state": "online", 00:29:50.588 "raid_level": "raid1", 00:29:50.588 "superblock": true, 00:29:50.588 "num_base_bdevs": 4, 00:29:50.588 "num_base_bdevs_discovered": 2, 00:29:50.588 "num_base_bdevs_operational": 2, 00:29:50.588 "base_bdevs_list": [ 00:29:50.588 { 00:29:50.588 "name": null, 00:29:50.588 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:29:50.588 "is_configured": false, 00:29:50.588 "data_offset": 2048, 00:29:50.588 "data_size": 63488 00:29:50.588 }, 00:29:50.588 { 00:29:50.588 "name": null, 00:29:50.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:50.588 "is_configured": false, 00:29:50.588 "data_offset": 2048, 00:29:50.588 "data_size": 63488 00:29:50.588 }, 00:29:50.588 { 00:29:50.588 "name": "BaseBdev3", 00:29:50.588 "uuid": "6cac5b26-51da-5984-bfc7-1a6367a56386", 00:29:50.588 "is_configured": true, 00:29:50.588 "data_offset": 2048, 00:29:50.588 "data_size": 63488 00:29:50.588 }, 00:29:50.588 { 00:29:50.588 "name": "BaseBdev4", 00:29:50.588 "uuid": "3e2a08d0-8e81-5e9f-801a-4842f5c67911", 00:29:50.588 "is_configured": true, 00:29:50.588 "data_offset": 2048, 00:29:50.588 "data_size": 63488 00:29:50.588 } 00:29:50.588 ] 00:29:50.588 }' 00:29:50.588 18:56:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:50.588 18:56:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:51.156 18:56:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:51.415 [2024-07-25 18:56:51.892132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:51.415 [2024-07-25 18:56:51.892252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:51.415 [2024-07-25 18:56:51.892300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:29:51.415 [2024-07-25 18:56:51.892322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:51.415 [2024-07-25 18:56:51.892916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:51.415 [2024-07-25 18:56:51.892959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:51.415 [2024-07-25 18:56:51.893096] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:51.415 [2024-07-25 18:56:51.893109] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:51.415 [2024-07-25 18:56:51.893117] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:29:51.415 [2024-07-25 18:56:51.893150] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:51.415 [2024-07-25 18:56:51.909571] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc23d0 00:29:51.415 spare 00:29:51.415 [2024-07-25 18:56:51.911931] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:51.415 18:56:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:29:52.352 18:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:52.352 18:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:52.352 18:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:52.352 18:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:52.352 18:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:52.611 18:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:52.611 18:56:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:52.611 18:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:52.611 "name": "raid_bdev1", 00:29:52.611 "uuid": "c9112bf5-331a-4196-8cd2-eeaea89ebbd2", 00:29:52.611 "strip_size_kb": 0, 00:29:52.611 "state": "online", 00:29:52.611 "raid_level": "raid1", 00:29:52.611 "superblock": true, 00:29:52.611 "num_base_bdevs": 4, 00:29:52.611 "num_base_bdevs_discovered": 3, 00:29:52.611 "num_base_bdevs_operational": 3, 00:29:52.611 "process": { 00:29:52.611 "type": "rebuild", 00:29:52.611 "target": "spare", 00:29:52.611 "progress": { 00:29:52.611 "blocks": 22528, 00:29:52.611 "percent": 35 00:29:52.611 } 00:29:52.611 }, 00:29:52.611 "base_bdevs_list": [ 00:29:52.611 { 00:29:52.611 "name": "spare", 00:29:52.611 "uuid": "aaa2b9d3-f64d-518b-bb7a-be15e4f83801", 00:29:52.611 "is_configured": true, 00:29:52.611 "data_offset": 2048, 00:29:52.611 "data_size": 63488 00:29:52.611 }, 00:29:52.611 { 00:29:52.611 "name": null, 00:29:52.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:52.611 "is_configured": false, 00:29:52.611 "data_offset": 2048, 00:29:52.611 "data_size": 63488 00:29:52.611 }, 00:29:52.611 { 00:29:52.611 "name": "BaseBdev3", 00:29:52.611 "uuid": "6cac5b26-51da-5984-bfc7-1a6367a56386", 00:29:52.611 "is_configured": true, 00:29:52.611 "data_offset": 2048, 00:29:52.611 "data_size": 63488 00:29:52.611 }, 00:29:52.611 { 00:29:52.611 "name": "BaseBdev4", 00:29:52.611 "uuid": "3e2a08d0-8e81-5e9f-801a-4842f5c67911", 00:29:52.611 "is_configured": true, 00:29:52.611 "data_offset": 2048, 00:29:52.611 "data_size": 63488 00:29:52.611 } 00:29:52.611 ] 00:29:52.611 }' 00:29:52.611 18:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:52.611 18:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:52.611 18:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:52.870 18:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:52.870 18:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:52.870 [2024-07-25 18:56:53.350304] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:52.870 [2024-07-25 18:56:53.423718] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:52.870 [2024-07-25 18:56:53.423792] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:52.870 [2024-07-25 18:56:53.423825] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:52.870 [2024-07-25 18:56:53.423832] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:53.129 18:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:53.129 18:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:53.129 18:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:53.129 18:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:53.129 18:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:53.129 18:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:53.129 18:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:53.129 18:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:53.129 18:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:53.129 18:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:53.129 18:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:53.129 18:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:53.387 18:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:53.387 "name": "raid_bdev1", 00:29:53.387 "uuid": "c9112bf5-331a-4196-8cd2-eeaea89ebbd2", 00:29:53.387 "strip_size_kb": 0, 00:29:53.387 "state": "online", 00:29:53.387 "raid_level": "raid1", 00:29:53.387 "superblock": true, 00:29:53.387 "num_base_bdevs": 4, 00:29:53.387 "num_base_bdevs_discovered": 2, 00:29:53.387 "num_base_bdevs_operational": 2, 00:29:53.387 "base_bdevs_list": [ 00:29:53.387 { 00:29:53.387 "name": null, 00:29:53.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:53.387 "is_configured": false, 00:29:53.387 "data_offset": 2048, 00:29:53.387 "data_size": 63488 00:29:53.387 }, 00:29:53.387 { 00:29:53.387 "name": null, 00:29:53.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:53.387 "is_configured": false, 00:29:53.387 "data_offset": 2048, 00:29:53.387 "data_size": 63488 00:29:53.387 }, 00:29:53.387 { 00:29:53.387 "name": "BaseBdev3", 00:29:53.387 "uuid": "6cac5b26-51da-5984-bfc7-1a6367a56386", 00:29:53.387 "is_configured": true, 00:29:53.387 "data_offset": 2048, 00:29:53.387 "data_size": 63488 00:29:53.387 }, 00:29:53.387 { 00:29:53.387 "name": "BaseBdev4", 00:29:53.387 "uuid": "3e2a08d0-8e81-5e9f-801a-4842f5c67911", 00:29:53.387 "is_configured": true, 00:29:53.387 "data_offset": 2048, 00:29:53.387 "data_size": 63488 00:29:53.387 } 00:29:53.387 ] 00:29:53.387 }' 
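The xtrace above captures the verify_raid_bdev_state helper: it pulls the full bdev list over the spdk-raid.sock RPC socket, isolates raid_bdev1 with jq, and then checks it against the expected state, level, and base-bdev counts set up in the local variables. A condensed sketch of that check is below; the trace does not show the helper's field-by-field comparisons, so those assertions are an assumption.

    # Hedged sketch of the verification pattern traced above (helper internals assumed).
    raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state' <<< "$raid_bdev_info") == online ]]
    [[ $(jq -r '.raid_level' <<< "$raid_bdev_info") == raid1 ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info") == 2 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info") == 2 ]]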
00:29:53.387 18:56:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:53.387 18:56:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:53.646 18:56:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:53.646 18:56:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:53.646 18:56:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:53.646 18:56:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:53.646 18:56:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:53.646 18:56:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:53.646 18:56:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:53.905 18:56:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:53.905 "name": "raid_bdev1", 00:29:53.905 "uuid": "c9112bf5-331a-4196-8cd2-eeaea89ebbd2", 00:29:53.905 "strip_size_kb": 0, 00:29:53.905 "state": "online", 00:29:53.905 "raid_level": "raid1", 00:29:53.905 "superblock": true, 00:29:53.905 "num_base_bdevs": 4, 00:29:53.905 "num_base_bdevs_discovered": 2, 00:29:53.905 "num_base_bdevs_operational": 2, 00:29:53.905 "base_bdevs_list": [ 00:29:53.905 { 00:29:53.905 "name": null, 00:29:53.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:53.905 "is_configured": false, 00:29:53.905 "data_offset": 2048, 00:29:53.905 "data_size": 63488 00:29:53.905 }, 00:29:53.905 { 00:29:53.905 "name": null, 00:29:53.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:53.905 "is_configured": false, 00:29:53.905 "data_offset": 2048, 00:29:53.905 "data_size": 63488 00:29:53.905 }, 00:29:53.905 { 00:29:53.905 "name": "BaseBdev3", 00:29:53.905 "uuid": "6cac5b26-51da-5984-bfc7-1a6367a56386", 00:29:53.905 "is_configured": true, 00:29:53.905 "data_offset": 2048, 00:29:53.905 "data_size": 63488 00:29:53.905 }, 00:29:53.905 { 00:29:53.905 "name": "BaseBdev4", 00:29:53.905 "uuid": "3e2a08d0-8e81-5e9f-801a-4842f5c67911", 00:29:53.905 "is_configured": true, 00:29:53.905 "data_offset": 2048, 00:29:53.905 "data_size": 63488 00:29:53.905 } 00:29:53.905 ] 00:29:53.905 }' 00:29:53.905 18:56:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:53.905 18:56:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:53.905 18:56:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:53.905 18:56:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:53.905 18:56:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:29:54.164 18:56:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:54.423 [2024-07-25 18:56:54.977682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:54.423 [2024-07-25 18:56:54.977804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:29:54.423 [2024-07-25 18:56:54.977852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:29:54.423 [2024-07-25 18:56:54.977874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:54.423 [2024-07-25 18:56:54.978414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:54.423 [2024-07-25 18:56:54.978452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:54.423 [2024-07-25 18:56:54.978581] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:54.423 [2024-07-25 18:56:54.978594] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:29:54.423 [2024-07-25 18:56:54.978603] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:54.423 BaseBdev1 00:29:54.423 18:56:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:29:55.800 18:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:55.800 18:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:55.800 18:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:55.800 18:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:55.800 18:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:55.800 18:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:55.800 18:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:55.800 18:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:55.800 18:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:55.800 18:56:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:55.800 18:56:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:55.800 18:56:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:55.800 18:56:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:55.800 "name": "raid_bdev1", 00:29:55.800 "uuid": "c9112bf5-331a-4196-8cd2-eeaea89ebbd2", 00:29:55.800 "strip_size_kb": 0, 00:29:55.800 "state": "online", 00:29:55.800 "raid_level": "raid1", 00:29:55.800 "superblock": true, 00:29:55.800 "num_base_bdevs": 4, 00:29:55.800 "num_base_bdevs_discovered": 2, 00:29:55.800 "num_base_bdevs_operational": 2, 00:29:55.800 "base_bdevs_list": [ 00:29:55.800 { 00:29:55.800 "name": null, 00:29:55.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:55.800 "is_configured": false, 00:29:55.800 "data_offset": 2048, 00:29:55.800 "data_size": 63488 00:29:55.800 }, 00:29:55.800 { 00:29:55.800 "name": null, 00:29:55.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:55.800 "is_configured": false, 00:29:55.800 "data_offset": 2048, 00:29:55.800 "data_size": 63488 00:29:55.800 }, 00:29:55.800 { 00:29:55.800 "name": "BaseBdev3", 00:29:55.800 "uuid": "6cac5b26-51da-5984-bfc7-1a6367a56386", 00:29:55.800 "is_configured": 
true, 00:29:55.800 "data_offset": 2048, 00:29:55.800 "data_size": 63488 00:29:55.800 }, 00:29:55.800 { 00:29:55.800 "name": "BaseBdev4", 00:29:55.800 "uuid": "3e2a08d0-8e81-5e9f-801a-4842f5c67911", 00:29:55.800 "is_configured": true, 00:29:55.800 "data_offset": 2048, 00:29:55.800 "data_size": 63488 00:29:55.800 } 00:29:55.800 ] 00:29:55.800 }' 00:29:55.800 18:56:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:55.800 18:56:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:56.368 18:56:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:56.368 18:56:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:56.368 18:56:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:56.368 18:56:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:56.368 18:56:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:56.368 18:56:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:56.368 18:56:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:56.627 18:56:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:56.627 "name": "raid_bdev1", 00:29:56.627 "uuid": "c9112bf5-331a-4196-8cd2-eeaea89ebbd2", 00:29:56.627 "strip_size_kb": 0, 00:29:56.627 "state": "online", 00:29:56.627 "raid_level": "raid1", 00:29:56.627 "superblock": true, 00:29:56.627 "num_base_bdevs": 4, 00:29:56.627 "num_base_bdevs_discovered": 2, 00:29:56.627 "num_base_bdevs_operational": 2, 00:29:56.627 "base_bdevs_list": [ 00:29:56.627 { 00:29:56.627 "name": null, 00:29:56.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:56.627 "is_configured": false, 00:29:56.627 "data_offset": 2048, 00:29:56.627 "data_size": 63488 00:29:56.627 }, 00:29:56.627 { 00:29:56.627 "name": null, 00:29:56.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:56.627 "is_configured": false, 00:29:56.627 "data_offset": 2048, 00:29:56.627 "data_size": 63488 00:29:56.627 }, 00:29:56.627 { 00:29:56.627 "name": "BaseBdev3", 00:29:56.627 "uuid": "6cac5b26-51da-5984-bfc7-1a6367a56386", 00:29:56.627 "is_configured": true, 00:29:56.627 "data_offset": 2048, 00:29:56.627 "data_size": 63488 00:29:56.627 }, 00:29:56.627 { 00:29:56.627 "name": "BaseBdev4", 00:29:56.627 "uuid": "3e2a08d0-8e81-5e9f-801a-4842f5c67911", 00:29:56.627 "is_configured": true, 00:29:56.627 "data_offset": 2048, 00:29:56.627 "data_size": 63488 00:29:56.627 } 00:29:56.627 ] 00:29:56.627 }' 00:29:56.627 18:56:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:56.627 18:56:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:56.627 18:56:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:56.627 18:56:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:56.627 18:56:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:56.627 18:56:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 
-- # local es=0 00:29:56.627 18:56:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:56.627 18:56:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:56.627 18:56:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:56.627 18:56:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:56.627 18:56:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:56.627 18:56:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:56.627 18:56:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:56.627 18:56:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:56.627 18:56:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:29:56.627 18:56:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:56.886 [2024-07-25 18:56:57.246418] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:56.886 [2024-07-25 18:56:57.246603] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:29:56.886 [2024-07-25 18:56:57.246615] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:56.886 request: 00:29:56.886 { 00:29:56.886 "base_bdev": "BaseBdev1", 00:29:56.886 "raid_bdev": "raid_bdev1", 00:29:56.886 "method": "bdev_raid_add_base_bdev", 00:29:56.886 "req_id": 1 00:29:56.886 } 00:29:56.886 Got JSON-RPC error response 00:29:56.886 response: 00:29:56.886 { 00:29:56.886 "code": -22, 00:29:56.886 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:56.886 } 00:29:56.886 18:56:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:29:56.886 18:56:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:56.886 18:56:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:56.886 18:56:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:56.886 18:56:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:29:57.823 18:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:57.823 18:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:57.823 18:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:57.823 18:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:57.823 18:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:57.823 18:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:29:57.823 18:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:57.823 18:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:57.823 18:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:57.823 18:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:57.823 18:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:57.823 18:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:58.082 18:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:58.082 "name": "raid_bdev1", 00:29:58.082 "uuid": "c9112bf5-331a-4196-8cd2-eeaea89ebbd2", 00:29:58.082 "strip_size_kb": 0, 00:29:58.082 "state": "online", 00:29:58.082 "raid_level": "raid1", 00:29:58.082 "superblock": true, 00:29:58.082 "num_base_bdevs": 4, 00:29:58.082 "num_base_bdevs_discovered": 2, 00:29:58.082 "num_base_bdevs_operational": 2, 00:29:58.082 "base_bdevs_list": [ 00:29:58.082 { 00:29:58.082 "name": null, 00:29:58.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:58.082 "is_configured": false, 00:29:58.082 "data_offset": 2048, 00:29:58.082 "data_size": 63488 00:29:58.082 }, 00:29:58.082 { 00:29:58.082 "name": null, 00:29:58.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:58.082 "is_configured": false, 00:29:58.082 "data_offset": 2048, 00:29:58.082 "data_size": 63488 00:29:58.082 }, 00:29:58.082 { 00:29:58.082 "name": "BaseBdev3", 00:29:58.082 "uuid": "6cac5b26-51da-5984-bfc7-1a6367a56386", 00:29:58.082 "is_configured": true, 00:29:58.082 "data_offset": 2048, 00:29:58.082 "data_size": 63488 00:29:58.082 }, 00:29:58.082 { 00:29:58.082 "name": "BaseBdev4", 00:29:58.082 "uuid": "3e2a08d0-8e81-5e9f-801a-4842f5c67911", 00:29:58.082 "is_configured": true, 00:29:58.082 "data_offset": 2048, 00:29:58.082 "data_size": 63488 00:29:58.082 } 00:29:58.082 ] 00:29:58.082 }' 00:29:58.082 18:56:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:58.082 18:56:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:58.650 18:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:58.650 18:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:58.650 18:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:58.650 18:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:58.650 18:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:58.650 18:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:58.650 18:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:58.909 18:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:58.909 "name": "raid_bdev1", 00:29:58.909 "uuid": "c9112bf5-331a-4196-8cd2-eeaea89ebbd2", 00:29:58.909 "strip_size_kb": 0, 00:29:58.909 "state": "online", 00:29:58.909 "raid_level": "raid1", 00:29:58.909 "superblock": 
true, 00:29:58.909 "num_base_bdevs": 4, 00:29:58.909 "num_base_bdevs_discovered": 2, 00:29:58.909 "num_base_bdevs_operational": 2, 00:29:58.909 "base_bdevs_list": [ 00:29:58.909 { 00:29:58.909 "name": null, 00:29:58.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:58.909 "is_configured": false, 00:29:58.909 "data_offset": 2048, 00:29:58.909 "data_size": 63488 00:29:58.909 }, 00:29:58.909 { 00:29:58.909 "name": null, 00:29:58.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:58.909 "is_configured": false, 00:29:58.909 "data_offset": 2048, 00:29:58.909 "data_size": 63488 00:29:58.909 }, 00:29:58.909 { 00:29:58.909 "name": "BaseBdev3", 00:29:58.909 "uuid": "6cac5b26-51da-5984-bfc7-1a6367a56386", 00:29:58.909 "is_configured": true, 00:29:58.909 "data_offset": 2048, 00:29:58.909 "data_size": 63488 00:29:58.909 }, 00:29:58.909 { 00:29:58.909 "name": "BaseBdev4", 00:29:58.909 "uuid": "3e2a08d0-8e81-5e9f-801a-4842f5c67911", 00:29:58.909 "is_configured": true, 00:29:58.909 "data_offset": 2048, 00:29:58.909 "data_size": 63488 00:29:58.909 } 00:29:58.909 ] 00:29:58.909 }' 00:29:58.909 18:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:58.909 18:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:58.909 18:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:58.909 18:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:58.909 18:56:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 146772 00:29:58.909 18:56:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 146772 ']' 00:29:58.909 18:56:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 146772 00:29:58.909 18:56:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:29:58.909 18:56:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:58.909 18:56:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 146772 00:29:58.909 18:56:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:58.909 18:56:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:58.909 killing process with pid 146772 00:29:58.909 18:56:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 146772' 00:29:58.909 Received shutdown signal, test time was about 60.000000 seconds 00:29:58.909 00:29:58.910 Latency(us) 00:29:58.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.910 =================================================================================================================== 00:29:58.910 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:58.910 18:56:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 146772 00:29:58.910 [2024-07-25 18:56:59.368135] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:58.910 18:56:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 146772 00:29:58.910 [2024-07-25 18:56:59.368272] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:58.910 [2024-07-25 18:56:59.368349] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:29:58.910 [2024-07-25 18:56:59.368358] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state offline 00:29:59.479 [2024-07-25 18:56:59.897066] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:30:00.857 00:30:00.857 real 0m36.529s 00:30:00.857 user 0m52.267s 00:30:00.857 sys 0m6.165s 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:00.857 ************************************ 00:30:00.857 END TEST raid_rebuild_test_sb 00:30:00.857 ************************************ 00:30:00.857 18:57:01 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:30:00.857 18:57:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:30:00.857 18:57:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:00.857 18:57:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:00.857 ************************************ 00:30:00.857 START TEST raid_rebuild_test_io 00:30:00.857 ************************************ 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # echo BaseBdev1 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # echo BaseBdev2 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # echo BaseBdev3 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # echo BaseBdev4 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 
00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # raid_pid=147713 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 147713 /var/tmp/spdk-raid.sock 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 147713 ']' 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:00.857 18:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:00.858 18:57:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:00.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:00.858 18:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:00.858 18:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:00.858 18:57:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:01.117 [2024-07-25 18:57:01.493100] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:30:01.117 [2024-07-25 18:57:01.493918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147713 ] 00:30:01.117 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:01.117 Zero copy mechanism will not be used. 
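The prologue traced above starts the raid_rebuild_test_io case: bdevperf is launched against /var/tmp/spdk-raid.sock with a 60-second random read/write workload (-w randrw -M 50) of 3M I/Os (-o 3M, matching the 3145728-byte zero-copy notice) at queue depth 2, started idle with -z so the RAID bdev can be assembled first, and the script blocks on waitforlisten until the RPC socket answers. A minimal sketch of that startup; the command line is copied from the xtrace, while the backgrounding and pid capture are assumed from the waitforlisten call that follows.

    # Sketch only: bdevperf command copied from the trace, '&' and pid capture assumed.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock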
00:30:01.117 [2024-07-25 18:57:01.657542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.376 [2024-07-25 18:57:01.894271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:01.635 [2024-07-25 18:57:02.166598] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:01.894 18:57:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:01.894 18:57:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:30:01.894 18:57:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:01.894 18:57:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:02.152 BaseBdev1_malloc 00:30:02.152 18:57:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:02.411 [2024-07-25 18:57:02.811109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:02.411 [2024-07-25 18:57:02.811236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:02.411 [2024-07-25 18:57:02.811277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:30:02.411 [2024-07-25 18:57:02.811300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:02.411 [2024-07-25 18:57:02.814181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:02.411 [2024-07-25 18:57:02.814246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:02.411 BaseBdev1 00:30:02.411 18:57:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:02.411 18:57:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:02.670 BaseBdev2_malloc 00:30:02.670 18:57:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:02.933 [2024-07-25 18:57:03.303212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:02.933 [2024-07-25 18:57:03.303333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:02.933 [2024-07-25 18:57:03.303390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:30:02.933 [2024-07-25 18:57:03.303412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:02.933 [2024-07-25 18:57:03.306081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:02.933 [2024-07-25 18:57:03.306148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:02.933 BaseBdev2 00:30:02.933 18:57:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:02.933 18:57:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:03.195 BaseBdev3_malloc 00:30:03.195 18:57:03 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:30:03.453 [2024-07-25 18:57:03.935501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:03.454 [2024-07-25 18:57:03.935587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:03.454 [2024-07-25 18:57:03.935638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:30:03.454 [2024-07-25 18:57:03.935666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:03.454 [2024-07-25 18:57:03.938207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:03.454 [2024-07-25 18:57:03.938276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:03.454 BaseBdev3 00:30:03.454 18:57:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:03.454 18:57:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:03.712 BaseBdev4_malloc 00:30:03.712 18:57:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:03.973 [2024-07-25 18:57:04.464240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:03.973 [2024-07-25 18:57:04.464360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:03.973 [2024-07-25 18:57:04.464401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:30:03.973 [2024-07-25 18:57:04.464434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:03.973 [2024-07-25 18:57:04.467064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:03.973 [2024-07-25 18:57:04.467120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:03.973 BaseBdev4 00:30:03.973 18:57:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:30:04.271 spare_malloc 00:30:04.271 18:57:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:04.554 spare_delay 00:30:04.554 18:57:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:04.554 [2024-07-25 18:57:05.104055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:04.554 [2024-07-25 18:57:05.104176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:04.554 [2024-07-25 18:57:05.104211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:30:04.554 [2024-07-25 18:57:05.104245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:04.554 [2024-07-25 18:57:05.106926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:04.554 [2024-07-25 
18:57:05.106997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:04.554 spare 00:30:04.554 18:57:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:30:04.813 [2024-07-25 18:57:05.280132] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:04.813 [2024-07-25 18:57:05.282390] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:04.813 [2024-07-25 18:57:05.282479] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:04.813 [2024-07-25 18:57:05.282541] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:04.813 [2024-07-25 18:57:05.282635] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:30:04.813 [2024-07-25 18:57:05.282644] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:30:04.813 [2024-07-25 18:57:05.282812] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:30:04.813 [2024-07-25 18:57:05.283183] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:30:04.813 [2024-07-25 18:57:05.283204] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:30:04.813 [2024-07-25 18:57:05.283385] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:04.813 18:57:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:04.813 18:57:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:04.813 18:57:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:04.813 18:57:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:04.813 18:57:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:04.813 18:57:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:04.813 18:57:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:04.813 18:57:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:04.813 18:57:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:04.813 18:57:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:04.813 18:57:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:04.813 18:57:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:05.072 18:57:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:05.072 "name": "raid_bdev1", 00:30:05.072 "uuid": "702a314c-d7ed-4f98-b38e-65371d78984c", 00:30:05.072 "strip_size_kb": 0, 00:30:05.072 "state": "online", 00:30:05.072 "raid_level": "raid1", 00:30:05.072 "superblock": false, 00:30:05.072 "num_base_bdevs": 4, 00:30:05.072 "num_base_bdevs_discovered": 4, 00:30:05.072 "num_base_bdevs_operational": 4, 00:30:05.072 "base_bdevs_list": [ 00:30:05.072 { 
00:30:05.072 "name": "BaseBdev1", 00:30:05.072 "uuid": "c5a233c6-8767-558a-9c56-3259ddc22272", 00:30:05.072 "is_configured": true, 00:30:05.072 "data_offset": 0, 00:30:05.072 "data_size": 65536 00:30:05.072 }, 00:30:05.072 { 00:30:05.072 "name": "BaseBdev2", 00:30:05.072 "uuid": "6ca65000-5c95-59a9-9068-4af26bc61bb9", 00:30:05.072 "is_configured": true, 00:30:05.072 "data_offset": 0, 00:30:05.072 "data_size": 65536 00:30:05.072 }, 00:30:05.072 { 00:30:05.072 "name": "BaseBdev3", 00:30:05.072 "uuid": "be1eaf69-748a-5519-9ae4-6be603d19aa6", 00:30:05.072 "is_configured": true, 00:30:05.072 "data_offset": 0, 00:30:05.072 "data_size": 65536 00:30:05.072 }, 00:30:05.072 { 00:30:05.072 "name": "BaseBdev4", 00:30:05.072 "uuid": "d3c43e39-42b9-5a65-9757-584b166fb20d", 00:30:05.072 "is_configured": true, 00:30:05.072 "data_offset": 0, 00:30:05.072 "data_size": 65536 00:30:05.072 } 00:30:05.072 ] 00:30:05.072 }' 00:30:05.072 18:57:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:05.072 18:57:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:05.639 18:57:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:05.639 18:57:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:30:05.897 [2024-07-25 18:57:06.268539] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:05.897 18:57:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:30:05.897 18:57:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:05.897 18:57:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:06.155 18:57:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:30:06.155 18:57:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:30:06.155 18:57:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:06.155 18:57:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@638 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:30:06.155 [2024-07-25 18:57:06.659075] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:30:06.155 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:06.155 Zero copy mechanism will not be used. 00:30:06.155 Running I/O for 60 seconds... 
00:30:06.414 [2024-07-25 18:57:06.780382] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:06.414 [2024-07-25 18:57:06.786001] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006150 00:30:06.414 18:57:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:06.414 18:57:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:06.414 18:57:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:06.414 18:57:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:06.414 18:57:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:06.414 18:57:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:06.414 18:57:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:06.414 18:57:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:06.414 18:57:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:06.414 18:57:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:06.414 18:57:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:06.414 18:57:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:06.673 18:57:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:06.673 "name": "raid_bdev1", 00:30:06.673 "uuid": "702a314c-d7ed-4f98-b38e-65371d78984c", 00:30:06.673 "strip_size_kb": 0, 00:30:06.673 "state": "online", 00:30:06.673 "raid_level": "raid1", 00:30:06.673 "superblock": false, 00:30:06.673 "num_base_bdevs": 4, 00:30:06.673 "num_base_bdevs_discovered": 3, 00:30:06.673 "num_base_bdevs_operational": 3, 00:30:06.673 "base_bdevs_list": [ 00:30:06.673 { 00:30:06.673 "name": null, 00:30:06.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:06.673 "is_configured": false, 00:30:06.673 "data_offset": 0, 00:30:06.673 "data_size": 65536 00:30:06.673 }, 00:30:06.673 { 00:30:06.673 "name": "BaseBdev2", 00:30:06.673 "uuid": "6ca65000-5c95-59a9-9068-4af26bc61bb9", 00:30:06.673 "is_configured": true, 00:30:06.673 "data_offset": 0, 00:30:06.673 "data_size": 65536 00:30:06.673 }, 00:30:06.673 { 00:30:06.673 "name": "BaseBdev3", 00:30:06.673 "uuid": "be1eaf69-748a-5519-9ae4-6be603d19aa6", 00:30:06.673 "is_configured": true, 00:30:06.673 "data_offset": 0, 00:30:06.673 "data_size": 65536 00:30:06.673 }, 00:30:06.673 { 00:30:06.673 "name": "BaseBdev4", 00:30:06.673 "uuid": "d3c43e39-42b9-5a65-9757-584b166fb20d", 00:30:06.673 "is_configured": true, 00:30:06.673 "data_offset": 0, 00:30:06.673 "data_size": 65536 00:30:06.673 } 00:30:06.673 ] 00:30:06.673 }' 00:30:06.673 18:57:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:06.673 18:57:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:07.240 18:57:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:07.498 [2024-07-25 18:57:07.934614] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:30:07.498 [2024-07-25 18:57:07.985253] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:30:07.498 18:57:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:07.498 [2024-07-25 18:57:07.987415] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:07.756 [2024-07-25 18:57:08.111573] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:07.756 [2024-07-25 18:57:08.113089] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:07.756 [2024-07-25 18:57:08.321052] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:07.756 [2024-07-25 18:57:08.321871] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:08.323 [2024-07-25 18:57:08.693580] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:08.323 [2024-07-25 18:57:08.694251] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:08.323 [2024-07-25 18:57:08.822222] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:08.580 18:57:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:08.580 18:57:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:08.580 18:57:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:08.580 18:57:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:08.580 18:57:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:08.580 18:57:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:08.580 18:57:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:08.581 [2024-07-25 18:57:09.058686] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:08.849 [2024-07-25 18:57:09.168387] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:08.849 [2024-07-25 18:57:09.168762] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:08.849 18:57:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:08.849 "name": "raid_bdev1", 00:30:08.849 "uuid": "702a314c-d7ed-4f98-b38e-65371d78984c", 00:30:08.849 "strip_size_kb": 0, 00:30:08.849 "state": "online", 00:30:08.849 "raid_level": "raid1", 00:30:08.849 "superblock": false, 00:30:08.849 "num_base_bdevs": 4, 00:30:08.849 "num_base_bdevs_discovered": 4, 00:30:08.849 "num_base_bdevs_operational": 4, 00:30:08.849 "process": { 00:30:08.849 "type": "rebuild", 00:30:08.849 "target": "spare", 00:30:08.849 "progress": { 00:30:08.849 "blocks": 16384, 00:30:08.849 "percent": 25 00:30:08.850 } 00:30:08.850 }, 00:30:08.850 "base_bdevs_list": [ 
00:30:08.850 { 00:30:08.850 "name": "spare", 00:30:08.850 "uuid": "fcdf7998-14c3-56f6-947d-977ee7ca397d", 00:30:08.850 "is_configured": true, 00:30:08.850 "data_offset": 0, 00:30:08.850 "data_size": 65536 00:30:08.850 }, 00:30:08.850 { 00:30:08.850 "name": "BaseBdev2", 00:30:08.850 "uuid": "6ca65000-5c95-59a9-9068-4af26bc61bb9", 00:30:08.850 "is_configured": true, 00:30:08.850 "data_offset": 0, 00:30:08.850 "data_size": 65536 00:30:08.850 }, 00:30:08.850 { 00:30:08.850 "name": "BaseBdev3", 00:30:08.850 "uuid": "be1eaf69-748a-5519-9ae4-6be603d19aa6", 00:30:08.850 "is_configured": true, 00:30:08.850 "data_offset": 0, 00:30:08.850 "data_size": 65536 00:30:08.850 }, 00:30:08.850 { 00:30:08.850 "name": "BaseBdev4", 00:30:08.850 "uuid": "d3c43e39-42b9-5a65-9757-584b166fb20d", 00:30:08.850 "is_configured": true, 00:30:08.850 "data_offset": 0, 00:30:08.850 "data_size": 65536 00:30:08.850 } 00:30:08.850 ] 00:30:08.850 }' 00:30:08.850 18:57:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:08.850 18:57:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:08.850 18:57:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:08.850 18:57:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:08.850 18:57:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:08.850 [2024-07-25 18:57:09.399415] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:30:09.108 [2024-07-25 18:57:09.591818] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:09.108 [2024-07-25 18:57:09.613540] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:30:09.366 [2024-07-25 18:57:09.716483] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:09.366 [2024-07-25 18:57:09.732516] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:09.366 [2024-07-25 18:57:09.732558] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:09.366 [2024-07-25 18:57:09.732569] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:09.366 [2024-07-25 18:57:09.756463] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006150 00:30:09.366 18:57:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:09.366 18:57:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:09.366 18:57:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:09.366 18:57:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:09.366 18:57:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:09.366 18:57:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:09.366 18:57:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:09.366 18:57:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:30:09.366 18:57:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:09.366 18:57:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:09.366 18:57:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:09.366 18:57:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.624 18:57:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:09.624 "name": "raid_bdev1", 00:30:09.624 "uuid": "702a314c-d7ed-4f98-b38e-65371d78984c", 00:30:09.624 "strip_size_kb": 0, 00:30:09.624 "state": "online", 00:30:09.624 "raid_level": "raid1", 00:30:09.624 "superblock": false, 00:30:09.624 "num_base_bdevs": 4, 00:30:09.624 "num_base_bdevs_discovered": 3, 00:30:09.624 "num_base_bdevs_operational": 3, 00:30:09.624 "base_bdevs_list": [ 00:30:09.624 { 00:30:09.624 "name": null, 00:30:09.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.624 "is_configured": false, 00:30:09.624 "data_offset": 0, 00:30:09.624 "data_size": 65536 00:30:09.624 }, 00:30:09.624 { 00:30:09.624 "name": "BaseBdev2", 00:30:09.624 "uuid": "6ca65000-5c95-59a9-9068-4af26bc61bb9", 00:30:09.624 "is_configured": true, 00:30:09.624 "data_offset": 0, 00:30:09.624 "data_size": 65536 00:30:09.624 }, 00:30:09.624 { 00:30:09.624 "name": "BaseBdev3", 00:30:09.624 "uuid": "be1eaf69-748a-5519-9ae4-6be603d19aa6", 00:30:09.624 "is_configured": true, 00:30:09.624 "data_offset": 0, 00:30:09.624 "data_size": 65536 00:30:09.624 }, 00:30:09.624 { 00:30:09.625 "name": "BaseBdev4", 00:30:09.625 "uuid": "d3c43e39-42b9-5a65-9757-584b166fb20d", 00:30:09.625 "is_configured": true, 00:30:09.625 "data_offset": 0, 00:30:09.625 "data_size": 65536 00:30:09.625 } 00:30:09.625 ] 00:30:09.625 }' 00:30:09.625 18:57:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:09.625 18:57:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:10.191 18:57:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:10.191 18:57:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:10.191 18:57:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:10.191 18:57:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:10.191 18:57:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:10.191 18:57:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:10.191 18:57:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:10.450 18:57:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:10.450 "name": "raid_bdev1", 00:30:10.450 "uuid": "702a314c-d7ed-4f98-b38e-65371d78984c", 00:30:10.450 "strip_size_kb": 0, 00:30:10.450 "state": "online", 00:30:10.450 "raid_level": "raid1", 00:30:10.450 "superblock": false, 00:30:10.450 "num_base_bdevs": 4, 00:30:10.450 "num_base_bdevs_discovered": 3, 00:30:10.450 "num_base_bdevs_operational": 3, 00:30:10.450 "base_bdevs_list": [ 00:30:10.450 { 00:30:10.450 "name": null, 00:30:10.450 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:30:10.450 "is_configured": false, 00:30:10.450 "data_offset": 0, 00:30:10.450 "data_size": 65536 00:30:10.450 }, 00:30:10.450 { 00:30:10.450 "name": "BaseBdev2", 00:30:10.450 "uuid": "6ca65000-5c95-59a9-9068-4af26bc61bb9", 00:30:10.450 "is_configured": true, 00:30:10.450 "data_offset": 0, 00:30:10.450 "data_size": 65536 00:30:10.450 }, 00:30:10.450 { 00:30:10.450 "name": "BaseBdev3", 00:30:10.450 "uuid": "be1eaf69-748a-5519-9ae4-6be603d19aa6", 00:30:10.450 "is_configured": true, 00:30:10.450 "data_offset": 0, 00:30:10.450 "data_size": 65536 00:30:10.450 }, 00:30:10.450 { 00:30:10.450 "name": "BaseBdev4", 00:30:10.450 "uuid": "d3c43e39-42b9-5a65-9757-584b166fb20d", 00:30:10.450 "is_configured": true, 00:30:10.450 "data_offset": 0, 00:30:10.450 "data_size": 65536 00:30:10.450 } 00:30:10.450 ] 00:30:10.450 }' 00:30:10.450 18:57:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:10.450 18:57:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:10.450 18:57:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:10.450 18:57:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:10.450 18:57:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:10.709 [2024-07-25 18:57:11.176901] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:10.709 18:57:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:30:10.709 [2024-07-25 18:57:11.268179] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:30:10.709 [2024-07-25 18:57:11.270292] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:10.968 [2024-07-25 18:57:11.386774] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:10.968 [2024-07-25 18:57:11.517573] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:10.968 [2024-07-25 18:57:11.518005] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:11.227 [2024-07-25 18:57:11.759740] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:11.227 [2024-07-25 18:57:11.761165] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:11.487 [2024-07-25 18:57:11.983091] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:11.487 [2024-07-25 18:57:11.983423] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:11.745 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:11.745 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:11.745 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:11.745 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local 
target=spare 00:30:11.745 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:11.745 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:11.745 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:11.745 [2024-07-25 18:57:12.318067] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:11.745 [2024-07-25 18:57:12.319581] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:12.004 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:12.004 "name": "raid_bdev1", 00:30:12.004 "uuid": "702a314c-d7ed-4f98-b38e-65371d78984c", 00:30:12.004 "strip_size_kb": 0, 00:30:12.004 "state": "online", 00:30:12.004 "raid_level": "raid1", 00:30:12.004 "superblock": false, 00:30:12.004 "num_base_bdevs": 4, 00:30:12.004 "num_base_bdevs_discovered": 4, 00:30:12.004 "num_base_bdevs_operational": 4, 00:30:12.004 "process": { 00:30:12.004 "type": "rebuild", 00:30:12.004 "target": "spare", 00:30:12.004 "progress": { 00:30:12.004 "blocks": 14336, 00:30:12.004 "percent": 21 00:30:12.004 } 00:30:12.004 }, 00:30:12.004 "base_bdevs_list": [ 00:30:12.004 { 00:30:12.004 "name": "spare", 00:30:12.004 "uuid": "fcdf7998-14c3-56f6-947d-977ee7ca397d", 00:30:12.004 "is_configured": true, 00:30:12.004 "data_offset": 0, 00:30:12.004 "data_size": 65536 00:30:12.004 }, 00:30:12.004 { 00:30:12.004 "name": "BaseBdev2", 00:30:12.004 "uuid": "6ca65000-5c95-59a9-9068-4af26bc61bb9", 00:30:12.004 "is_configured": true, 00:30:12.004 "data_offset": 0, 00:30:12.004 "data_size": 65536 00:30:12.004 }, 00:30:12.004 { 00:30:12.004 "name": "BaseBdev3", 00:30:12.004 "uuid": "be1eaf69-748a-5519-9ae4-6be603d19aa6", 00:30:12.004 "is_configured": true, 00:30:12.004 "data_offset": 0, 00:30:12.004 "data_size": 65536 00:30:12.004 }, 00:30:12.004 { 00:30:12.004 "name": "BaseBdev4", 00:30:12.004 "uuid": "d3c43e39-42b9-5a65-9757-584b166fb20d", 00:30:12.004 "is_configured": true, 00:30:12.004 "data_offset": 0, 00:30:12.004 "data_size": 65536 00:30:12.004 } 00:30:12.004 ] 00:30:12.004 }' 00:30:12.004 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:12.004 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:12.004 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:12.004 [2024-07-25 18:57:12.544293] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:12.004 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:12.004 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:30:12.004 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:30:12.004 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:30:12.004 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:30:12.004 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:12.262 [2024-07-25 18:57:12.809536] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:12.521 [2024-07-25 18:57:12.925860] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006150 00:30:12.521 [2024-07-25 18:57:12.925908] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:30:12.521 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:30:12.521 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:30:12.521 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:12.521 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:12.521 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:12.521 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:12.521 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:12.521 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:12.521 18:57:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:12.521 [2024-07-25 18:57:13.052438] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:30:12.778 18:57:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:12.778 "name": "raid_bdev1", 00:30:12.779 "uuid": "702a314c-d7ed-4f98-b38e-65371d78984c", 00:30:12.779 "strip_size_kb": 0, 00:30:12.779 "state": "online", 00:30:12.779 "raid_level": "raid1", 00:30:12.779 "superblock": false, 00:30:12.779 "num_base_bdevs": 4, 00:30:12.779 "num_base_bdevs_discovered": 3, 00:30:12.779 "num_base_bdevs_operational": 3, 00:30:12.779 "process": { 00:30:12.779 "type": "rebuild", 00:30:12.779 "target": "spare", 00:30:12.779 "progress": { 00:30:12.779 "blocks": 22528, 00:30:12.779 "percent": 34 00:30:12.779 } 00:30:12.779 }, 00:30:12.779 "base_bdevs_list": [ 00:30:12.779 { 00:30:12.779 "name": "spare", 00:30:12.779 "uuid": "fcdf7998-14c3-56f6-947d-977ee7ca397d", 00:30:12.779 "is_configured": true, 00:30:12.779 "data_offset": 0, 00:30:12.779 "data_size": 65536 00:30:12.779 }, 00:30:12.779 { 00:30:12.779 "name": null, 00:30:12.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.779 "is_configured": false, 00:30:12.779 "data_offset": 0, 00:30:12.779 "data_size": 65536 00:30:12.779 }, 00:30:12.779 { 00:30:12.779 "name": "BaseBdev3", 00:30:12.779 "uuid": "be1eaf69-748a-5519-9ae4-6be603d19aa6", 00:30:12.779 "is_configured": true, 00:30:12.779 "data_offset": 0, 00:30:12.779 "data_size": 65536 00:30:12.779 }, 00:30:12.779 { 00:30:12.779 "name": "BaseBdev4", 00:30:12.779 "uuid": "d3c43e39-42b9-5a65-9757-584b166fb20d", 00:30:12.779 "is_configured": true, 00:30:12.779 "data_offset": 0, 00:30:12.779 "data_size": 65536 00:30:12.779 } 00:30:12.779 ] 00:30:12.779 }' 00:30:12.779 18:57:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:12.779 18:57:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:12.779 
18:57:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:12.779 18:57:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:12.779 18:57:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # local timeout=970 00:30:12.779 18:57:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:12.779 18:57:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:12.779 18:57:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:12.779 18:57:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:12.779 18:57:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:12.779 18:57:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:12.779 18:57:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:12.779 18:57:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:13.037 [2024-07-25 18:57:13.384808] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:30:13.037 18:57:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:13.037 "name": "raid_bdev1", 00:30:13.037 "uuid": "702a314c-d7ed-4f98-b38e-65371d78984c", 00:30:13.037 "strip_size_kb": 0, 00:30:13.037 "state": "online", 00:30:13.037 "raid_level": "raid1", 00:30:13.037 "superblock": false, 00:30:13.037 "num_base_bdevs": 4, 00:30:13.037 "num_base_bdevs_discovered": 3, 00:30:13.037 "num_base_bdevs_operational": 3, 00:30:13.037 "process": { 00:30:13.037 "type": "rebuild", 00:30:13.037 "target": "spare", 00:30:13.037 "progress": { 00:30:13.037 "blocks": 28672, 00:30:13.037 "percent": 43 00:30:13.037 } 00:30:13.037 }, 00:30:13.037 "base_bdevs_list": [ 00:30:13.037 { 00:30:13.037 "name": "spare", 00:30:13.037 "uuid": "fcdf7998-14c3-56f6-947d-977ee7ca397d", 00:30:13.037 "is_configured": true, 00:30:13.037 "data_offset": 0, 00:30:13.037 "data_size": 65536 00:30:13.037 }, 00:30:13.037 { 00:30:13.037 "name": null, 00:30:13.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:13.037 "is_configured": false, 00:30:13.037 "data_offset": 0, 00:30:13.037 "data_size": 65536 00:30:13.037 }, 00:30:13.037 { 00:30:13.037 "name": "BaseBdev3", 00:30:13.037 "uuid": "be1eaf69-748a-5519-9ae4-6be603d19aa6", 00:30:13.037 "is_configured": true, 00:30:13.037 "data_offset": 0, 00:30:13.037 "data_size": 65536 00:30:13.037 }, 00:30:13.037 { 00:30:13.037 "name": "BaseBdev4", 00:30:13.037 "uuid": "d3c43e39-42b9-5a65-9757-584b166fb20d", 00:30:13.037 "is_configured": true, 00:30:13.037 "data_offset": 0, 00:30:13.037 "data_size": 65536 00:30:13.037 } 00:30:13.037 ] 00:30:13.037 }' 00:30:13.037 18:57:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:13.037 18:57:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:13.037 18:57:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:13.037 18:57:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:13.037 18:57:13 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:13.603 [2024-07-25 18:57:14.045090] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:30:13.862 [2024-07-25 18:57:14.366788] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:30:14.121 18:57:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:14.121 18:57:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:14.121 18:57:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:14.121 18:57:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:14.121 18:57:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:14.121 18:57:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:14.121 18:57:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:14.121 18:57:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:14.380 [2024-07-25 18:57:14.703929] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:30:14.380 18:57:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:14.380 "name": "raid_bdev1", 00:30:14.380 "uuid": "702a314c-d7ed-4f98-b38e-65371d78984c", 00:30:14.380 "strip_size_kb": 0, 00:30:14.380 "state": "online", 00:30:14.380 "raid_level": "raid1", 00:30:14.380 "superblock": false, 00:30:14.380 "num_base_bdevs": 4, 00:30:14.380 "num_base_bdevs_discovered": 3, 00:30:14.380 "num_base_bdevs_operational": 3, 00:30:14.380 "process": { 00:30:14.380 "type": "rebuild", 00:30:14.380 "target": "spare", 00:30:14.380 "progress": { 00:30:14.380 "blocks": 55296, 00:30:14.380 "percent": 84 00:30:14.380 } 00:30:14.380 }, 00:30:14.380 "base_bdevs_list": [ 00:30:14.380 { 00:30:14.380 "name": "spare", 00:30:14.380 "uuid": "fcdf7998-14c3-56f6-947d-977ee7ca397d", 00:30:14.380 "is_configured": true, 00:30:14.380 "data_offset": 0, 00:30:14.380 "data_size": 65536 00:30:14.380 }, 00:30:14.380 { 00:30:14.380 "name": null, 00:30:14.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:14.380 "is_configured": false, 00:30:14.380 "data_offset": 0, 00:30:14.380 "data_size": 65536 00:30:14.380 }, 00:30:14.380 { 00:30:14.380 "name": "BaseBdev3", 00:30:14.380 "uuid": "be1eaf69-748a-5519-9ae4-6be603d19aa6", 00:30:14.380 "is_configured": true, 00:30:14.380 "data_offset": 0, 00:30:14.380 "data_size": 65536 00:30:14.380 }, 00:30:14.380 { 00:30:14.380 "name": "BaseBdev4", 00:30:14.380 "uuid": "d3c43e39-42b9-5a65-9757-584b166fb20d", 00:30:14.380 "is_configured": true, 00:30:14.380 "data_offset": 0, 00:30:14.380 "data_size": 65536 00:30:14.380 } 00:30:14.380 ] 00:30:14.380 }' 00:30:14.380 18:57:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:14.380 18:57:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:14.380 18:57:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:14.639 18:57:14 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:14.639 18:57:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:14.639 [2024-07-25 18:57:15.036909] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:30:15.207 [2024-07-25 18:57:15.480074] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:15.207 [2024-07-25 18:57:15.580128] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:15.207 [2024-07-25 18:57:15.588807] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:15.466 18:57:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:15.466 18:57:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:15.466 18:57:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:15.466 18:57:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:15.466 18:57:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:15.466 18:57:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:15.466 18:57:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:15.466 18:57:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:15.725 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:15.725 "name": "raid_bdev1", 00:30:15.725 "uuid": "702a314c-d7ed-4f98-b38e-65371d78984c", 00:30:15.725 "strip_size_kb": 0, 00:30:15.725 "state": "online", 00:30:15.725 "raid_level": "raid1", 00:30:15.725 "superblock": false, 00:30:15.725 "num_base_bdevs": 4, 00:30:15.725 "num_base_bdevs_discovered": 3, 00:30:15.725 "num_base_bdevs_operational": 3, 00:30:15.725 "base_bdevs_list": [ 00:30:15.725 { 00:30:15.725 "name": "spare", 00:30:15.725 "uuid": "fcdf7998-14c3-56f6-947d-977ee7ca397d", 00:30:15.725 "is_configured": true, 00:30:15.725 "data_offset": 0, 00:30:15.725 "data_size": 65536 00:30:15.725 }, 00:30:15.725 { 00:30:15.725 "name": null, 00:30:15.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:15.725 "is_configured": false, 00:30:15.725 "data_offset": 0, 00:30:15.725 "data_size": 65536 00:30:15.725 }, 00:30:15.725 { 00:30:15.725 "name": "BaseBdev3", 00:30:15.725 "uuid": "be1eaf69-748a-5519-9ae4-6be603d19aa6", 00:30:15.725 "is_configured": true, 00:30:15.725 "data_offset": 0, 00:30:15.725 "data_size": 65536 00:30:15.725 }, 00:30:15.725 { 00:30:15.725 "name": "BaseBdev4", 00:30:15.725 "uuid": "d3c43e39-42b9-5a65-9757-584b166fb20d", 00:30:15.725 "is_configured": true, 00:30:15.725 "data_offset": 0, 00:30:15.725 "data_size": 65536 00:30:15.725 } 00:30:15.725 ] 00:30:15.725 }' 00:30:15.725 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:15.725 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:15.725 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:15.725 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:15.725 18:57:16 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # break 00:30:15.725 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:15.725 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:15.725 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:15.725 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:15.725 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:15.725 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:15.725 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:15.984 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:15.984 "name": "raid_bdev1", 00:30:15.984 "uuid": "702a314c-d7ed-4f98-b38e-65371d78984c", 00:30:15.984 "strip_size_kb": 0, 00:30:15.984 "state": "online", 00:30:15.984 "raid_level": "raid1", 00:30:15.984 "superblock": false, 00:30:15.984 "num_base_bdevs": 4, 00:30:15.984 "num_base_bdevs_discovered": 3, 00:30:15.984 "num_base_bdevs_operational": 3, 00:30:15.984 "base_bdevs_list": [ 00:30:15.984 { 00:30:15.984 "name": "spare", 00:30:15.984 "uuid": "fcdf7998-14c3-56f6-947d-977ee7ca397d", 00:30:15.984 "is_configured": true, 00:30:15.984 "data_offset": 0, 00:30:15.984 "data_size": 65536 00:30:15.984 }, 00:30:15.984 { 00:30:15.984 "name": null, 00:30:15.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:15.984 "is_configured": false, 00:30:15.984 "data_offset": 0, 00:30:15.984 "data_size": 65536 00:30:15.984 }, 00:30:15.984 { 00:30:15.984 "name": "BaseBdev3", 00:30:15.984 "uuid": "be1eaf69-748a-5519-9ae4-6be603d19aa6", 00:30:15.984 "is_configured": true, 00:30:15.984 "data_offset": 0, 00:30:15.984 "data_size": 65536 00:30:15.984 }, 00:30:15.984 { 00:30:15.984 "name": "BaseBdev4", 00:30:15.984 "uuid": "d3c43e39-42b9-5a65-9757-584b166fb20d", 00:30:15.984 "is_configured": true, 00:30:15.984 "data_offset": 0, 00:30:15.984 "data_size": 65536 00:30:15.984 } 00:30:15.984 ] 00:30:15.984 }' 00:30:15.984 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:16.242 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:16.242 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:16.242 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:16.242 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:16.242 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:16.242 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:16.242 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:16.242 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:16.242 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:16.242 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:30:16.242 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:16.242 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:16.242 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:16.242 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:16.242 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:16.501 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:16.501 "name": "raid_bdev1", 00:30:16.501 "uuid": "702a314c-d7ed-4f98-b38e-65371d78984c", 00:30:16.501 "strip_size_kb": 0, 00:30:16.501 "state": "online", 00:30:16.501 "raid_level": "raid1", 00:30:16.501 "superblock": false, 00:30:16.501 "num_base_bdevs": 4, 00:30:16.501 "num_base_bdevs_discovered": 3, 00:30:16.501 "num_base_bdevs_operational": 3, 00:30:16.501 "base_bdevs_list": [ 00:30:16.501 { 00:30:16.501 "name": "spare", 00:30:16.501 "uuid": "fcdf7998-14c3-56f6-947d-977ee7ca397d", 00:30:16.501 "is_configured": true, 00:30:16.501 "data_offset": 0, 00:30:16.501 "data_size": 65536 00:30:16.501 }, 00:30:16.501 { 00:30:16.501 "name": null, 00:30:16.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:16.501 "is_configured": false, 00:30:16.501 "data_offset": 0, 00:30:16.501 "data_size": 65536 00:30:16.501 }, 00:30:16.501 { 00:30:16.501 "name": "BaseBdev3", 00:30:16.501 "uuid": "be1eaf69-748a-5519-9ae4-6be603d19aa6", 00:30:16.501 "is_configured": true, 00:30:16.501 "data_offset": 0, 00:30:16.501 "data_size": 65536 00:30:16.501 }, 00:30:16.501 { 00:30:16.501 "name": "BaseBdev4", 00:30:16.501 "uuid": "d3c43e39-42b9-5a65-9757-584b166fb20d", 00:30:16.501 "is_configured": true, 00:30:16.501 "data_offset": 0, 00:30:16.501 "data_size": 65536 00:30:16.501 } 00:30:16.501 ] 00:30:16.501 }' 00:30:16.501 18:57:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:16.501 18:57:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:17.069 18:57:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:17.069 [2024-07-25 18:57:17.643271] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:17.069 [2024-07-25 18:57:17.643314] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:17.328 00:30:17.328 Latency(us) 00:30:17.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.328 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:30:17.328 raid_bdev1 : 11.08 106.16 318.49 0.00 0.00 13237.47 298.42 112846.75 00:30:17.328 =================================================================================================================== 00:30:17.328 Total : 106.16 318.49 0.00 0.00 13237.47 298.42 112846.75 00:30:17.328 [2024-07-25 18:57:17.761293] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:17.328 [2024-07-25 18:57:17.761340] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:17.328 [2024-07-25 18:57:17.761440] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free 
all in destruct 00:30:17.328 [2024-07-25 18:57:17.761449] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:30:17.328 0 00:30:17.328 18:57:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:17.328 18:57:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # jq length 00:30:17.587 18:57:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:30:17.587 18:57:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:30:17.587 18:57:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:30:17.587 18:57:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:30:17.587 18:57:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:17.587 18:57:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:30:17.587 18:57:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:17.587 18:57:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:17.587 18:57:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:17.587 18:57:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:17.587 18:57:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:17.587 18:57:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:17.587 18:57:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:30:17.847 /dev/nbd0 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:17.847 1+0 records in 00:30:17.847 1+0 records out 00:30:17.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405719 s, 10.1 MB/s 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:30:17.847 18:57:18 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z '' ']' 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # continue 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev3 ']' 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:17.847 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:30:18.107 /dev/nbd1 00:30:18.107 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:18.107 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:18.107 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:30:18.107 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:30:18.107 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:18.107 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:18.107 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:30:18.107 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:30:18.107 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:18.107 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:18.107 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:18.107 1+0 records in 00:30:18.107 1+0 records out 00:30:18.107 4096 bytes (4.1 kB, 4.0 KiB) 
copied, 0.000436384 s, 9.4 MB/s 00:30:18.107 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:18.107 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:30:18.107 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:18.107 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:18.107 18:57:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:30:18.107 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:18.107 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:18.107 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:18.366 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:30:18.366 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:18.366 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:18.366 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:18.366 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:18.366 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:18.366 18:57:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:18.625 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:18.625 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:18.625 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:18.625 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:18.625 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:18.625 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:18.625 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:18.625 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:18.625 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:30:18.625 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev4 ']' 00:30:18.625 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:30:18.625 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:18.625 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:30:18.625 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:18.625 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:18.625 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:18.625 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # 
local i 00:30:18.625 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:18.625 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:18.625 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:30:18.884 /dev/nbd1 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:18.885 1+0 records in 00:30:18.885 1+0 records out 00:30:18.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244105 s, 16.8 MB/s 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:18.885 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:19.144 18:57:19 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:19.144 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:19.144 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:19.144 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:19.144 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:19.144 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:19.144 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:19.144 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:19.144 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:19.144 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:19.144 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:19.144 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:19.144 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:19.144 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:19.144 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:19.404 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:19.404 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:19.404 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:19.404 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:19.404 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:19.404 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:19.404 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:19.404 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:19.404 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:30:19.404 18:57:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@798 -- # killprocess 147713 00:30:19.404 18:57:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 147713 ']' 00:30:19.404 18:57:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 147713 00:30:19.663 18:57:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:30:19.663 18:57:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:19.664 18:57:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 147713 00:30:19.664 18:57:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:19.664 18:57:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:19.664 18:57:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 147713' 00:30:19.664 killing process 
with pid 147713 00:30:19.664 18:57:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 147713 00:30:19.664 Received shutdown signal, test time was about 13.347149 seconds 00:30:19.664 00:30:19.664 Latency(us) 00:30:19.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.664 =================================================================================================================== 00:30:19.664 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:19.664 [2024-07-25 18:57:20.009002] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:19.664 18:57:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 147713 00:30:19.923 [2024-07-25 18:57:20.475336] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:21.828 18:57:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@800 -- # return 0 00:30:21.828 00:30:21.828 real 0m20.568s 00:30:21.828 user 0m30.759s 00:30:21.828 sys 0m3.279s 00:30:21.828 ************************************ 00:30:21.828 END TEST raid_rebuild_test_io 00:30:21.828 ************************************ 00:30:21.828 18:57:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:21.828 18:57:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:21.828 18:57:22 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:30:21.828 18:57:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:30:21.828 18:57:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:21.828 18:57:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:21.828 ************************************ 00:30:21.828 START TEST raid_rebuild_test_sb_io 00:30:21.828 ************************************ 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # echo BaseBdev1 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # echo BaseBdev2 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # echo BaseBdev3 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@589 -- # (( i++ )) 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # echo BaseBdev4 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:30:21.828 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:30:21.829 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:30:21.829 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # raid_pid=148243 00:30:21.829 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 148243 /var/tmp/spdk-raid.sock 00:30:21.829 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 148243 ']' 00:30:21.829 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:21.829 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:21.829 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:21.829 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:21.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:21.829 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:21.829 18:57:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:21.829 [2024-07-25 18:57:22.173407] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:30:21.829 [2024-07-25 18:57:22.173852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148243 ] 00:30:21.829 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:30:21.829 Zero copy mechanism will not be used. 00:30:21.829 [2024-07-25 18:57:22.360724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.088 [2024-07-25 18:57:22.611697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:22.347 [2024-07-25 18:57:22.877888] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:22.606 18:57:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:22.606 18:57:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:30:22.606 18:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:22.606 18:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:22.865 BaseBdev1_malloc 00:30:22.865 18:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:23.124 [2024-07-25 18:57:23.548762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:23.124 [2024-07-25 18:57:23.549012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:23.124 [2024-07-25 18:57:23.549104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:30:23.124 [2024-07-25 18:57:23.549205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:23.124 [2024-07-25 18:57:23.551839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:23.124 [2024-07-25 18:57:23.552002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:23.124 BaseBdev1 00:30:23.124 18:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:23.124 18:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:23.382 BaseBdev2_malloc 00:30:23.382 18:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:23.382 [2024-07-25 18:57:23.959429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:23.382 [2024-07-25 18:57:23.959708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:23.382 [2024-07-25 18:57:23.959801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:30:23.382 [2024-07-25 18:57:23.959992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:23.640 [2024-07-25 18:57:23.962723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:23.640 [2024-07-25 18:57:23.962885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:23.640 BaseBdev2 00:30:23.640 18:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:23.640 18:57:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 
00:30:23.899 BaseBdev3_malloc 00:30:23.899 18:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:30:24.157 [2024-07-25 18:57:24.513050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:24.157 [2024-07-25 18:57:24.513350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:24.157 [2024-07-25 18:57:24.513487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:30:24.157 [2024-07-25 18:57:24.513598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:24.157 [2024-07-25 18:57:24.516248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:24.157 [2024-07-25 18:57:24.516420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:24.157 BaseBdev3 00:30:24.157 18:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:24.157 18:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:24.157 BaseBdev4_malloc 00:30:24.415 18:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:24.415 [2024-07-25 18:57:24.912774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:24.415 [2024-07-25 18:57:24.913025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:24.415 [2024-07-25 18:57:24.913112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:30:24.415 [2024-07-25 18:57:24.913211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:24.415 [2024-07-25 18:57:24.915799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:24.415 [2024-07-25 18:57:24.915956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:24.415 BaseBdev4 00:30:24.415 18:57:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:30:24.673 spare_malloc 00:30:24.673 18:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:24.931 spare_delay 00:30:24.931 18:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:24.931 [2024-07-25 18:57:25.495492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:24.931 [2024-07-25 18:57:25.495761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:24.931 [2024-07-25 18:57:25.495848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:30:24.931 [2024-07-25 18:57:25.495958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:24.931 [2024-07-25 18:57:25.498619] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:24.931 [2024-07-25 18:57:25.498784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:24.931 spare 00:30:25.239 18:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:30:25.239 [2024-07-25 18:57:25.679630] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:25.239 [2024-07-25 18:57:25.681890] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:25.239 [2024-07-25 18:57:25.682067] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:25.239 [2024-07-25 18:57:25.682150] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:25.239 [2024-07-25 18:57:25.682477] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:30:25.239 [2024-07-25 18:57:25.682572] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:25.239 [2024-07-25 18:57:25.682709] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:30:25.239 [2024-07-25 18:57:25.683104] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:30:25.239 [2024-07-25 18:57:25.683211] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:30:25.239 [2024-07-25 18:57:25.683403] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:25.239 18:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:25.239 18:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:25.239 18:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:25.239 18:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:25.239 18:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:25.239 18:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:25.239 18:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:25.239 18:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:25.239 18:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:25.239 18:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:25.239 18:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:25.239 18:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:25.497 18:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:25.497 "name": "raid_bdev1", 00:30:25.497 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:25.497 "strip_size_kb": 0, 00:30:25.497 "state": "online", 00:30:25.497 "raid_level": "raid1", 00:30:25.497 "superblock": true, 00:30:25.497 "num_base_bdevs": 4, 
00:30:25.497 "num_base_bdevs_discovered": 4, 00:30:25.497 "num_base_bdevs_operational": 4, 00:30:25.497 "base_bdevs_list": [ 00:30:25.497 { 00:30:25.497 "name": "BaseBdev1", 00:30:25.497 "uuid": "f9b0be4a-2d08-5092-9e97-b25280d124e2", 00:30:25.497 "is_configured": true, 00:30:25.497 "data_offset": 2048, 00:30:25.497 "data_size": 63488 00:30:25.497 }, 00:30:25.497 { 00:30:25.497 "name": "BaseBdev2", 00:30:25.497 "uuid": "2af74197-6355-5981-b07c-9124ebea42c2", 00:30:25.497 "is_configured": true, 00:30:25.497 "data_offset": 2048, 00:30:25.497 "data_size": 63488 00:30:25.497 }, 00:30:25.497 { 00:30:25.497 "name": "BaseBdev3", 00:30:25.497 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:25.497 "is_configured": true, 00:30:25.497 "data_offset": 2048, 00:30:25.497 "data_size": 63488 00:30:25.497 }, 00:30:25.497 { 00:30:25.497 "name": "BaseBdev4", 00:30:25.497 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:25.497 "is_configured": true, 00:30:25.497 "data_offset": 2048, 00:30:25.497 "data_size": 63488 00:30:25.497 } 00:30:25.497 ] 00:30:25.497 }' 00:30:25.497 18:57:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:25.497 18:57:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:26.063 18:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:26.063 18:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:30:26.063 [2024-07-25 18:57:26.624032] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:26.321 18:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:30:26.321 18:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:26.321 18:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.321 18:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:30:26.321 18:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:30:26.321 18:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@638 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:30:26.321 18:57:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:26.321 [2024-07-25 18:57:26.897094] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:30:26.321 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:26.321 Zero copy mechanism will not be used. 00:30:26.321 Running I/O for 60 seconds... 
00:30:26.579 [2024-07-25 18:57:26.990463] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:26.579 [2024-07-25 18:57:27.001479] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006150 00:30:26.579 18:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:26.579 18:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:26.579 18:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:26.579 18:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:26.579 18:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:26.579 18:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:26.579 18:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:26.579 18:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:26.579 18:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:26.579 18:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:26.579 18:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.579 18:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:26.837 18:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:26.837 "name": "raid_bdev1", 00:30:26.837 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:26.837 "strip_size_kb": 0, 00:30:26.837 "state": "online", 00:30:26.837 "raid_level": "raid1", 00:30:26.837 "superblock": true, 00:30:26.837 "num_base_bdevs": 4, 00:30:26.837 "num_base_bdevs_discovered": 3, 00:30:26.837 "num_base_bdevs_operational": 3, 00:30:26.837 "base_bdevs_list": [ 00:30:26.837 { 00:30:26.837 "name": null, 00:30:26.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.837 "is_configured": false, 00:30:26.837 "data_offset": 2048, 00:30:26.837 "data_size": 63488 00:30:26.837 }, 00:30:26.837 { 00:30:26.837 "name": "BaseBdev2", 00:30:26.837 "uuid": "2af74197-6355-5981-b07c-9124ebea42c2", 00:30:26.837 "is_configured": true, 00:30:26.837 "data_offset": 2048, 00:30:26.837 "data_size": 63488 00:30:26.837 }, 00:30:26.837 { 00:30:26.837 "name": "BaseBdev3", 00:30:26.837 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:26.837 "is_configured": true, 00:30:26.837 "data_offset": 2048, 00:30:26.837 "data_size": 63488 00:30:26.837 }, 00:30:26.837 { 00:30:26.837 "name": "BaseBdev4", 00:30:26.837 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:26.837 "is_configured": true, 00:30:26.837 "data_offset": 2048, 00:30:26.837 "data_size": 63488 00:30:26.837 } 00:30:26.837 ] 00:30:26.837 }' 00:30:26.837 18:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:26.837 18:57:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:27.402 18:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:27.402 [2024-07-25 
18:57:27.919259] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:27.402 18:57:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:27.402 [2024-07-25 18:57:27.970204] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:30:27.402 [2024-07-25 18:57:27.972520] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:27.660 [2024-07-25 18:57:28.083335] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:27.660 [2024-07-25 18:57:28.084242] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:27.660 [2024-07-25 18:57:28.213237] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:27.660 [2024-07-25 18:57:28.214481] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:28.227 [2024-07-25 18:57:28.550492] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:28.227 [2024-07-25 18:57:28.776433] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:28.227 [2024-07-25 18:57:28.777033] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:28.485 18:57:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:28.485 18:57:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:28.485 18:57:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:28.485 18:57:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:28.485 18:57:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:28.485 18:57:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:28.485 18:57:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:28.743 18:57:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:28.743 "name": "raid_bdev1", 00:30:28.743 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:28.743 "strip_size_kb": 0, 00:30:28.743 "state": "online", 00:30:28.743 "raid_level": "raid1", 00:30:28.743 "superblock": true, 00:30:28.743 "num_base_bdevs": 4, 00:30:28.743 "num_base_bdevs_discovered": 4, 00:30:28.743 "num_base_bdevs_operational": 4, 00:30:28.743 "process": { 00:30:28.743 "type": "rebuild", 00:30:28.743 "target": "spare", 00:30:28.743 "progress": { 00:30:28.743 "blocks": 14336, 00:30:28.743 "percent": 22 00:30:28.743 } 00:30:28.743 }, 00:30:28.743 "base_bdevs_list": [ 00:30:28.743 { 00:30:28.743 "name": "spare", 00:30:28.743 "uuid": "25223a37-8a8a-5aed-99fa-8fafb56a80f5", 00:30:28.743 "is_configured": true, 00:30:28.743 "data_offset": 2048, 00:30:28.743 "data_size": 63488 00:30:28.743 }, 00:30:28.743 { 00:30:28.743 "name": "BaseBdev2", 00:30:28.743 "uuid": "2af74197-6355-5981-b07c-9124ebea42c2", 00:30:28.743 "is_configured": true, 00:30:28.743 
"data_offset": 2048, 00:30:28.743 "data_size": 63488 00:30:28.743 }, 00:30:28.743 { 00:30:28.743 "name": "BaseBdev3", 00:30:28.743 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:28.743 "is_configured": true, 00:30:28.743 "data_offset": 2048, 00:30:28.743 "data_size": 63488 00:30:28.743 }, 00:30:28.743 { 00:30:28.743 "name": "BaseBdev4", 00:30:28.743 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:28.743 "is_configured": true, 00:30:28.743 "data_offset": 2048, 00:30:28.743 "data_size": 63488 00:30:28.743 } 00:30:28.743 ] 00:30:28.743 }' 00:30:28.743 18:57:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:28.743 18:57:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:28.743 18:57:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:28.743 [2024-07-25 18:57:29.244883] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:28.743 18:57:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:28.743 18:57:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:29.002 [2024-07-25 18:57:29.477093] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:30:29.002 [2024-07-25 18:57:29.478917] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:30:29.002 [2024-07-25 18:57:29.486277] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:29.260 [2024-07-25 18:57:29.596191] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:30:29.260 [2024-07-25 18:57:29.716469] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:29.260 [2024-07-25 18:57:29.728621] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:29.260 [2024-07-25 18:57:29.728806] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:29.260 [2024-07-25 18:57:29.728848] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:29.260 [2024-07-25 18:57:29.759443] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006150 00:30:29.260 18:57:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:29.260 18:57:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:29.260 18:57:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:29.261 18:57:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:29.261 18:57:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:29.261 18:57:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:29.261 18:57:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:29.261 18:57:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:30:29.261 18:57:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:29.261 18:57:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:29.261 18:57:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:29.261 18:57:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:29.519 18:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:29.519 "name": "raid_bdev1", 00:30:29.519 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:29.519 "strip_size_kb": 0, 00:30:29.519 "state": "online", 00:30:29.519 "raid_level": "raid1", 00:30:29.519 "superblock": true, 00:30:29.519 "num_base_bdevs": 4, 00:30:29.519 "num_base_bdevs_discovered": 3, 00:30:29.519 "num_base_bdevs_operational": 3, 00:30:29.519 "base_bdevs_list": [ 00:30:29.519 { 00:30:29.519 "name": null, 00:30:29.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:29.519 "is_configured": false, 00:30:29.519 "data_offset": 2048, 00:30:29.519 "data_size": 63488 00:30:29.519 }, 00:30:29.519 { 00:30:29.519 "name": "BaseBdev2", 00:30:29.519 "uuid": "2af74197-6355-5981-b07c-9124ebea42c2", 00:30:29.519 "is_configured": true, 00:30:29.519 "data_offset": 2048, 00:30:29.519 "data_size": 63488 00:30:29.519 }, 00:30:29.519 { 00:30:29.519 "name": "BaseBdev3", 00:30:29.519 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:29.519 "is_configured": true, 00:30:29.519 "data_offset": 2048, 00:30:29.519 "data_size": 63488 00:30:29.519 }, 00:30:29.519 { 00:30:29.519 "name": "BaseBdev4", 00:30:29.519 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:29.519 "is_configured": true, 00:30:29.519 "data_offset": 2048, 00:30:29.519 "data_size": 63488 00:30:29.519 } 00:30:29.519 ] 00:30:29.519 }' 00:30:29.519 18:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:29.519 18:57:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:30.086 18:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:30.086 18:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:30.086 18:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:30.086 18:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:30.086 18:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:30.086 18:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:30.086 18:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:30.344 18:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:30.344 "name": "raid_bdev1", 00:30:30.344 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:30.344 "strip_size_kb": 0, 00:30:30.344 "state": "online", 00:30:30.344 "raid_level": "raid1", 00:30:30.344 "superblock": true, 00:30:30.344 "num_base_bdevs": 4, 00:30:30.344 "num_base_bdevs_discovered": 3, 00:30:30.344 "num_base_bdevs_operational": 3, 00:30:30.344 "base_bdevs_list": [ 00:30:30.344 { 
00:30:30.344 "name": null, 00:30:30.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:30.344 "is_configured": false, 00:30:30.344 "data_offset": 2048, 00:30:30.344 "data_size": 63488 00:30:30.344 }, 00:30:30.344 { 00:30:30.344 "name": "BaseBdev2", 00:30:30.344 "uuid": "2af74197-6355-5981-b07c-9124ebea42c2", 00:30:30.344 "is_configured": true, 00:30:30.344 "data_offset": 2048, 00:30:30.344 "data_size": 63488 00:30:30.344 }, 00:30:30.344 { 00:30:30.344 "name": "BaseBdev3", 00:30:30.344 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:30.344 "is_configured": true, 00:30:30.344 "data_offset": 2048, 00:30:30.344 "data_size": 63488 00:30:30.344 }, 00:30:30.344 { 00:30:30.344 "name": "BaseBdev4", 00:30:30.344 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:30.344 "is_configured": true, 00:30:30.344 "data_offset": 2048, 00:30:30.344 "data_size": 63488 00:30:30.344 } 00:30:30.344 ] 00:30:30.344 }' 00:30:30.344 18:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:30.344 18:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:30.344 18:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:30.344 18:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:30.344 18:57:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:30.602 [2024-07-25 18:57:31.053814] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:30.602 18:57:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:30:30.602 [2024-07-25 18:57:31.110414] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:30:30.602 [2024-07-25 18:57:31.112977] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:30.860 [2024-07-25 18:57:31.249181] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:30.860 [2024-07-25 18:57:31.250866] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:31.118 [2024-07-25 18:57:31.487144] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:31.118 [2024-07-25 18:57:31.488145] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:31.375 [2024-07-25 18:57:31.825092] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:31.633 [2024-07-25 18:57:31.965024] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:31.633 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:31.633 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:31.633 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:31.634 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:31.634 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:31.634 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:31.634 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:31.891 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:31.891 "name": "raid_bdev1", 00:30:31.891 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:31.891 "strip_size_kb": 0, 00:30:31.891 "state": "online", 00:30:31.891 "raid_level": "raid1", 00:30:31.891 "superblock": true, 00:30:31.891 "num_base_bdevs": 4, 00:30:31.891 "num_base_bdevs_discovered": 4, 00:30:31.891 "num_base_bdevs_operational": 4, 00:30:31.891 "process": { 00:30:31.891 "type": "rebuild", 00:30:31.891 "target": "spare", 00:30:31.891 "progress": { 00:30:31.891 "blocks": 14336, 00:30:31.891 "percent": 22 00:30:31.891 } 00:30:31.891 }, 00:30:31.891 "base_bdevs_list": [ 00:30:31.891 { 00:30:31.891 "name": "spare", 00:30:31.891 "uuid": "25223a37-8a8a-5aed-99fa-8fafb56a80f5", 00:30:31.891 "is_configured": true, 00:30:31.891 "data_offset": 2048, 00:30:31.891 "data_size": 63488 00:30:31.891 }, 00:30:31.891 { 00:30:31.891 "name": "BaseBdev2", 00:30:31.891 "uuid": "2af74197-6355-5981-b07c-9124ebea42c2", 00:30:31.891 "is_configured": true, 00:30:31.891 "data_offset": 2048, 00:30:31.891 "data_size": 63488 00:30:31.891 }, 00:30:31.891 { 00:30:31.891 "name": "BaseBdev3", 00:30:31.891 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:31.891 "is_configured": true, 00:30:31.891 "data_offset": 2048, 00:30:31.891 "data_size": 63488 00:30:31.891 }, 00:30:31.891 { 00:30:31.891 "name": "BaseBdev4", 00:30:31.891 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:31.891 "is_configured": true, 00:30:31.891 "data_offset": 2048, 00:30:31.891 "data_size": 63488 00:30:31.891 } 00:30:31.891 ] 00:30:31.891 }' 00:30:31.892 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:31.892 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:31.892 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:31.892 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:31.892 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:30:31.892 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:30:31.892 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:30:31.892 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:30:31.892 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:30:31.892 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:30:31.892 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:32.149 [2024-07-25 18:57:32.567738] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:32.149 [2024-07-25 18:57:32.643858] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 
offset_end: 24576 00:30:32.407 [2024-07-25 18:57:32.847040] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006150 00:30:32.407 [2024-07-25 18:57:32.847230] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:30:32.407 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:30:32.407 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:30:32.407 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:32.408 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:32.408 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:32.408 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:32.408 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:32.408 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:32.408 18:57:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:32.666 18:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:32.666 "name": "raid_bdev1", 00:30:32.666 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:32.666 "strip_size_kb": 0, 00:30:32.666 "state": "online", 00:30:32.666 "raid_level": "raid1", 00:30:32.666 "superblock": true, 00:30:32.666 "num_base_bdevs": 4, 00:30:32.666 "num_base_bdevs_discovered": 3, 00:30:32.666 "num_base_bdevs_operational": 3, 00:30:32.666 "process": { 00:30:32.666 "type": "rebuild", 00:30:32.666 "target": "spare", 00:30:32.666 "progress": { 00:30:32.666 "blocks": 26624, 00:30:32.666 "percent": 41 00:30:32.666 } 00:30:32.666 }, 00:30:32.666 "base_bdevs_list": [ 00:30:32.666 { 00:30:32.666 "name": "spare", 00:30:32.666 "uuid": "25223a37-8a8a-5aed-99fa-8fafb56a80f5", 00:30:32.666 "is_configured": true, 00:30:32.666 "data_offset": 2048, 00:30:32.666 "data_size": 63488 00:30:32.666 }, 00:30:32.666 { 00:30:32.666 "name": null, 00:30:32.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:32.666 "is_configured": false, 00:30:32.666 "data_offset": 2048, 00:30:32.666 "data_size": 63488 00:30:32.666 }, 00:30:32.666 { 00:30:32.666 "name": "BaseBdev3", 00:30:32.666 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:32.666 "is_configured": true, 00:30:32.666 "data_offset": 2048, 00:30:32.666 "data_size": 63488 00:30:32.666 }, 00:30:32.666 { 00:30:32.666 "name": "BaseBdev4", 00:30:32.666 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:32.666 "is_configured": true, 00:30:32.666 "data_offset": 2048, 00:30:32.666 "data_size": 63488 00:30:32.666 } 00:30:32.666 ] 00:30:32.666 }' 00:30:32.666 18:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:32.666 18:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:32.666 18:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:32.925 [2024-07-25 18:57:33.250099] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 
00:30:32.925 18:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:32.925 18:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # local timeout=990 00:30:32.925 18:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:32.925 18:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:32.925 18:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:32.925 18:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:32.925 18:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:32.925 18:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:32.925 18:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:32.925 18:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:32.925 18:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:32.925 "name": "raid_bdev1", 00:30:32.925 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:32.925 "strip_size_kb": 0, 00:30:32.925 "state": "online", 00:30:32.925 "raid_level": "raid1", 00:30:32.925 "superblock": true, 00:30:32.925 "num_base_bdevs": 4, 00:30:32.925 "num_base_bdevs_discovered": 3, 00:30:32.925 "num_base_bdevs_operational": 3, 00:30:32.925 "process": { 00:30:32.925 "type": "rebuild", 00:30:32.925 "target": "spare", 00:30:32.925 "progress": { 00:30:32.925 "blocks": 30720, 00:30:32.925 "percent": 48 00:30:32.925 } 00:30:32.925 }, 00:30:32.925 "base_bdevs_list": [ 00:30:32.925 { 00:30:32.925 "name": "spare", 00:30:32.925 "uuid": "25223a37-8a8a-5aed-99fa-8fafb56a80f5", 00:30:32.925 "is_configured": true, 00:30:32.925 "data_offset": 2048, 00:30:32.925 "data_size": 63488 00:30:32.925 }, 00:30:32.925 { 00:30:32.925 "name": null, 00:30:32.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:32.925 "is_configured": false, 00:30:32.925 "data_offset": 2048, 00:30:32.925 "data_size": 63488 00:30:32.925 }, 00:30:32.925 { 00:30:32.925 "name": "BaseBdev3", 00:30:32.925 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:32.925 "is_configured": true, 00:30:32.925 "data_offset": 2048, 00:30:32.925 "data_size": 63488 00:30:32.925 }, 00:30:32.925 { 00:30:32.925 "name": "BaseBdev4", 00:30:32.925 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:32.925 "is_configured": true, 00:30:32.925 "data_offset": 2048, 00:30:32.925 "data_size": 63488 00:30:32.925 } 00:30:32.925 ] 00:30:32.925 }' 00:30:32.925 18:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:33.182 18:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:33.182 18:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:33.182 18:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:33.182 18:57:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:33.747 [2024-07-25 18:57:34.280287] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 
offset_end: 49152 00:30:34.004 18:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:34.004 18:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:34.004 18:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:34.004 18:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:34.004 18:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:34.004 18:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:34.004 18:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:34.004 18:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:34.261 [2024-07-25 18:57:34.615571] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:30:34.261 18:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:34.261 "name": "raid_bdev1", 00:30:34.261 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:34.261 "strip_size_kb": 0, 00:30:34.261 "state": "online", 00:30:34.261 "raid_level": "raid1", 00:30:34.261 "superblock": true, 00:30:34.261 "num_base_bdevs": 4, 00:30:34.261 "num_base_bdevs_discovered": 3, 00:30:34.261 "num_base_bdevs_operational": 3, 00:30:34.261 "process": { 00:30:34.261 "type": "rebuild", 00:30:34.261 "target": "spare", 00:30:34.261 "progress": { 00:30:34.261 "blocks": 53248, 00:30:34.261 "percent": 83 00:30:34.261 } 00:30:34.261 }, 00:30:34.261 "base_bdevs_list": [ 00:30:34.261 { 00:30:34.261 "name": "spare", 00:30:34.261 "uuid": "25223a37-8a8a-5aed-99fa-8fafb56a80f5", 00:30:34.261 "is_configured": true, 00:30:34.261 "data_offset": 2048, 00:30:34.261 "data_size": 63488 00:30:34.261 }, 00:30:34.261 { 00:30:34.261 "name": null, 00:30:34.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:34.261 "is_configured": false, 00:30:34.261 "data_offset": 2048, 00:30:34.261 "data_size": 63488 00:30:34.261 }, 00:30:34.261 { 00:30:34.261 "name": "BaseBdev3", 00:30:34.261 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:34.261 "is_configured": true, 00:30:34.261 "data_offset": 2048, 00:30:34.261 "data_size": 63488 00:30:34.261 }, 00:30:34.261 { 00:30:34.261 "name": "BaseBdev4", 00:30:34.261 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:34.261 "is_configured": true, 00:30:34.261 "data_offset": 2048, 00:30:34.261 "data_size": 63488 00:30:34.261 } 00:30:34.261 ] 00:30:34.261 }' 00:30:34.261 18:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:34.519 18:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:34.519 18:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:34.519 18:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:34.519 18:57:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:34.778 [2024-07-25 18:57:35.276099] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:35.036 [2024-07-25 18:57:35.376087] 
bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:35.036 [2024-07-25 18:57:35.379698] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:35.603 18:57:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:35.603 18:57:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:35.603 18:57:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:35.603 18:57:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:35.603 18:57:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:35.603 18:57:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:35.603 18:57:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:35.603 18:57:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:35.603 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:35.603 "name": "raid_bdev1", 00:30:35.603 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:35.603 "strip_size_kb": 0, 00:30:35.603 "state": "online", 00:30:35.603 "raid_level": "raid1", 00:30:35.603 "superblock": true, 00:30:35.603 "num_base_bdevs": 4, 00:30:35.603 "num_base_bdevs_discovered": 3, 00:30:35.603 "num_base_bdevs_operational": 3, 00:30:35.603 "base_bdevs_list": [ 00:30:35.603 { 00:30:35.603 "name": "spare", 00:30:35.603 "uuid": "25223a37-8a8a-5aed-99fa-8fafb56a80f5", 00:30:35.603 "is_configured": true, 00:30:35.603 "data_offset": 2048, 00:30:35.603 "data_size": 63488 00:30:35.603 }, 00:30:35.603 { 00:30:35.603 "name": null, 00:30:35.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:35.603 "is_configured": false, 00:30:35.603 "data_offset": 2048, 00:30:35.603 "data_size": 63488 00:30:35.603 }, 00:30:35.603 { 00:30:35.603 "name": "BaseBdev3", 00:30:35.603 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:35.603 "is_configured": true, 00:30:35.603 "data_offset": 2048, 00:30:35.603 "data_size": 63488 00:30:35.603 }, 00:30:35.603 { 00:30:35.603 "name": "BaseBdev4", 00:30:35.603 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:35.603 "is_configured": true, 00:30:35.603 "data_offset": 2048, 00:30:35.603 "data_size": 63488 00:30:35.603 } 00:30:35.603 ] 00:30:35.603 }' 00:30:35.603 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:35.603 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:35.603 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:35.862 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:35.862 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # break 00:30:35.862 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:35.862 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:35.862 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 
00:30:35.862 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:35.862 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:35.862 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:35.862 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:36.121 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:36.121 "name": "raid_bdev1", 00:30:36.121 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:36.121 "strip_size_kb": 0, 00:30:36.121 "state": "online", 00:30:36.121 "raid_level": "raid1", 00:30:36.121 "superblock": true, 00:30:36.121 "num_base_bdevs": 4, 00:30:36.121 "num_base_bdevs_discovered": 3, 00:30:36.121 "num_base_bdevs_operational": 3, 00:30:36.121 "base_bdevs_list": [ 00:30:36.121 { 00:30:36.121 "name": "spare", 00:30:36.121 "uuid": "25223a37-8a8a-5aed-99fa-8fafb56a80f5", 00:30:36.121 "is_configured": true, 00:30:36.121 "data_offset": 2048, 00:30:36.121 "data_size": 63488 00:30:36.121 }, 00:30:36.121 { 00:30:36.121 "name": null, 00:30:36.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.121 "is_configured": false, 00:30:36.121 "data_offset": 2048, 00:30:36.121 "data_size": 63488 00:30:36.121 }, 00:30:36.121 { 00:30:36.121 "name": "BaseBdev3", 00:30:36.121 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:36.121 "is_configured": true, 00:30:36.121 "data_offset": 2048, 00:30:36.121 "data_size": 63488 00:30:36.121 }, 00:30:36.121 { 00:30:36.121 "name": "BaseBdev4", 00:30:36.121 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:36.121 "is_configured": true, 00:30:36.121 "data_offset": 2048, 00:30:36.121 "data_size": 63488 00:30:36.121 } 00:30:36.121 ] 00:30:36.121 }' 00:30:36.121 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:36.121 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:36.121 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:36.121 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:36.121 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:36.121 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:36.121 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:36.121 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:36.121 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:36.121 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:36.121 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:36.121 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:36.121 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:36.121 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:36.121 
18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:36.121 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:36.380 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:36.380 "name": "raid_bdev1", 00:30:36.380 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:36.380 "strip_size_kb": 0, 00:30:36.380 "state": "online", 00:30:36.380 "raid_level": "raid1", 00:30:36.380 "superblock": true, 00:30:36.380 "num_base_bdevs": 4, 00:30:36.380 "num_base_bdevs_discovered": 3, 00:30:36.380 "num_base_bdevs_operational": 3, 00:30:36.380 "base_bdevs_list": [ 00:30:36.380 { 00:30:36.380 "name": "spare", 00:30:36.380 "uuid": "25223a37-8a8a-5aed-99fa-8fafb56a80f5", 00:30:36.380 "is_configured": true, 00:30:36.380 "data_offset": 2048, 00:30:36.380 "data_size": 63488 00:30:36.380 }, 00:30:36.380 { 00:30:36.380 "name": null, 00:30:36.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.380 "is_configured": false, 00:30:36.380 "data_offset": 2048, 00:30:36.380 "data_size": 63488 00:30:36.380 }, 00:30:36.380 { 00:30:36.380 "name": "BaseBdev3", 00:30:36.380 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:36.380 "is_configured": true, 00:30:36.380 "data_offset": 2048, 00:30:36.380 "data_size": 63488 00:30:36.380 }, 00:30:36.380 { 00:30:36.380 "name": "BaseBdev4", 00:30:36.380 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:36.380 "is_configured": true, 00:30:36.380 "data_offset": 2048, 00:30:36.380 "data_size": 63488 00:30:36.380 } 00:30:36.380 ] 00:30:36.380 }' 00:30:36.380 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:36.380 18:57:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:36.948 18:57:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:37.206 [2024-07-25 18:57:37.713239] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:37.206 [2024-07-25 18:57:37.713489] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:37.206 00:30:37.206 Latency(us) 00:30:37.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:37.206 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:30:37.206 raid_bdev1 : 10.87 98.39 295.16 0.00 0.00 14656.14 298.42 115343.36 00:30:37.206 =================================================================================================================== 00:30:37.207 Total : 98.39 295.16 0.00 0.00 14656.14 298.42 115343.36 00:30:37.465 [2024-07-25 18:57:37.788080] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:37.465 [2024-07-25 18:57:37.788245] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:37.465 [2024-07-25 18:57:37.788383] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:37.465 0 00:30:37.465 [2024-07-25 18:57:37.788597] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:30:37.465 18:57:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # jq length 00:30:37.465 18:57:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:30:37.724 /dev/nbd0 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:37.724 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:37.983 1+0 records in 00:30:37.983 1+0 records out 00:30:37.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510773 s, 8.0 MB/s 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # 
'[' 4096 '!=' 0 ']' 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z '' ']' 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # continue 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev3 ']' 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:30:37.983 /dev/nbd1 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:37.983 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:37.983 1+0 records in 00:30:37.983 1+0 records out 00:30:37.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052425 s, 7.8 MB/s 00:30:37.984 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c 
%s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:37.984 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:30:37.984 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:37.984 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:37.984 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:30:37.984 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:37.984 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:37.984 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:38.243 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:30:38.243 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:38.243 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:38.243 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:38.243 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:30:38.243 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:38.243 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:38.502 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:38.502 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:38.502 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:38.502 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:38.502 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:38.502 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:38.502 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:30:38.502 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:38.502 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:30:38.502 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev4 ']' 00:30:38.502 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:30:38.502 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:38.502 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:30:38.502 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:38.502 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:38.502 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:38.502 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:30:38.502 18:57:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:38.502 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:38.502 18:57:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:30:38.761 /dev/nbd1 00:30:38.761 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:38.761 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:38.761 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:30:38.761 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:30:38.761 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:38.761 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:38.761 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:30:38.761 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:30:38.761 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:38.761 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:38.761 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:38.761 1+0 records in 00:30:38.761 1+0 records out 00:30:38.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594404 s, 6.9 MB/s 00:30:38.761 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:38.761 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:30:38.761 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:38.761 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:38.761 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:30:38.761 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:38.761 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:38.761 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:39.021 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:30:39.021 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:39.021 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:39.021 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:39.021 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:30:39.021 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:39.021 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:39.280 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:39.280 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:39.280 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:39.280 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:39.280 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:39.280 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:39.280 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:30:39.280 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:39.280 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:39.280 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:39.280 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:39.280 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:39.280 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:30:39.280 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:39.280 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:39.539 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:39.539 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:39.539 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:39.539 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:39.539 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:39.539 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:39.539 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:30:39.539 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:39.539 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:30:39.539 18:57:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:39.798 18:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:39.798 [2024-07-25 18:57:40.294664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:39.798 [2024-07-25 18:57:40.294888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:39.798 [2024-07-25 18:57:40.294979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:30:39.798 [2024-07-25 18:57:40.295087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:39.798 [2024-07-25 
18:57:40.297770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:39.798 [2024-07-25 18:57:40.297950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:39.798 [2024-07-25 18:57:40.298149] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:39.798 [2024-07-25 18:57:40.298350] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:39.798 [2024-07-25 18:57:40.298537] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:39.798 [2024-07-25 18:57:40.298739] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:39.798 spare 00:30:39.798 18:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:39.798 18:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:39.798 18:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:39.798 18:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:39.798 18:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:39.798 18:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:39.798 18:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:39.798 18:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:39.798 18:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:39.798 18:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:39.798 18:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:39.798 18:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:40.056 [2024-07-25 18:57:40.398946] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012d80 00:30:40.056 [2024-07-25 18:57:40.399079] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:40.056 [2024-07-25 18:57:40.399228] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000373d0 00:30:40.056 [2024-07-25 18:57:40.399663] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012d80 00:30:40.056 [2024-07-25 18:57:40.399761] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012d80 00:30:40.056 [2024-07-25 18:57:40.399964] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:40.056 18:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:40.056 "name": "raid_bdev1", 00:30:40.056 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:40.056 "strip_size_kb": 0, 00:30:40.056 "state": "online", 00:30:40.056 "raid_level": "raid1", 00:30:40.056 "superblock": true, 00:30:40.056 "num_base_bdevs": 4, 00:30:40.056 "num_base_bdevs_discovered": 3, 00:30:40.056 "num_base_bdevs_operational": 3, 00:30:40.056 "base_bdevs_list": [ 00:30:40.056 { 00:30:40.056 "name": "spare", 00:30:40.056 "uuid": "25223a37-8a8a-5aed-99fa-8fafb56a80f5", 
00:30:40.056 "is_configured": true, 00:30:40.056 "data_offset": 2048, 00:30:40.056 "data_size": 63488 00:30:40.056 }, 00:30:40.056 { 00:30:40.056 "name": null, 00:30:40.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.056 "is_configured": false, 00:30:40.056 "data_offset": 2048, 00:30:40.056 "data_size": 63488 00:30:40.056 }, 00:30:40.056 { 00:30:40.056 "name": "BaseBdev3", 00:30:40.056 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:40.056 "is_configured": true, 00:30:40.056 "data_offset": 2048, 00:30:40.056 "data_size": 63488 00:30:40.056 }, 00:30:40.056 { 00:30:40.056 "name": "BaseBdev4", 00:30:40.056 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:40.056 "is_configured": true, 00:30:40.056 "data_offset": 2048, 00:30:40.056 "data_size": 63488 00:30:40.056 } 00:30:40.056 ] 00:30:40.056 }' 00:30:40.056 18:57:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:40.056 18:57:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:40.660 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:40.660 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:40.660 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:40.660 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:40.660 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:40.660 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:40.660 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:40.942 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:40.942 "name": "raid_bdev1", 00:30:40.942 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:40.942 "strip_size_kb": 0, 00:30:40.942 "state": "online", 00:30:40.942 "raid_level": "raid1", 00:30:40.942 "superblock": true, 00:30:40.942 "num_base_bdevs": 4, 00:30:40.942 "num_base_bdevs_discovered": 3, 00:30:40.942 "num_base_bdevs_operational": 3, 00:30:40.942 "base_bdevs_list": [ 00:30:40.942 { 00:30:40.942 "name": "spare", 00:30:40.942 "uuid": "25223a37-8a8a-5aed-99fa-8fafb56a80f5", 00:30:40.942 "is_configured": true, 00:30:40.942 "data_offset": 2048, 00:30:40.942 "data_size": 63488 00:30:40.942 }, 00:30:40.942 { 00:30:40.942 "name": null, 00:30:40.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.942 "is_configured": false, 00:30:40.942 "data_offset": 2048, 00:30:40.942 "data_size": 63488 00:30:40.942 }, 00:30:40.942 { 00:30:40.942 "name": "BaseBdev3", 00:30:40.942 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:40.942 "is_configured": true, 00:30:40.942 "data_offset": 2048, 00:30:40.942 "data_size": 63488 00:30:40.942 }, 00:30:40.942 { 00:30:40.942 "name": "BaseBdev4", 00:30:40.942 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:40.942 "is_configured": true, 00:30:40.942 "data_offset": 2048, 00:30:40.942 "data_size": 63488 00:30:40.942 } 00:30:40.942 ] 00:30:40.942 }' 00:30:40.942 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:40.942 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == 
\n\o\n\e ]] 00:30:40.942 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:40.942 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:40.942 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:40.942 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:41.199 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:30:41.200 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:41.200 [2024-07-25 18:57:41.707802] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:41.200 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:41.200 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:41.200 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:41.200 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:41.200 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:41.200 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:41.200 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:41.200 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:41.200 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:41.200 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:41.200 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:41.200 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:41.458 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:41.458 "name": "raid_bdev1", 00:30:41.458 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:41.458 "strip_size_kb": 0, 00:30:41.458 "state": "online", 00:30:41.458 "raid_level": "raid1", 00:30:41.458 "superblock": true, 00:30:41.458 "num_base_bdevs": 4, 00:30:41.458 "num_base_bdevs_discovered": 2, 00:30:41.458 "num_base_bdevs_operational": 2, 00:30:41.458 "base_bdevs_list": [ 00:30:41.458 { 00:30:41.458 "name": null, 00:30:41.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:41.458 "is_configured": false, 00:30:41.458 "data_offset": 2048, 00:30:41.458 "data_size": 63488 00:30:41.458 }, 00:30:41.458 { 00:30:41.458 "name": null, 00:30:41.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:41.458 "is_configured": false, 00:30:41.458 "data_offset": 2048, 00:30:41.458 "data_size": 63488 00:30:41.458 }, 00:30:41.458 { 00:30:41.458 "name": "BaseBdev3", 00:30:41.458 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:41.458 "is_configured": true, 00:30:41.458 "data_offset": 2048, 00:30:41.458 "data_size": 63488 00:30:41.458 
}, 00:30:41.458 { 00:30:41.458 "name": "BaseBdev4", 00:30:41.458 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:41.458 "is_configured": true, 00:30:41.458 "data_offset": 2048, 00:30:41.458 "data_size": 63488 00:30:41.458 } 00:30:41.458 ] 00:30:41.458 }' 00:30:41.458 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:41.458 18:57:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:42.023 18:57:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:42.281 [2024-07-25 18:57:42.640076] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:42.281 [2024-07-25 18:57:42.640460] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:42.281 [2024-07-25 18:57:42.640578] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:30:42.281 [2024-07-25 18:57:42.640677] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:42.281 [2024-07-25 18:57:42.657036] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037570 00:30:42.281 [2024-07-25 18:57:42.659452] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:42.281 18:57:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # sleep 1 00:30:43.213 18:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:43.213 18:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:43.213 18:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:43.213 18:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:43.213 18:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:43.213 18:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:43.213 18:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:43.471 18:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:43.471 "name": "raid_bdev1", 00:30:43.471 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:43.471 "strip_size_kb": 0, 00:30:43.471 "state": "online", 00:30:43.471 "raid_level": "raid1", 00:30:43.471 "superblock": true, 00:30:43.471 "num_base_bdevs": 4, 00:30:43.471 "num_base_bdevs_discovered": 3, 00:30:43.471 "num_base_bdevs_operational": 3, 00:30:43.471 "process": { 00:30:43.471 "type": "rebuild", 00:30:43.471 "target": "spare", 00:30:43.471 "progress": { 00:30:43.471 "blocks": 24576, 00:30:43.471 "percent": 38 00:30:43.471 } 00:30:43.471 }, 00:30:43.471 "base_bdevs_list": [ 00:30:43.471 { 00:30:43.471 "name": "spare", 00:30:43.471 "uuid": "25223a37-8a8a-5aed-99fa-8fafb56a80f5", 00:30:43.471 "is_configured": true, 00:30:43.471 "data_offset": 2048, 00:30:43.471 "data_size": 63488 00:30:43.471 }, 00:30:43.471 { 00:30:43.471 "name": null, 00:30:43.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:43.471 "is_configured": false, 00:30:43.471 
"data_offset": 2048, 00:30:43.471 "data_size": 63488 00:30:43.471 }, 00:30:43.471 { 00:30:43.471 "name": "BaseBdev3", 00:30:43.471 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:43.471 "is_configured": true, 00:30:43.471 "data_offset": 2048, 00:30:43.471 "data_size": 63488 00:30:43.471 }, 00:30:43.471 { 00:30:43.471 "name": "BaseBdev4", 00:30:43.471 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:43.471 "is_configured": true, 00:30:43.471 "data_offset": 2048, 00:30:43.471 "data_size": 63488 00:30:43.471 } 00:30:43.471 ] 00:30:43.471 }' 00:30:43.471 18:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:43.471 18:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:43.471 18:57:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:43.471 18:57:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:43.471 18:57:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:43.729 [2024-07-25 18:57:44.242112] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:43.729 [2024-07-25 18:57:44.272264] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:43.729 [2024-07-25 18:57:44.272473] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:43.729 [2024-07-25 18:57:44.272523] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:43.729 [2024-07-25 18:57:44.272599] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:43.987 18:57:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:43.987 18:57:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:43.987 18:57:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:43.987 18:57:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:43.987 18:57:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:43.987 18:57:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:43.987 18:57:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:43.987 18:57:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:43.987 18:57:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:43.987 18:57:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:43.987 18:57:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:43.987 18:57:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:44.245 18:57:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:44.245 "name": "raid_bdev1", 00:30:44.245 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:44.245 "strip_size_kb": 0, 00:30:44.245 "state": "online", 
00:30:44.245 "raid_level": "raid1", 00:30:44.245 "superblock": true, 00:30:44.245 "num_base_bdevs": 4, 00:30:44.245 "num_base_bdevs_discovered": 2, 00:30:44.245 "num_base_bdevs_operational": 2, 00:30:44.245 "base_bdevs_list": [ 00:30:44.245 { 00:30:44.245 "name": null, 00:30:44.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:44.245 "is_configured": false, 00:30:44.245 "data_offset": 2048, 00:30:44.245 "data_size": 63488 00:30:44.245 }, 00:30:44.245 { 00:30:44.245 "name": null, 00:30:44.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:44.245 "is_configured": false, 00:30:44.245 "data_offset": 2048, 00:30:44.245 "data_size": 63488 00:30:44.245 }, 00:30:44.245 { 00:30:44.245 "name": "BaseBdev3", 00:30:44.245 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:44.245 "is_configured": true, 00:30:44.245 "data_offset": 2048, 00:30:44.245 "data_size": 63488 00:30:44.245 }, 00:30:44.245 { 00:30:44.245 "name": "BaseBdev4", 00:30:44.245 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:44.245 "is_configured": true, 00:30:44.245 "data_offset": 2048, 00:30:44.245 "data_size": 63488 00:30:44.245 } 00:30:44.245 ] 00:30:44.245 }' 00:30:44.245 18:57:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:44.245 18:57:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:44.811 18:57:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:44.811 [2024-07-25 18:57:45.322266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:44.811 [2024-07-25 18:57:45.322663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:44.811 [2024-07-25 18:57:45.322747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:30:44.811 [2024-07-25 18:57:45.322864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:44.811 [2024-07-25 18:57:45.323495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:44.811 [2024-07-25 18:57:45.323636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:44.811 [2024-07-25 18:57:45.323858] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:44.811 [2024-07-25 18:57:45.323947] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:44.811 [2024-07-25 18:57:45.324033] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
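The spare-failure cycle traced above (delete the passthru bdev, confirm raid_bdev1 keeps running on two base bdevs, re-create the delayed passthru, and wait for a rebuild to target it again) reduces to a handful of RPC calls. The sketch below is a simplified reading of that flow, not an excerpt of bdev_raid.sh; the socket path, bdev names, and jq filters are the ones visible in the trace, while the explicit polling loop is an illustrative assumption.

# Fail the spare by removing its passthru bdev; raid_bdev1 stays online on the two remaining base bdevs.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare

# Re-create the delayed passthru; its on-disk superblock is examined and the bdev is re-added to raid_bdev1.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare

# Poll until the raid bdev reports a rebuild process targeting "spare".
while true; do
    target=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .process.target // "none"')
    [ "$target" = spare ] && break
    sleep 1
done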
00:30:44.811 [2024-07-25 18:57:45.324106] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:44.811 [2024-07-25 18:57:45.340504] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000378b0 00:30:44.811 spare 00:30:44.811 [2024-07-25 18:57:45.342887] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:44.811 18:57:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # sleep 1 00:30:46.187 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:46.187 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:46.187 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:46.187 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:46.187 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:46.187 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:46.187 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:46.187 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:46.187 "name": "raid_bdev1", 00:30:46.187 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:46.187 "strip_size_kb": 0, 00:30:46.187 "state": "online", 00:30:46.187 "raid_level": "raid1", 00:30:46.187 "superblock": true, 00:30:46.187 "num_base_bdevs": 4, 00:30:46.187 "num_base_bdevs_discovered": 3, 00:30:46.187 "num_base_bdevs_operational": 3, 00:30:46.187 "process": { 00:30:46.187 "type": "rebuild", 00:30:46.187 "target": "spare", 00:30:46.187 "progress": { 00:30:46.187 "blocks": 24576, 00:30:46.187 "percent": 38 00:30:46.187 } 00:30:46.187 }, 00:30:46.187 "base_bdevs_list": [ 00:30:46.187 { 00:30:46.187 "name": "spare", 00:30:46.187 "uuid": "25223a37-8a8a-5aed-99fa-8fafb56a80f5", 00:30:46.187 "is_configured": true, 00:30:46.187 "data_offset": 2048, 00:30:46.187 "data_size": 63488 00:30:46.187 }, 00:30:46.187 { 00:30:46.187 "name": null, 00:30:46.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:46.187 "is_configured": false, 00:30:46.187 "data_offset": 2048, 00:30:46.187 "data_size": 63488 00:30:46.187 }, 00:30:46.187 { 00:30:46.187 "name": "BaseBdev3", 00:30:46.187 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:46.187 "is_configured": true, 00:30:46.187 "data_offset": 2048, 00:30:46.187 "data_size": 63488 00:30:46.187 }, 00:30:46.187 { 00:30:46.187 "name": "BaseBdev4", 00:30:46.187 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:46.187 "is_configured": true, 00:30:46.187 "data_offset": 2048, 00:30:46.187 "data_size": 63488 00:30:46.187 } 00:30:46.187 ] 00:30:46.187 }' 00:30:46.187 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:46.188 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:46.188 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:46.188 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:46.188 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:46.446 [2024-07-25 18:57:46.901399] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:46.446 [2024-07-25 18:57:46.955506] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:46.446 [2024-07-25 18:57:46.955771] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:46.446 [2024-07-25 18:57:46.955822] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:46.446 [2024-07-25 18:57:46.955893] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:46.446 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:46.446 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:46.446 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:46.446 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:46.446 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:46.446 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:46.446 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:46.446 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:46.446 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:46.446 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:46.446 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:46.447 18:57:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:46.705 18:57:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:46.705 "name": "raid_bdev1", 00:30:46.705 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:46.705 "strip_size_kb": 0, 00:30:46.705 "state": "online", 00:30:46.705 "raid_level": "raid1", 00:30:46.705 "superblock": true, 00:30:46.706 "num_base_bdevs": 4, 00:30:46.706 "num_base_bdevs_discovered": 2, 00:30:46.706 "num_base_bdevs_operational": 2, 00:30:46.706 "base_bdevs_list": [ 00:30:46.706 { 00:30:46.706 "name": null, 00:30:46.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:46.706 "is_configured": false, 00:30:46.706 "data_offset": 2048, 00:30:46.706 "data_size": 63488 00:30:46.706 }, 00:30:46.706 { 00:30:46.706 "name": null, 00:30:46.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:46.706 "is_configured": false, 00:30:46.706 "data_offset": 2048, 00:30:46.706 "data_size": 63488 00:30:46.706 }, 00:30:46.706 { 00:30:46.706 "name": "BaseBdev3", 00:30:46.706 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:46.706 "is_configured": true, 00:30:46.706 "data_offset": 2048, 00:30:46.706 "data_size": 63488 00:30:46.706 }, 00:30:46.706 { 00:30:46.706 "name": "BaseBdev4", 00:30:46.706 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:46.706 "is_configured": true, 00:30:46.706 "data_offset": 2048, 00:30:46.706 
"data_size": 63488 00:30:46.706 } 00:30:46.706 ] 00:30:46.706 }' 00:30:46.706 18:57:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:46.706 18:57:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:47.273 18:57:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:47.273 18:57:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:47.273 18:57:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:47.273 18:57:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:47.273 18:57:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:47.273 18:57:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:47.273 18:57:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:47.532 18:57:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:47.532 "name": "raid_bdev1", 00:30:47.532 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:47.532 "strip_size_kb": 0, 00:30:47.532 "state": "online", 00:30:47.532 "raid_level": "raid1", 00:30:47.532 "superblock": true, 00:30:47.532 "num_base_bdevs": 4, 00:30:47.532 "num_base_bdevs_discovered": 2, 00:30:47.532 "num_base_bdevs_operational": 2, 00:30:47.532 "base_bdevs_list": [ 00:30:47.532 { 00:30:47.532 "name": null, 00:30:47.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:47.532 "is_configured": false, 00:30:47.532 "data_offset": 2048, 00:30:47.532 "data_size": 63488 00:30:47.532 }, 00:30:47.532 { 00:30:47.532 "name": null, 00:30:47.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:47.532 "is_configured": false, 00:30:47.532 "data_offset": 2048, 00:30:47.532 "data_size": 63488 00:30:47.532 }, 00:30:47.532 { 00:30:47.532 "name": "BaseBdev3", 00:30:47.532 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:47.532 "is_configured": true, 00:30:47.532 "data_offset": 2048, 00:30:47.532 "data_size": 63488 00:30:47.532 }, 00:30:47.532 { 00:30:47.532 "name": "BaseBdev4", 00:30:47.532 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:47.532 "is_configured": true, 00:30:47.532 "data_offset": 2048, 00:30:47.532 "data_size": 63488 00:30:47.532 } 00:30:47.532 ] 00:30:47.532 }' 00:30:47.532 18:57:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:47.532 18:57:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:47.532 18:57:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:47.532 18:57:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:47.532 18:57:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:30:47.791 18:57:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:48.049 [2024-07-25 18:57:48.512675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 
00:30:48.049 [2024-07-25 18:57:48.512971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:48.049 [2024-07-25 18:57:48.513050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:30:48.049 [2024-07-25 18:57:48.513156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:48.049 [2024-07-25 18:57:48.513711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:48.049 [2024-07-25 18:57:48.513868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:48.049 [2024-07-25 18:57:48.514100] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:48.049 [2024-07-25 18:57:48.514191] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:48.049 [2024-07-25 18:57:48.514303] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:48.049 BaseBdev1 00:30:48.049 18:57:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@789 -- # sleep 1 00:30:48.984 18:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:48.985 18:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:48.985 18:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:48.985 18:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:48.985 18:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:48.985 18:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:48.985 18:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:48.985 18:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:48.985 18:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:48.985 18:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:48.985 18:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:48.985 18:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:49.243 18:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:49.243 "name": "raid_bdev1", 00:30:49.243 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:49.243 "strip_size_kb": 0, 00:30:49.243 "state": "online", 00:30:49.243 "raid_level": "raid1", 00:30:49.243 "superblock": true, 00:30:49.243 "num_base_bdevs": 4, 00:30:49.243 "num_base_bdevs_discovered": 2, 00:30:49.243 "num_base_bdevs_operational": 2, 00:30:49.243 "base_bdevs_list": [ 00:30:49.243 { 00:30:49.243 "name": null, 00:30:49.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:49.243 "is_configured": false, 00:30:49.243 "data_offset": 2048, 00:30:49.243 "data_size": 63488 00:30:49.243 }, 00:30:49.243 { 00:30:49.243 "name": null, 00:30:49.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:49.243 "is_configured": false, 00:30:49.243 "data_offset": 2048, 00:30:49.243 "data_size": 63488 
00:30:49.243 }, 00:30:49.243 { 00:30:49.243 "name": "BaseBdev3", 00:30:49.243 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:49.243 "is_configured": true, 00:30:49.243 "data_offset": 2048, 00:30:49.243 "data_size": 63488 00:30:49.243 }, 00:30:49.243 { 00:30:49.243 "name": "BaseBdev4", 00:30:49.243 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:49.243 "is_configured": true, 00:30:49.243 "data_offset": 2048, 00:30:49.243 "data_size": 63488 00:30:49.243 } 00:30:49.243 ] 00:30:49.243 }' 00:30:49.243 18:57:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:49.243 18:57:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:49.810 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:49.810 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:49.810 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:49.810 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:49.810 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:49.810 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:49.810 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:50.067 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:50.067 "name": "raid_bdev1", 00:30:50.067 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:50.067 "strip_size_kb": 0, 00:30:50.067 "state": "online", 00:30:50.067 "raid_level": "raid1", 00:30:50.067 "superblock": true, 00:30:50.067 "num_base_bdevs": 4, 00:30:50.067 "num_base_bdevs_discovered": 2, 00:30:50.067 "num_base_bdevs_operational": 2, 00:30:50.067 "base_bdevs_list": [ 00:30:50.067 { 00:30:50.067 "name": null, 00:30:50.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:50.067 "is_configured": false, 00:30:50.067 "data_offset": 2048, 00:30:50.067 "data_size": 63488 00:30:50.067 }, 00:30:50.067 { 00:30:50.067 "name": null, 00:30:50.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:50.067 "is_configured": false, 00:30:50.067 "data_offset": 2048, 00:30:50.067 "data_size": 63488 00:30:50.067 }, 00:30:50.067 { 00:30:50.067 "name": "BaseBdev3", 00:30:50.067 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:50.067 "is_configured": true, 00:30:50.067 "data_offset": 2048, 00:30:50.067 "data_size": 63488 00:30:50.067 }, 00:30:50.067 { 00:30:50.067 "name": "BaseBdev4", 00:30:50.067 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:50.067 "is_configured": true, 00:30:50.067 "data_offset": 2048, 00:30:50.067 "data_size": 63488 00:30:50.067 } 00:30:50.067 ] 00:30:50.067 }' 00:30:50.067 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:50.067 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:50.067 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:50.067 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:50.067 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@792 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:50.067 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:30:50.067 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:50.067 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:50.067 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:50.067 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:50.067 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:50.067 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:50.067 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:50.067 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:50.067 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:30:50.067 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:50.326 [2024-07-25 18:57:50.845363] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:50.326 [2024-07-25 18:57:50.845732] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:50.326 [2024-07-25 18:57:50.845857] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:50.326 request: 00:30:50.326 { 00:30:50.326 "base_bdev": "BaseBdev1", 00:30:50.326 "raid_bdev": "raid_bdev1", 00:30:50.326 "method": "bdev_raid_add_base_bdev", 00:30:50.326 "req_id": 1 00:30:50.326 } 00:30:50.326 Got JSON-RPC error response 00:30:50.326 response: 00:30:50.326 { 00:30:50.326 "code": -22, 00:30:50.326 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:50.326 } 00:30:50.326 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:30:50.326 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:50.326 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:50.326 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:50.326 18:57:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@793 -- # sleep 1 00:30:51.701 18:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:51.701 18:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:51.701 18:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:51.701 18:57:51 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:51.701 18:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:51.701 18:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:51.701 18:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:51.701 18:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:51.701 18:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:51.701 18:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:51.701 18:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:51.701 18:57:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:51.701 18:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:51.701 "name": "raid_bdev1", 00:30:51.701 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:51.701 "strip_size_kb": 0, 00:30:51.701 "state": "online", 00:30:51.701 "raid_level": "raid1", 00:30:51.701 "superblock": true, 00:30:51.701 "num_base_bdevs": 4, 00:30:51.701 "num_base_bdevs_discovered": 2, 00:30:51.701 "num_base_bdevs_operational": 2, 00:30:51.701 "base_bdevs_list": [ 00:30:51.701 { 00:30:51.701 "name": null, 00:30:51.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:51.701 "is_configured": false, 00:30:51.701 "data_offset": 2048, 00:30:51.701 "data_size": 63488 00:30:51.701 }, 00:30:51.701 { 00:30:51.701 "name": null, 00:30:51.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:51.701 "is_configured": false, 00:30:51.701 "data_offset": 2048, 00:30:51.701 "data_size": 63488 00:30:51.701 }, 00:30:51.701 { 00:30:51.701 "name": "BaseBdev3", 00:30:51.701 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:51.701 "is_configured": true, 00:30:51.702 "data_offset": 2048, 00:30:51.702 "data_size": 63488 00:30:51.702 }, 00:30:51.702 { 00:30:51.702 "name": "BaseBdev4", 00:30:51.702 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:51.702 "is_configured": true, 00:30:51.702 "data_offset": 2048, 00:30:51.702 "data_size": 63488 00:30:51.702 } 00:30:51.702 ] 00:30:51.702 }' 00:30:51.702 18:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:51.702 18:57:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:52.268 18:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:52.268 18:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:52.268 18:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:52.268 18:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:52.268 18:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:52.268 18:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:52.268 18:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:52.526 18:57:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:52.526 "name": "raid_bdev1", 00:30:52.526 "uuid": "ceec9031-5ef2-4b50-8754-3a1172aa91f1", 00:30:52.526 "strip_size_kb": 0, 00:30:52.526 "state": "online", 00:30:52.526 "raid_level": "raid1", 00:30:52.526 "superblock": true, 00:30:52.526 "num_base_bdevs": 4, 00:30:52.526 "num_base_bdevs_discovered": 2, 00:30:52.526 "num_base_bdevs_operational": 2, 00:30:52.526 "base_bdevs_list": [ 00:30:52.526 { 00:30:52.526 "name": null, 00:30:52.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:52.526 "is_configured": false, 00:30:52.526 "data_offset": 2048, 00:30:52.526 "data_size": 63488 00:30:52.526 }, 00:30:52.526 { 00:30:52.526 "name": null, 00:30:52.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:52.526 "is_configured": false, 00:30:52.526 "data_offset": 2048, 00:30:52.526 "data_size": 63488 00:30:52.526 }, 00:30:52.526 { 00:30:52.526 "name": "BaseBdev3", 00:30:52.526 "uuid": "74f362ad-2a11-5ae0-be32-91fbd45b06d5", 00:30:52.526 "is_configured": true, 00:30:52.526 "data_offset": 2048, 00:30:52.526 "data_size": 63488 00:30:52.526 }, 00:30:52.526 { 00:30:52.526 "name": "BaseBdev4", 00:30:52.526 "uuid": "c905bdde-16e0-5e8f-bcf0-a70317a1729b", 00:30:52.526 "is_configured": true, 00:30:52.526 "data_offset": 2048, 00:30:52.526 "data_size": 63488 00:30:52.526 } 00:30:52.526 ] 00:30:52.526 }' 00:30:52.526 18:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:52.526 18:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:52.526 18:57:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:52.526 18:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:52.526 18:57:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@798 -- # killprocess 148243 00:30:52.526 18:57:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 148243 ']' 00:30:52.526 18:57:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 148243 00:30:52.526 18:57:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:30:52.526 18:57:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:52.526 18:57:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 148243 00:30:52.526 killing process with pid 148243 00:30:52.526 Received shutdown signal, test time was about 26.139426 seconds 00:30:52.526 00:30:52.526 Latency(us) 00:30:52.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.526 =================================================================================================================== 00:30:52.526 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:52.526 18:57:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:52.526 18:57:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:52.526 18:57:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 148243' 00:30:52.526 18:57:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 148243 00:30:52.526 18:57:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 148243 00:30:52.526 
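Before the target is torn down with killprocess above, the last functional check in the run is the deliberate failure to re-add the stale BaseBdev1: its superblock carries sequence number 1 while raid_bdev1 is already at 6, so the RPC is expected to be rejected. A reduced sketch of that expected-failure pattern, reusing the NOT helper from autotest_common.sh and the same socket and bdev names as the trace:

# BaseBdev1's superblock is older than raid_bdev1's, so this add must fail with JSON-RPC error -22 (Invalid argument).
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_add_base_bdev raid_bdev1 BaseBdev1

NOT succeeds only when the wrapped command fails, so the error response printed in the trace counts as a pass for this step.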
[2024-07-25 18:57:53.039700] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:52.526 [2024-07-25 18:57:53.039843] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:52.526 [2024-07-25 18:57:53.039940] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:52.526 [2024-07-25 18:57:53.039990] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state offline 00:30:53.091 [2024-07-25 18:57:53.505249] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:54.465 18:57:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@800 -- # return 0 00:30:54.465 00:30:54.465 real 0m32.944s 00:30:54.465 user 0m50.076s 00:30:54.465 sys 0m4.673s 00:30:54.465 18:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:54.465 ************************************ 00:30:54.465 END TEST raid_rebuild_test_sb_io 00:30:54.465 ************************************ 00:30:54.465 18:57:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:54.724 18:57:55 bdev_raid -- bdev/bdev_raid.sh@964 -- # '[' y == y ']' 00:30:54.724 18:57:55 bdev_raid -- bdev/bdev_raid.sh@965 -- # for n in {3..4} 00:30:54.724 18:57:55 bdev_raid -- bdev/bdev_raid.sh@966 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:30:54.724 18:57:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:30:54.724 18:57:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:54.724 18:57:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:54.724 ************************************ 00:30:54.724 START TEST raid5f_state_function_test 00:30:54.724 ************************************ 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=149156 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 149156' 00:30:54.724 Process raid pid: 149156 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 149156 /var/tmp/spdk-raid.sock 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 149156 ']' 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:54.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:54.724 18:57:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:54.724 [2024-07-25 18:57:55.172210] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
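The entries above show raid_state_function_test standing up its own SPDK target before any raid RPCs are sent: a bare bdev_svc app is launched with a dedicated RPC socket and the bdev_raid debug log flag, and the script blocks until pid 149156 is listening on /var/tmp/spdk-raid.sock. A hedged sketch of that startup outside the harness; the polling loop is an illustrative stand-in for the waitforlisten helper, not its actual code:

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  # Stand-in for waitforlisten: poll until the RPC socket answers a trivial call.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  echo "Process raid pid: $raid_pid"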
00:30:54.725 [2024-07-25 18:57:55.172581] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:54.983 [2024-07-25 18:57:55.332929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.983 [2024-07-25 18:57:55.530736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.241 [2024-07-25 18:57:55.725637] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:55.552 18:57:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:55.552 18:57:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:30:55.552 18:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:55.829 [2024-07-25 18:57:56.255638] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:55.829 [2024-07-25 18:57:56.255930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:55.829 [2024-07-25 18:57:56.256038] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:55.829 [2024-07-25 18:57:56.256102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:55.829 [2024-07-25 18:57:56.256181] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:55.829 [2024-07-25 18:57:56.256227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:55.829 18:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:55.829 18:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:55.829 18:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:55.829 18:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:55.829 18:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:55.829 18:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:55.829 18:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:55.829 18:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:55.829 18:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:55.829 18:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:55.829 18:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:55.829 18:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:56.088 18:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:56.088 "name": "Existed_Raid", 00:30:56.088 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:30:56.088 "strip_size_kb": 64, 00:30:56.088 "state": "configuring", 00:30:56.088 "raid_level": "raid5f", 00:30:56.088 "superblock": false, 00:30:56.088 "num_base_bdevs": 3, 00:30:56.088 "num_base_bdevs_discovered": 0, 00:30:56.088 "num_base_bdevs_operational": 3, 00:30:56.088 "base_bdevs_list": [ 00:30:56.088 { 00:30:56.088 "name": "BaseBdev1", 00:30:56.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:56.088 "is_configured": false, 00:30:56.088 "data_offset": 0, 00:30:56.088 "data_size": 0 00:30:56.088 }, 00:30:56.088 { 00:30:56.088 "name": "BaseBdev2", 00:30:56.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:56.088 "is_configured": false, 00:30:56.088 "data_offset": 0, 00:30:56.088 "data_size": 0 00:30:56.088 }, 00:30:56.088 { 00:30:56.088 "name": "BaseBdev3", 00:30:56.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:56.088 "is_configured": false, 00:30:56.088 "data_offset": 0, 00:30:56.088 "data_size": 0 00:30:56.088 } 00:30:56.088 ] 00:30:56.088 }' 00:30:56.088 18:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:56.088 18:57:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.654 18:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:56.912 [2024-07-25 18:57:57.303730] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:56.912 [2024-07-25 18:57:57.303884] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:30:56.912 18:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:57.170 [2024-07-25 18:57:57.563805] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:57.170 [2024-07-25 18:57:57.564032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:57.170 [2024-07-25 18:57:57.564130] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:57.170 [2024-07-25 18:57:57.564183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:57.170 [2024-07-25 18:57:57.564253] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:57.170 [2024-07-25 18:57:57.564306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:57.170 18:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:57.429 [2024-07-25 18:57:57.778301] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:57.429 BaseBdev1 00:30:57.429 18:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:30:57.429 18:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:30:57.429 18:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:57.429 18:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:30:57.429 
18:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:57.429 18:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:57.429 18:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:57.688 18:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:57.688 [ 00:30:57.688 { 00:30:57.688 "name": "BaseBdev1", 00:30:57.688 "aliases": [ 00:30:57.688 "475240c0-97bf-47df-8709-89d3c74291ff" 00:30:57.688 ], 00:30:57.688 "product_name": "Malloc disk", 00:30:57.688 "block_size": 512, 00:30:57.688 "num_blocks": 65536, 00:30:57.688 "uuid": "475240c0-97bf-47df-8709-89d3c74291ff", 00:30:57.688 "assigned_rate_limits": { 00:30:57.688 "rw_ios_per_sec": 0, 00:30:57.688 "rw_mbytes_per_sec": 0, 00:30:57.688 "r_mbytes_per_sec": 0, 00:30:57.688 "w_mbytes_per_sec": 0 00:30:57.688 }, 00:30:57.688 "claimed": true, 00:30:57.688 "claim_type": "exclusive_write", 00:30:57.688 "zoned": false, 00:30:57.688 "supported_io_types": { 00:30:57.688 "read": true, 00:30:57.688 "write": true, 00:30:57.688 "unmap": true, 00:30:57.688 "flush": true, 00:30:57.688 "reset": true, 00:30:57.688 "nvme_admin": false, 00:30:57.688 "nvme_io": false, 00:30:57.688 "nvme_io_md": false, 00:30:57.688 "write_zeroes": true, 00:30:57.688 "zcopy": true, 00:30:57.688 "get_zone_info": false, 00:30:57.688 "zone_management": false, 00:30:57.688 "zone_append": false, 00:30:57.688 "compare": false, 00:30:57.688 "compare_and_write": false, 00:30:57.688 "abort": true, 00:30:57.688 "seek_hole": false, 00:30:57.688 "seek_data": false, 00:30:57.688 "copy": true, 00:30:57.688 "nvme_iov_md": false 00:30:57.688 }, 00:30:57.688 "memory_domains": [ 00:30:57.688 { 00:30:57.688 "dma_device_id": "system", 00:30:57.688 "dma_device_type": 1 00:30:57.688 }, 00:30:57.688 { 00:30:57.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:57.688 "dma_device_type": 2 00:30:57.688 } 00:30:57.688 ], 00:30:57.688 "driver_specific": {} 00:30:57.688 } 00:30:57.688 ] 00:30:57.688 18:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:30:57.688 18:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:57.688 18:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:57.688 18:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:57.688 18:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:57.688 18:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:57.688 18:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:57.688 18:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:57.688 18:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:57.688 18:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:57.688 18:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:30:57.688 18:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:57.688 18:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:58.255 18:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:58.255 "name": "Existed_Raid", 00:30:58.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:58.255 "strip_size_kb": 64, 00:30:58.255 "state": "configuring", 00:30:58.255 "raid_level": "raid5f", 00:30:58.255 "superblock": false, 00:30:58.255 "num_base_bdevs": 3, 00:30:58.255 "num_base_bdevs_discovered": 1, 00:30:58.255 "num_base_bdevs_operational": 3, 00:30:58.255 "base_bdevs_list": [ 00:30:58.255 { 00:30:58.255 "name": "BaseBdev1", 00:30:58.255 "uuid": "475240c0-97bf-47df-8709-89d3c74291ff", 00:30:58.255 "is_configured": true, 00:30:58.255 "data_offset": 0, 00:30:58.255 "data_size": 65536 00:30:58.255 }, 00:30:58.255 { 00:30:58.255 "name": "BaseBdev2", 00:30:58.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:58.255 "is_configured": false, 00:30:58.255 "data_offset": 0, 00:30:58.255 "data_size": 0 00:30:58.255 }, 00:30:58.255 { 00:30:58.255 "name": "BaseBdev3", 00:30:58.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:58.255 "is_configured": false, 00:30:58.255 "data_offset": 0, 00:30:58.255 "data_size": 0 00:30:58.255 } 00:30:58.255 ] 00:30:58.255 }' 00:30:58.255 18:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:58.255 18:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.822 18:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:58.822 [2024-07-25 18:57:59.254602] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:58.822 [2024-07-25 18:57:59.254758] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:30:58.822 18:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:59.080 [2024-07-25 18:57:59.522708] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:59.080 [2024-07-25 18:57:59.525083] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:59.080 [2024-07-25 18:57:59.525269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:59.080 [2024-07-25 18:57:59.525367] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:59.080 [2024-07-25 18:57:59.525443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:59.080 18:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:30:59.080 18:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:59.080 18:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:59.080 18:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:30:59.080 18:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:59.080 18:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:59.080 18:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:59.080 18:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:59.080 18:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:59.080 18:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:59.080 18:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:59.080 18:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:59.080 18:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:59.080 18:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:59.346 18:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:59.347 "name": "Existed_Raid", 00:30:59.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:59.347 "strip_size_kb": 64, 00:30:59.347 "state": "configuring", 00:30:59.347 "raid_level": "raid5f", 00:30:59.347 "superblock": false, 00:30:59.347 "num_base_bdevs": 3, 00:30:59.347 "num_base_bdevs_discovered": 1, 00:30:59.347 "num_base_bdevs_operational": 3, 00:30:59.347 "base_bdevs_list": [ 00:30:59.347 { 00:30:59.347 "name": "BaseBdev1", 00:30:59.347 "uuid": "475240c0-97bf-47df-8709-89d3c74291ff", 00:30:59.347 "is_configured": true, 00:30:59.347 "data_offset": 0, 00:30:59.347 "data_size": 65536 00:30:59.347 }, 00:30:59.347 { 00:30:59.347 "name": "BaseBdev2", 00:30:59.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:59.347 "is_configured": false, 00:30:59.347 "data_offset": 0, 00:30:59.347 "data_size": 0 00:30:59.347 }, 00:30:59.347 { 00:30:59.347 "name": "BaseBdev3", 00:30:59.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:59.347 "is_configured": false, 00:30:59.347 "data_offset": 0, 00:30:59.347 "data_size": 0 00:30:59.347 } 00:30:59.347 ] 00:30:59.347 }' 00:30:59.347 18:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:59.347 18:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.914 18:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:31:00.172 [2024-07-25 18:58:00.591801] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:00.172 BaseBdev2 00:31:00.172 18:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:31:00.172 18:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:31:00.172 18:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:00.172 18:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:00.172 18:58:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:00.172 18:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:00.172 18:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:00.430 18:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:00.431 [ 00:31:00.431 { 00:31:00.431 "name": "BaseBdev2", 00:31:00.431 "aliases": [ 00:31:00.431 "e3dbce24-90cf-486f-a662-7f397c17c859" 00:31:00.431 ], 00:31:00.431 "product_name": "Malloc disk", 00:31:00.431 "block_size": 512, 00:31:00.431 "num_blocks": 65536, 00:31:00.431 "uuid": "e3dbce24-90cf-486f-a662-7f397c17c859", 00:31:00.431 "assigned_rate_limits": { 00:31:00.431 "rw_ios_per_sec": 0, 00:31:00.431 "rw_mbytes_per_sec": 0, 00:31:00.431 "r_mbytes_per_sec": 0, 00:31:00.431 "w_mbytes_per_sec": 0 00:31:00.431 }, 00:31:00.431 "claimed": true, 00:31:00.431 "claim_type": "exclusive_write", 00:31:00.431 "zoned": false, 00:31:00.431 "supported_io_types": { 00:31:00.431 "read": true, 00:31:00.431 "write": true, 00:31:00.431 "unmap": true, 00:31:00.431 "flush": true, 00:31:00.431 "reset": true, 00:31:00.431 "nvme_admin": false, 00:31:00.431 "nvme_io": false, 00:31:00.431 "nvme_io_md": false, 00:31:00.431 "write_zeroes": true, 00:31:00.431 "zcopy": true, 00:31:00.431 "get_zone_info": false, 00:31:00.431 "zone_management": false, 00:31:00.431 "zone_append": false, 00:31:00.431 "compare": false, 00:31:00.431 "compare_and_write": false, 00:31:00.431 "abort": true, 00:31:00.431 "seek_hole": false, 00:31:00.431 "seek_data": false, 00:31:00.431 "copy": true, 00:31:00.431 "nvme_iov_md": false 00:31:00.431 }, 00:31:00.431 "memory_domains": [ 00:31:00.431 { 00:31:00.431 "dma_device_id": "system", 00:31:00.431 "dma_device_type": 1 00:31:00.431 }, 00:31:00.431 { 00:31:00.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:00.431 "dma_device_type": 2 00:31:00.431 } 00:31:00.431 ], 00:31:00.431 "driver_specific": {} 00:31:00.431 } 00:31:00.431 ] 00:31:00.688 18:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:00.688 18:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:31:00.688 18:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:00.688 18:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:00.688 18:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:00.688 18:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:00.688 18:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:00.688 18:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:00.688 18:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:00.688 18:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:00.688 18:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:00.688 18:58:01 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:00.688 18:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:00.688 18:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:00.688 18:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:00.688 18:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:00.688 "name": "Existed_Raid", 00:31:00.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:00.688 "strip_size_kb": 64, 00:31:00.688 "state": "configuring", 00:31:00.688 "raid_level": "raid5f", 00:31:00.688 "superblock": false, 00:31:00.688 "num_base_bdevs": 3, 00:31:00.688 "num_base_bdevs_discovered": 2, 00:31:00.688 "num_base_bdevs_operational": 3, 00:31:00.688 "base_bdevs_list": [ 00:31:00.688 { 00:31:00.688 "name": "BaseBdev1", 00:31:00.688 "uuid": "475240c0-97bf-47df-8709-89d3c74291ff", 00:31:00.688 "is_configured": true, 00:31:00.688 "data_offset": 0, 00:31:00.688 "data_size": 65536 00:31:00.688 }, 00:31:00.688 { 00:31:00.688 "name": "BaseBdev2", 00:31:00.688 "uuid": "e3dbce24-90cf-486f-a662-7f397c17c859", 00:31:00.688 "is_configured": true, 00:31:00.688 "data_offset": 0, 00:31:00.688 "data_size": 65536 00:31:00.688 }, 00:31:00.688 { 00:31:00.688 "name": "BaseBdev3", 00:31:00.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:00.688 "is_configured": false, 00:31:00.688 "data_offset": 0, 00:31:00.688 "data_size": 0 00:31:00.688 } 00:31:00.688 ] 00:31:00.688 }' 00:31:00.688 18:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:00.688 18:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.253 18:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:31:01.511 [2024-07-25 18:58:02.042535] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:01.511 [2024-07-25 18:58:02.042774] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:31:01.511 [2024-07-25 18:58:02.042820] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:31:01.511 [2024-07-25 18:58:02.042999] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:31:01.511 [2024-07-25 18:58:02.047417] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:31:01.511 [2024-07-25 18:58:02.047563] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:31:01.511 [2024-07-25 18:58:02.048008] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:01.511 BaseBdev3 00:31:01.511 18:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:31:01.511 18:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:31:01.511 18:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:01.511 18:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:01.511 18:58:02 bdev_raid.raid5f_state_function_test -- 
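Two of the three base bdevs are attached at this point, so the volume above still reports "configuring" with 2 of 3 members discovered; the bdev_malloc_create just issued supplies the missing BaseBdev3, and the next dump shows the raid5f volume completing assembly and going "online". A small hedged snippet for watching those counters by hand (again a stand-in for the harness helper, with illustrative variable names):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  state=$(jq -r '.state' <<< "$info")
  discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
  operational=$(jq -r '.num_base_bdevs_operational' <<< "$info")
  # Once discovered catches up with operational (3 of 3), state flips to "online".
  echo "state=$state members=$discovered/$operational"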
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:01.511 18:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:01.511 18:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:01.769 18:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:02.028 [ 00:31:02.028 { 00:31:02.028 "name": "BaseBdev3", 00:31:02.028 "aliases": [ 00:31:02.028 "233db761-c64d-4ac4-8a0f-d91a589ec35e" 00:31:02.028 ], 00:31:02.028 "product_name": "Malloc disk", 00:31:02.028 "block_size": 512, 00:31:02.028 "num_blocks": 65536, 00:31:02.028 "uuid": "233db761-c64d-4ac4-8a0f-d91a589ec35e", 00:31:02.028 "assigned_rate_limits": { 00:31:02.028 "rw_ios_per_sec": 0, 00:31:02.028 "rw_mbytes_per_sec": 0, 00:31:02.028 "r_mbytes_per_sec": 0, 00:31:02.028 "w_mbytes_per_sec": 0 00:31:02.028 }, 00:31:02.028 "claimed": true, 00:31:02.028 "claim_type": "exclusive_write", 00:31:02.028 "zoned": false, 00:31:02.028 "supported_io_types": { 00:31:02.028 "read": true, 00:31:02.028 "write": true, 00:31:02.028 "unmap": true, 00:31:02.028 "flush": true, 00:31:02.028 "reset": true, 00:31:02.028 "nvme_admin": false, 00:31:02.028 "nvme_io": false, 00:31:02.028 "nvme_io_md": false, 00:31:02.028 "write_zeroes": true, 00:31:02.028 "zcopy": true, 00:31:02.028 "get_zone_info": false, 00:31:02.028 "zone_management": false, 00:31:02.028 "zone_append": false, 00:31:02.028 "compare": false, 00:31:02.028 "compare_and_write": false, 00:31:02.028 "abort": true, 00:31:02.028 "seek_hole": false, 00:31:02.028 "seek_data": false, 00:31:02.028 "copy": true, 00:31:02.028 "nvme_iov_md": false 00:31:02.028 }, 00:31:02.028 "memory_domains": [ 00:31:02.028 { 00:31:02.028 "dma_device_id": "system", 00:31:02.028 "dma_device_type": 1 00:31:02.028 }, 00:31:02.028 { 00:31:02.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:02.028 "dma_device_type": 2 00:31:02.028 } 00:31:02.028 ], 00:31:02.028 "driver_specific": {} 00:31:02.028 } 00:31:02.028 ] 00:31:02.028 18:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:02.028 18:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:31:02.028 18:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:02.028 18:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:31:02.028 18:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:02.028 18:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:02.028 18:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:02.028 18:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:02.028 18:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:02.028 18:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:02.028 18:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:02.028 18:58:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:02.028 18:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:02.028 18:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:02.028 18:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:02.286 18:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:02.286 "name": "Existed_Raid", 00:31:02.286 "uuid": "9595ca7d-c2bf-496d-b518-f2fb904dc399", 00:31:02.286 "strip_size_kb": 64, 00:31:02.286 "state": "online", 00:31:02.286 "raid_level": "raid5f", 00:31:02.286 "superblock": false, 00:31:02.286 "num_base_bdevs": 3, 00:31:02.286 "num_base_bdevs_discovered": 3, 00:31:02.286 "num_base_bdevs_operational": 3, 00:31:02.286 "base_bdevs_list": [ 00:31:02.286 { 00:31:02.286 "name": "BaseBdev1", 00:31:02.286 "uuid": "475240c0-97bf-47df-8709-89d3c74291ff", 00:31:02.286 "is_configured": true, 00:31:02.286 "data_offset": 0, 00:31:02.286 "data_size": 65536 00:31:02.286 }, 00:31:02.286 { 00:31:02.286 "name": "BaseBdev2", 00:31:02.286 "uuid": "e3dbce24-90cf-486f-a662-7f397c17c859", 00:31:02.286 "is_configured": true, 00:31:02.286 "data_offset": 0, 00:31:02.286 "data_size": 65536 00:31:02.286 }, 00:31:02.286 { 00:31:02.286 "name": "BaseBdev3", 00:31:02.286 "uuid": "233db761-c64d-4ac4-8a0f-d91a589ec35e", 00:31:02.286 "is_configured": true, 00:31:02.286 "data_offset": 0, 00:31:02.286 "data_size": 65536 00:31:02.286 } 00:31:02.286 ] 00:31:02.286 }' 00:31:02.286 18:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:02.286 18:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.852 18:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:31:02.852 18:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:31:02.852 18:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:02.852 18:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:02.852 18:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:02.852 18:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:31:02.852 18:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:02.852 18:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:31:03.109 [2024-07-25 18:58:03.486802] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:03.109 18:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:03.109 "name": "Existed_Raid", 00:31:03.109 "aliases": [ 00:31:03.109 "9595ca7d-c2bf-496d-b518-f2fb904dc399" 00:31:03.109 ], 00:31:03.109 "product_name": "Raid Volume", 00:31:03.109 "block_size": 512, 00:31:03.109 "num_blocks": 131072, 00:31:03.109 "uuid": "9595ca7d-c2bf-496d-b518-f2fb904dc399", 00:31:03.109 "assigned_rate_limits": { 00:31:03.109 "rw_ios_per_sec": 0, 00:31:03.109 "rw_mbytes_per_sec": 0, 00:31:03.109 "r_mbytes_per_sec": 0, 00:31:03.109 
"w_mbytes_per_sec": 0 00:31:03.109 }, 00:31:03.109 "claimed": false, 00:31:03.109 "zoned": false, 00:31:03.109 "supported_io_types": { 00:31:03.109 "read": true, 00:31:03.109 "write": true, 00:31:03.109 "unmap": false, 00:31:03.109 "flush": false, 00:31:03.109 "reset": true, 00:31:03.109 "nvme_admin": false, 00:31:03.109 "nvme_io": false, 00:31:03.109 "nvme_io_md": false, 00:31:03.109 "write_zeroes": true, 00:31:03.109 "zcopy": false, 00:31:03.109 "get_zone_info": false, 00:31:03.109 "zone_management": false, 00:31:03.109 "zone_append": false, 00:31:03.109 "compare": false, 00:31:03.109 "compare_and_write": false, 00:31:03.109 "abort": false, 00:31:03.109 "seek_hole": false, 00:31:03.109 "seek_data": false, 00:31:03.109 "copy": false, 00:31:03.109 "nvme_iov_md": false 00:31:03.109 }, 00:31:03.109 "driver_specific": { 00:31:03.109 "raid": { 00:31:03.109 "uuid": "9595ca7d-c2bf-496d-b518-f2fb904dc399", 00:31:03.109 "strip_size_kb": 64, 00:31:03.109 "state": "online", 00:31:03.109 "raid_level": "raid5f", 00:31:03.109 "superblock": false, 00:31:03.109 "num_base_bdevs": 3, 00:31:03.109 "num_base_bdevs_discovered": 3, 00:31:03.109 "num_base_bdevs_operational": 3, 00:31:03.109 "base_bdevs_list": [ 00:31:03.109 { 00:31:03.109 "name": "BaseBdev1", 00:31:03.109 "uuid": "475240c0-97bf-47df-8709-89d3c74291ff", 00:31:03.109 "is_configured": true, 00:31:03.109 "data_offset": 0, 00:31:03.109 "data_size": 65536 00:31:03.109 }, 00:31:03.109 { 00:31:03.109 "name": "BaseBdev2", 00:31:03.109 "uuid": "e3dbce24-90cf-486f-a662-7f397c17c859", 00:31:03.109 "is_configured": true, 00:31:03.109 "data_offset": 0, 00:31:03.109 "data_size": 65536 00:31:03.109 }, 00:31:03.109 { 00:31:03.109 "name": "BaseBdev3", 00:31:03.109 "uuid": "233db761-c64d-4ac4-8a0f-d91a589ec35e", 00:31:03.109 "is_configured": true, 00:31:03.109 "data_offset": 0, 00:31:03.109 "data_size": 65536 00:31:03.109 } 00:31:03.109 ] 00:31:03.109 } 00:31:03.109 } 00:31:03.109 }' 00:31:03.109 18:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:03.109 18:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:31:03.109 BaseBdev2 00:31:03.109 BaseBdev3' 00:31:03.109 18:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:03.109 18:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:03.109 18:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:31:03.366 18:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:03.366 "name": "BaseBdev1", 00:31:03.366 "aliases": [ 00:31:03.366 "475240c0-97bf-47df-8709-89d3c74291ff" 00:31:03.366 ], 00:31:03.366 "product_name": "Malloc disk", 00:31:03.366 "block_size": 512, 00:31:03.366 "num_blocks": 65536, 00:31:03.366 "uuid": "475240c0-97bf-47df-8709-89d3c74291ff", 00:31:03.366 "assigned_rate_limits": { 00:31:03.366 "rw_ios_per_sec": 0, 00:31:03.366 "rw_mbytes_per_sec": 0, 00:31:03.366 "r_mbytes_per_sec": 0, 00:31:03.366 "w_mbytes_per_sec": 0 00:31:03.366 }, 00:31:03.366 "claimed": true, 00:31:03.366 "claim_type": "exclusive_write", 00:31:03.366 "zoned": false, 00:31:03.366 "supported_io_types": { 00:31:03.366 "read": true, 00:31:03.366 "write": true, 00:31:03.366 "unmap": true, 00:31:03.366 "flush": true, 00:31:03.366 
"reset": true, 00:31:03.366 "nvme_admin": false, 00:31:03.366 "nvme_io": false, 00:31:03.366 "nvme_io_md": false, 00:31:03.366 "write_zeroes": true, 00:31:03.366 "zcopy": true, 00:31:03.366 "get_zone_info": false, 00:31:03.366 "zone_management": false, 00:31:03.366 "zone_append": false, 00:31:03.366 "compare": false, 00:31:03.366 "compare_and_write": false, 00:31:03.366 "abort": true, 00:31:03.366 "seek_hole": false, 00:31:03.366 "seek_data": false, 00:31:03.366 "copy": true, 00:31:03.366 "nvme_iov_md": false 00:31:03.366 }, 00:31:03.366 "memory_domains": [ 00:31:03.366 { 00:31:03.366 "dma_device_id": "system", 00:31:03.366 "dma_device_type": 1 00:31:03.366 }, 00:31:03.366 { 00:31:03.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:03.366 "dma_device_type": 2 00:31:03.366 } 00:31:03.366 ], 00:31:03.366 "driver_specific": {} 00:31:03.366 }' 00:31:03.366 18:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:03.366 18:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:03.366 18:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:03.366 18:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:03.366 18:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:03.623 18:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:03.623 18:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:03.623 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:03.623 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:03.623 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:03.623 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:03.623 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:03.623 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:03.623 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:31:03.623 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:03.881 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:03.881 "name": "BaseBdev2", 00:31:03.881 "aliases": [ 00:31:03.881 "e3dbce24-90cf-486f-a662-7f397c17c859" 00:31:03.881 ], 00:31:03.881 "product_name": "Malloc disk", 00:31:03.881 "block_size": 512, 00:31:03.881 "num_blocks": 65536, 00:31:03.881 "uuid": "e3dbce24-90cf-486f-a662-7f397c17c859", 00:31:03.881 "assigned_rate_limits": { 00:31:03.881 "rw_ios_per_sec": 0, 00:31:03.881 "rw_mbytes_per_sec": 0, 00:31:03.881 "r_mbytes_per_sec": 0, 00:31:03.881 "w_mbytes_per_sec": 0 00:31:03.881 }, 00:31:03.881 "claimed": true, 00:31:03.881 "claim_type": "exclusive_write", 00:31:03.881 "zoned": false, 00:31:03.881 "supported_io_types": { 00:31:03.881 "read": true, 00:31:03.881 "write": true, 00:31:03.881 "unmap": true, 00:31:03.881 "flush": true, 00:31:03.881 "reset": true, 00:31:03.881 "nvme_admin": false, 00:31:03.881 "nvme_io": false, 00:31:03.881 "nvme_io_md": false, 00:31:03.881 "write_zeroes": true, 00:31:03.881 
"zcopy": true, 00:31:03.881 "get_zone_info": false, 00:31:03.881 "zone_management": false, 00:31:03.881 "zone_append": false, 00:31:03.881 "compare": false, 00:31:03.881 "compare_and_write": false, 00:31:03.881 "abort": true, 00:31:03.881 "seek_hole": false, 00:31:03.881 "seek_data": false, 00:31:03.881 "copy": true, 00:31:03.881 "nvme_iov_md": false 00:31:03.881 }, 00:31:03.881 "memory_domains": [ 00:31:03.881 { 00:31:03.881 "dma_device_id": "system", 00:31:03.881 "dma_device_type": 1 00:31:03.881 }, 00:31:03.881 { 00:31:03.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:03.881 "dma_device_type": 2 00:31:03.881 } 00:31:03.881 ], 00:31:03.881 "driver_specific": {} 00:31:03.881 }' 00:31:03.882 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:03.882 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:04.139 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:04.139 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:04.139 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:04.139 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:04.139 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:04.139 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:04.139 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:04.139 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:04.139 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:04.397 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:04.397 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:04.397 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:31:04.397 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:04.655 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:04.655 "name": "BaseBdev3", 00:31:04.655 "aliases": [ 00:31:04.655 "233db761-c64d-4ac4-8a0f-d91a589ec35e" 00:31:04.655 ], 00:31:04.655 "product_name": "Malloc disk", 00:31:04.655 "block_size": 512, 00:31:04.655 "num_blocks": 65536, 00:31:04.655 "uuid": "233db761-c64d-4ac4-8a0f-d91a589ec35e", 00:31:04.655 "assigned_rate_limits": { 00:31:04.655 "rw_ios_per_sec": 0, 00:31:04.655 "rw_mbytes_per_sec": 0, 00:31:04.655 "r_mbytes_per_sec": 0, 00:31:04.655 "w_mbytes_per_sec": 0 00:31:04.655 }, 00:31:04.655 "claimed": true, 00:31:04.655 "claim_type": "exclusive_write", 00:31:04.655 "zoned": false, 00:31:04.655 "supported_io_types": { 00:31:04.655 "read": true, 00:31:04.655 "write": true, 00:31:04.655 "unmap": true, 00:31:04.655 "flush": true, 00:31:04.655 "reset": true, 00:31:04.655 "nvme_admin": false, 00:31:04.655 "nvme_io": false, 00:31:04.655 "nvme_io_md": false, 00:31:04.655 "write_zeroes": true, 00:31:04.655 "zcopy": true, 00:31:04.655 "get_zone_info": false, 00:31:04.655 "zone_management": false, 00:31:04.655 "zone_append": false, 00:31:04.655 "compare": false, 
00:31:04.655 "compare_and_write": false, 00:31:04.655 "abort": true, 00:31:04.655 "seek_hole": false, 00:31:04.655 "seek_data": false, 00:31:04.655 "copy": true, 00:31:04.655 "nvme_iov_md": false 00:31:04.655 }, 00:31:04.655 "memory_domains": [ 00:31:04.655 { 00:31:04.655 "dma_device_id": "system", 00:31:04.655 "dma_device_type": 1 00:31:04.655 }, 00:31:04.655 { 00:31:04.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:04.655 "dma_device_type": 2 00:31:04.655 } 00:31:04.655 ], 00:31:04.655 "driver_specific": {} 00:31:04.655 }' 00:31:04.655 18:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:04.655 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:04.655 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:04.655 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:04.655 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:04.655 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:04.655 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:04.655 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:04.655 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:04.655 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:04.913 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:04.913 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:04.913 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:31:04.913 [2024-07-25 18:58:05.467060] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:05.172 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:31:05.172 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:31:05.172 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:31:05.172 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:31:05.172 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:31:05.172 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:31:05.172 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:05.172 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:05.172 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:05.172 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:05.172 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:05.172 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:05.172 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:31:05.172 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:05.172 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:05.172 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:05.172 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:05.430 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:05.430 "name": "Existed_Raid", 00:31:05.430 "uuid": "9595ca7d-c2bf-496d-b518-f2fb904dc399", 00:31:05.430 "strip_size_kb": 64, 00:31:05.430 "state": "online", 00:31:05.430 "raid_level": "raid5f", 00:31:05.430 "superblock": false, 00:31:05.430 "num_base_bdevs": 3, 00:31:05.430 "num_base_bdevs_discovered": 2, 00:31:05.430 "num_base_bdevs_operational": 2, 00:31:05.430 "base_bdevs_list": [ 00:31:05.430 { 00:31:05.430 "name": null, 00:31:05.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:05.430 "is_configured": false, 00:31:05.430 "data_offset": 0, 00:31:05.430 "data_size": 65536 00:31:05.430 }, 00:31:05.430 { 00:31:05.430 "name": "BaseBdev2", 00:31:05.430 "uuid": "e3dbce24-90cf-486f-a662-7f397c17c859", 00:31:05.430 "is_configured": true, 00:31:05.430 "data_offset": 0, 00:31:05.430 "data_size": 65536 00:31:05.430 }, 00:31:05.430 { 00:31:05.430 "name": "BaseBdev3", 00:31:05.430 "uuid": "233db761-c64d-4ac4-8a0f-d91a589ec35e", 00:31:05.430 "is_configured": true, 00:31:05.430 "data_offset": 0, 00:31:05.430 "data_size": 65536 00:31:05.430 } 00:31:05.430 ] 00:31:05.430 }' 00:31:05.430 18:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:05.430 18:58:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.996 18:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:31:05.996 18:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:05.996 18:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:05.996 18:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:31:06.255 18:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:31:06.255 18:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:06.255 18:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:31:06.255 [2024-07-25 18:58:06.829426] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:06.255 [2024-07-25 18:58:06.829688] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:06.513 [2024-07-25 18:58:06.912842] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:06.514 18:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:31:06.514 18:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:06.514 18:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:06.514 18:58:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:31:06.772 18:58:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:31:06.772 18:58:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:06.772 18:58:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:31:06.772 [2024-07-25 18:58:07.262457] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:06.772 [2024-07-25 18:58:07.262998] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:31:07.029 18:58:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:31:07.029 18:58:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:07.029 18:58:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:07.029 18:58:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:31:07.288 18:58:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:31:07.288 18:58:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:31:07.288 18:58:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:31:07.288 18:58:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:31:07.288 18:58:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:31:07.288 18:58:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:31:07.288 BaseBdev2 00:31:07.288 18:58:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:31:07.288 18:58:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:31:07.288 18:58:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:07.288 18:58:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:07.288 18:58:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:07.288 18:58:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:07.288 18:58:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:07.547 18:58:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:07.806 [ 00:31:07.806 { 00:31:07.806 "name": "BaseBdev2", 00:31:07.806 "aliases": [ 00:31:07.806 "5eefab51-2b97-4b04-8ec0-5c86d47ae903" 00:31:07.806 ], 00:31:07.806 "product_name": "Malloc disk", 00:31:07.806 "block_size": 512, 00:31:07.806 "num_blocks": 65536, 00:31:07.806 "uuid": 
"5eefab51-2b97-4b04-8ec0-5c86d47ae903", 00:31:07.806 "assigned_rate_limits": { 00:31:07.806 "rw_ios_per_sec": 0, 00:31:07.806 "rw_mbytes_per_sec": 0, 00:31:07.806 "r_mbytes_per_sec": 0, 00:31:07.806 "w_mbytes_per_sec": 0 00:31:07.806 }, 00:31:07.806 "claimed": false, 00:31:07.806 "zoned": false, 00:31:07.806 "supported_io_types": { 00:31:07.806 "read": true, 00:31:07.806 "write": true, 00:31:07.806 "unmap": true, 00:31:07.806 "flush": true, 00:31:07.806 "reset": true, 00:31:07.806 "nvme_admin": false, 00:31:07.806 "nvme_io": false, 00:31:07.806 "nvme_io_md": false, 00:31:07.806 "write_zeroes": true, 00:31:07.806 "zcopy": true, 00:31:07.806 "get_zone_info": false, 00:31:07.806 "zone_management": false, 00:31:07.806 "zone_append": false, 00:31:07.806 "compare": false, 00:31:07.806 "compare_and_write": false, 00:31:07.806 "abort": true, 00:31:07.806 "seek_hole": false, 00:31:07.806 "seek_data": false, 00:31:07.806 "copy": true, 00:31:07.806 "nvme_iov_md": false 00:31:07.806 }, 00:31:07.806 "memory_domains": [ 00:31:07.806 { 00:31:07.806 "dma_device_id": "system", 00:31:07.806 "dma_device_type": 1 00:31:07.806 }, 00:31:07.806 { 00:31:07.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:07.806 "dma_device_type": 2 00:31:07.806 } 00:31:07.806 ], 00:31:07.806 "driver_specific": {} 00:31:07.806 } 00:31:07.806 ] 00:31:07.806 18:58:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:07.806 18:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:31:07.806 18:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:31:07.806 18:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:31:07.806 BaseBdev3 00:31:08.065 18:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:31:08.065 18:58:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:31:08.065 18:58:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:08.065 18:58:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:08.066 18:58:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:08.066 18:58:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:08.066 18:58:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:08.066 18:58:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:08.324 [ 00:31:08.324 { 00:31:08.324 "name": "BaseBdev3", 00:31:08.324 "aliases": [ 00:31:08.324 "a7815d42-5634-456d-b397-7eb22f9fd1f8" 00:31:08.324 ], 00:31:08.324 "product_name": "Malloc disk", 00:31:08.324 "block_size": 512, 00:31:08.324 "num_blocks": 65536, 00:31:08.324 "uuid": "a7815d42-5634-456d-b397-7eb22f9fd1f8", 00:31:08.324 "assigned_rate_limits": { 00:31:08.324 "rw_ios_per_sec": 0, 00:31:08.324 "rw_mbytes_per_sec": 0, 00:31:08.324 "r_mbytes_per_sec": 0, 00:31:08.324 "w_mbytes_per_sec": 0 00:31:08.324 }, 00:31:08.324 "claimed": false, 00:31:08.324 "zoned": false, 00:31:08.324 
"supported_io_types": { 00:31:08.324 "read": true, 00:31:08.324 "write": true, 00:31:08.324 "unmap": true, 00:31:08.324 "flush": true, 00:31:08.324 "reset": true, 00:31:08.324 "nvme_admin": false, 00:31:08.324 "nvme_io": false, 00:31:08.324 "nvme_io_md": false, 00:31:08.324 "write_zeroes": true, 00:31:08.324 "zcopy": true, 00:31:08.324 "get_zone_info": false, 00:31:08.325 "zone_management": false, 00:31:08.325 "zone_append": false, 00:31:08.325 "compare": false, 00:31:08.325 "compare_and_write": false, 00:31:08.325 "abort": true, 00:31:08.325 "seek_hole": false, 00:31:08.325 "seek_data": false, 00:31:08.325 "copy": true, 00:31:08.325 "nvme_iov_md": false 00:31:08.325 }, 00:31:08.325 "memory_domains": [ 00:31:08.325 { 00:31:08.325 "dma_device_id": "system", 00:31:08.325 "dma_device_type": 1 00:31:08.325 }, 00:31:08.325 { 00:31:08.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:08.325 "dma_device_type": 2 00:31:08.325 } 00:31:08.325 ], 00:31:08.325 "driver_specific": {} 00:31:08.325 } 00:31:08.325 ] 00:31:08.325 18:58:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:08.325 18:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:31:08.325 18:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:31:08.325 18:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:31:08.584 [2024-07-25 18:58:08.918199] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:08.584 [2024-07-25 18:58:08.918427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:08.584 [2024-07-25 18:58:08.918609] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:08.584 [2024-07-25 18:58:08.920864] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:08.584 18:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:08.584 18:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:08.584 18:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:08.584 18:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:08.584 18:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:08.584 18:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:08.584 18:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:08.584 18:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:08.584 18:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:08.584 18:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:08.584 18:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:08.584 18:58:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:31:08.584 18:58:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:08.584 "name": "Existed_Raid", 00:31:08.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:08.584 "strip_size_kb": 64, 00:31:08.584 "state": "configuring", 00:31:08.584 "raid_level": "raid5f", 00:31:08.584 "superblock": false, 00:31:08.584 "num_base_bdevs": 3, 00:31:08.584 "num_base_bdevs_discovered": 2, 00:31:08.584 "num_base_bdevs_operational": 3, 00:31:08.584 "base_bdevs_list": [ 00:31:08.584 { 00:31:08.584 "name": "BaseBdev1", 00:31:08.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:08.584 "is_configured": false, 00:31:08.584 "data_offset": 0, 00:31:08.584 "data_size": 0 00:31:08.584 }, 00:31:08.584 { 00:31:08.584 "name": "BaseBdev2", 00:31:08.584 "uuid": "5eefab51-2b97-4b04-8ec0-5c86d47ae903", 00:31:08.584 "is_configured": true, 00:31:08.584 "data_offset": 0, 00:31:08.584 "data_size": 65536 00:31:08.584 }, 00:31:08.584 { 00:31:08.584 "name": "BaseBdev3", 00:31:08.584 "uuid": "a7815d42-5634-456d-b397-7eb22f9fd1f8", 00:31:08.584 "is_configured": true, 00:31:08.584 "data_offset": 0, 00:31:08.584 "data_size": 65536 00:31:08.584 } 00:31:08.584 ] 00:31:08.584 }' 00:31:08.584 18:58:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:08.584 18:58:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:09.153 18:58:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:31:09.412 [2024-07-25 18:58:09.894367] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:09.412 18:58:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:09.412 18:58:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:09.412 18:58:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:09.412 18:58:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:09.412 18:58:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:09.412 18:58:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:09.412 18:58:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:09.412 18:58:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:09.412 18:58:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:09.412 18:58:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:09.412 18:58:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:09.412 18:58:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:09.671 18:58:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:09.671 "name": "Existed_Raid", 00:31:09.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:09.671 "strip_size_kb": 64, 00:31:09.671 "state": "configuring", 00:31:09.671 
"raid_level": "raid5f", 00:31:09.671 "superblock": false, 00:31:09.671 "num_base_bdevs": 3, 00:31:09.671 "num_base_bdevs_discovered": 1, 00:31:09.671 "num_base_bdevs_operational": 3, 00:31:09.671 "base_bdevs_list": [ 00:31:09.671 { 00:31:09.671 "name": "BaseBdev1", 00:31:09.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:09.671 "is_configured": false, 00:31:09.671 "data_offset": 0, 00:31:09.671 "data_size": 0 00:31:09.671 }, 00:31:09.671 { 00:31:09.671 "name": null, 00:31:09.671 "uuid": "5eefab51-2b97-4b04-8ec0-5c86d47ae903", 00:31:09.671 "is_configured": false, 00:31:09.671 "data_offset": 0, 00:31:09.671 "data_size": 65536 00:31:09.671 }, 00:31:09.671 { 00:31:09.671 "name": "BaseBdev3", 00:31:09.671 "uuid": "a7815d42-5634-456d-b397-7eb22f9fd1f8", 00:31:09.671 "is_configured": true, 00:31:09.671 "data_offset": 0, 00:31:09.671 "data_size": 65536 00:31:09.671 } 00:31:09.671 ] 00:31:09.671 }' 00:31:09.671 18:58:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:09.671 18:58:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.239 18:58:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:10.239 18:58:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:10.498 18:58:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:31:10.498 18:58:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:31:10.756 [2024-07-25 18:58:11.119362] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:10.756 BaseBdev1 00:31:10.756 18:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:31:10.756 18:58:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:31:10.756 18:58:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:10.756 18:58:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:10.756 18:58:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:10.756 18:58:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:10.756 18:58:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:10.756 18:58:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:11.016 [ 00:31:11.016 { 00:31:11.016 "name": "BaseBdev1", 00:31:11.016 "aliases": [ 00:31:11.016 "2b5d8476-306f-47b2-9437-b33fcd0589da" 00:31:11.016 ], 00:31:11.016 "product_name": "Malloc disk", 00:31:11.016 "block_size": 512, 00:31:11.016 "num_blocks": 65536, 00:31:11.016 "uuid": "2b5d8476-306f-47b2-9437-b33fcd0589da", 00:31:11.016 "assigned_rate_limits": { 00:31:11.016 "rw_ios_per_sec": 0, 00:31:11.016 "rw_mbytes_per_sec": 0, 00:31:11.016 "r_mbytes_per_sec": 0, 00:31:11.016 "w_mbytes_per_sec": 0 00:31:11.016 }, 00:31:11.016 "claimed": true, 00:31:11.016 "claim_type": 
"exclusive_write", 00:31:11.016 "zoned": false, 00:31:11.016 "supported_io_types": { 00:31:11.016 "read": true, 00:31:11.016 "write": true, 00:31:11.016 "unmap": true, 00:31:11.016 "flush": true, 00:31:11.016 "reset": true, 00:31:11.016 "nvme_admin": false, 00:31:11.016 "nvme_io": false, 00:31:11.016 "nvme_io_md": false, 00:31:11.016 "write_zeroes": true, 00:31:11.016 "zcopy": true, 00:31:11.016 "get_zone_info": false, 00:31:11.016 "zone_management": false, 00:31:11.016 "zone_append": false, 00:31:11.016 "compare": false, 00:31:11.016 "compare_and_write": false, 00:31:11.016 "abort": true, 00:31:11.016 "seek_hole": false, 00:31:11.016 "seek_data": false, 00:31:11.016 "copy": true, 00:31:11.016 "nvme_iov_md": false 00:31:11.016 }, 00:31:11.016 "memory_domains": [ 00:31:11.016 { 00:31:11.016 "dma_device_id": "system", 00:31:11.016 "dma_device_type": 1 00:31:11.016 }, 00:31:11.016 { 00:31:11.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:11.016 "dma_device_type": 2 00:31:11.016 } 00:31:11.016 ], 00:31:11.016 "driver_specific": {} 00:31:11.016 } 00:31:11.016 ] 00:31:11.016 18:58:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:11.016 18:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:11.016 18:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:11.016 18:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:11.016 18:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:11.016 18:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:11.016 18:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:11.016 18:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:11.016 18:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:11.016 18:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:11.016 18:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:11.016 18:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:11.016 18:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:11.275 18:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:11.275 "name": "Existed_Raid", 00:31:11.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:11.275 "strip_size_kb": 64, 00:31:11.275 "state": "configuring", 00:31:11.275 "raid_level": "raid5f", 00:31:11.275 "superblock": false, 00:31:11.275 "num_base_bdevs": 3, 00:31:11.275 "num_base_bdevs_discovered": 2, 00:31:11.275 "num_base_bdevs_operational": 3, 00:31:11.275 "base_bdevs_list": [ 00:31:11.275 { 00:31:11.275 "name": "BaseBdev1", 00:31:11.275 "uuid": "2b5d8476-306f-47b2-9437-b33fcd0589da", 00:31:11.275 "is_configured": true, 00:31:11.275 "data_offset": 0, 00:31:11.275 "data_size": 65536 00:31:11.275 }, 00:31:11.275 { 00:31:11.275 "name": null, 00:31:11.275 "uuid": "5eefab51-2b97-4b04-8ec0-5c86d47ae903", 00:31:11.275 "is_configured": false, 
00:31:11.275 "data_offset": 0, 00:31:11.275 "data_size": 65536 00:31:11.275 }, 00:31:11.275 { 00:31:11.275 "name": "BaseBdev3", 00:31:11.275 "uuid": "a7815d42-5634-456d-b397-7eb22f9fd1f8", 00:31:11.275 "is_configured": true, 00:31:11.275 "data_offset": 0, 00:31:11.275 "data_size": 65536 00:31:11.275 } 00:31:11.275 ] 00:31:11.275 }' 00:31:11.275 18:58:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:11.275 18:58:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.843 18:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:11.843 18:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:12.104 18:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:31:12.104 18:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:31:12.104 [2024-07-25 18:58:12.632354] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:12.104 18:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:12.104 18:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:12.104 18:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:12.105 18:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:12.105 18:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:12.105 18:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:12.105 18:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:12.105 18:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:12.105 18:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:12.105 18:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:12.105 18:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:12.105 18:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:12.394 18:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:12.394 "name": "Existed_Raid", 00:31:12.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:12.394 "strip_size_kb": 64, 00:31:12.394 "state": "configuring", 00:31:12.394 "raid_level": "raid5f", 00:31:12.394 "superblock": false, 00:31:12.394 "num_base_bdevs": 3, 00:31:12.394 "num_base_bdevs_discovered": 1, 00:31:12.394 "num_base_bdevs_operational": 3, 00:31:12.394 "base_bdevs_list": [ 00:31:12.394 { 00:31:12.394 "name": "BaseBdev1", 00:31:12.394 "uuid": "2b5d8476-306f-47b2-9437-b33fcd0589da", 00:31:12.394 "is_configured": true, 00:31:12.394 "data_offset": 0, 00:31:12.394 "data_size": 65536 00:31:12.394 }, 00:31:12.394 { 00:31:12.394 "name": null, 
00:31:12.394 "uuid": "5eefab51-2b97-4b04-8ec0-5c86d47ae903", 00:31:12.394 "is_configured": false, 00:31:12.394 "data_offset": 0, 00:31:12.394 "data_size": 65536 00:31:12.394 }, 00:31:12.394 { 00:31:12.394 "name": null, 00:31:12.394 "uuid": "a7815d42-5634-456d-b397-7eb22f9fd1f8", 00:31:12.394 "is_configured": false, 00:31:12.394 "data_offset": 0, 00:31:12.394 "data_size": 65536 00:31:12.394 } 00:31:12.394 ] 00:31:12.394 }' 00:31:12.394 18:58:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:12.394 18:58:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:12.970 18:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:12.970 18:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:13.229 18:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:31:13.229 18:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:31:13.487 [2024-07-25 18:58:13.988616] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:13.487 18:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:13.487 18:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:13.487 18:58:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:13.487 18:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:13.487 18:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:13.487 18:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:13.487 18:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:13.487 18:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:13.487 18:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:13.487 18:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:13.487 18:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:13.487 18:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:13.746 18:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:13.746 "name": "Existed_Raid", 00:31:13.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:13.746 "strip_size_kb": 64, 00:31:13.746 "state": "configuring", 00:31:13.746 "raid_level": "raid5f", 00:31:13.746 "superblock": false, 00:31:13.746 "num_base_bdevs": 3, 00:31:13.746 "num_base_bdevs_discovered": 2, 00:31:13.746 "num_base_bdevs_operational": 3, 00:31:13.746 "base_bdevs_list": [ 00:31:13.746 { 00:31:13.746 "name": "BaseBdev1", 00:31:13.746 "uuid": "2b5d8476-306f-47b2-9437-b33fcd0589da", 00:31:13.746 "is_configured": true, 
00:31:13.746 "data_offset": 0, 00:31:13.746 "data_size": 65536 00:31:13.746 }, 00:31:13.746 { 00:31:13.746 "name": null, 00:31:13.746 "uuid": "5eefab51-2b97-4b04-8ec0-5c86d47ae903", 00:31:13.746 "is_configured": false, 00:31:13.746 "data_offset": 0, 00:31:13.746 "data_size": 65536 00:31:13.746 }, 00:31:13.746 { 00:31:13.746 "name": "BaseBdev3", 00:31:13.746 "uuid": "a7815d42-5634-456d-b397-7eb22f9fd1f8", 00:31:13.746 "is_configured": true, 00:31:13.746 "data_offset": 0, 00:31:13.746 "data_size": 65536 00:31:13.746 } 00:31:13.746 ] 00:31:13.746 }' 00:31:13.746 18:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:13.746 18:58:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.314 18:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:14.314 18:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:14.572 18:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:31:14.572 18:58:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:31:14.572 [2024-07-25 18:58:15.108866] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:14.831 18:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:14.831 18:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:14.831 18:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:14.831 18:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:14.831 18:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:14.831 18:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:14.831 18:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:14.831 18:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:14.831 18:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:14.831 18:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:14.831 18:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:14.831 18:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:15.089 18:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:15.089 "name": "Existed_Raid", 00:31:15.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:15.089 "strip_size_kb": 64, 00:31:15.089 "state": "configuring", 00:31:15.089 "raid_level": "raid5f", 00:31:15.089 "superblock": false, 00:31:15.089 "num_base_bdevs": 3, 00:31:15.089 "num_base_bdevs_discovered": 1, 00:31:15.089 "num_base_bdevs_operational": 3, 00:31:15.089 "base_bdevs_list": [ 00:31:15.089 { 00:31:15.089 "name": null, 00:31:15.089 "uuid": 
"2b5d8476-306f-47b2-9437-b33fcd0589da", 00:31:15.089 "is_configured": false, 00:31:15.089 "data_offset": 0, 00:31:15.089 "data_size": 65536 00:31:15.089 }, 00:31:15.089 { 00:31:15.089 "name": null, 00:31:15.089 "uuid": "5eefab51-2b97-4b04-8ec0-5c86d47ae903", 00:31:15.089 "is_configured": false, 00:31:15.089 "data_offset": 0, 00:31:15.089 "data_size": 65536 00:31:15.089 }, 00:31:15.089 { 00:31:15.089 "name": "BaseBdev3", 00:31:15.089 "uuid": "a7815d42-5634-456d-b397-7eb22f9fd1f8", 00:31:15.089 "is_configured": true, 00:31:15.089 "data_offset": 0, 00:31:15.089 "data_size": 65536 00:31:15.089 } 00:31:15.089 ] 00:31:15.089 }' 00:31:15.089 18:58:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:15.089 18:58:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.657 18:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:15.657 18:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:15.916 18:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:31:15.916 18:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:31:16.175 [2024-07-25 18:58:16.533428] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:16.175 18:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:16.175 18:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:16.175 18:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:16.175 18:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:16.175 18:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:16.175 18:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:16.175 18:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:16.175 18:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:16.175 18:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:16.175 18:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:16.175 18:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:16.175 18:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:16.434 18:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:16.434 "name": "Existed_Raid", 00:31:16.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:16.434 "strip_size_kb": 64, 00:31:16.434 "state": "configuring", 00:31:16.434 "raid_level": "raid5f", 00:31:16.434 "superblock": false, 00:31:16.434 "num_base_bdevs": 3, 00:31:16.434 "num_base_bdevs_discovered": 2, 00:31:16.434 
"num_base_bdevs_operational": 3, 00:31:16.434 "base_bdevs_list": [ 00:31:16.434 { 00:31:16.434 "name": null, 00:31:16.434 "uuid": "2b5d8476-306f-47b2-9437-b33fcd0589da", 00:31:16.434 "is_configured": false, 00:31:16.434 "data_offset": 0, 00:31:16.434 "data_size": 65536 00:31:16.434 }, 00:31:16.434 { 00:31:16.434 "name": "BaseBdev2", 00:31:16.434 "uuid": "5eefab51-2b97-4b04-8ec0-5c86d47ae903", 00:31:16.434 "is_configured": true, 00:31:16.434 "data_offset": 0, 00:31:16.434 "data_size": 65536 00:31:16.434 }, 00:31:16.434 { 00:31:16.434 "name": "BaseBdev3", 00:31:16.434 "uuid": "a7815d42-5634-456d-b397-7eb22f9fd1f8", 00:31:16.434 "is_configured": true, 00:31:16.434 "data_offset": 0, 00:31:16.434 "data_size": 65536 00:31:16.434 } 00:31:16.434 ] 00:31:16.434 }' 00:31:16.434 18:58:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:16.434 18:58:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.001 18:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:17.001 18:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:17.001 18:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:31:17.001 18:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:17.001 18:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:31:17.260 18:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 2b5d8476-306f-47b2-9437-b33fcd0589da 00:31:17.519 [2024-07-25 18:58:17.910206] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:31:17.519 [2024-07-25 18:58:17.910438] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:31:17.519 [2024-07-25 18:58:17.910479] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:31:17.519 [2024-07-25 18:58:17.910666] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:31:17.519 [2024-07-25 18:58:17.915737] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:31:17.519 [2024-07-25 18:58:17.915864] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:31:17.519 [2024-07-25 18:58:17.916163] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:17.519 NewBaseBdev 00:31:17.519 18:58:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:31:17.519 18:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:31:17.519 18:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:17.519 18:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:17.519 18:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:17.519 18:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:31:17.519 18:58:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:17.778 18:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:31:18.037 [ 00:31:18.037 { 00:31:18.037 "name": "NewBaseBdev", 00:31:18.037 "aliases": [ 00:31:18.037 "2b5d8476-306f-47b2-9437-b33fcd0589da" 00:31:18.037 ], 00:31:18.037 "product_name": "Malloc disk", 00:31:18.037 "block_size": 512, 00:31:18.037 "num_blocks": 65536, 00:31:18.037 "uuid": "2b5d8476-306f-47b2-9437-b33fcd0589da", 00:31:18.037 "assigned_rate_limits": { 00:31:18.037 "rw_ios_per_sec": 0, 00:31:18.037 "rw_mbytes_per_sec": 0, 00:31:18.037 "r_mbytes_per_sec": 0, 00:31:18.037 "w_mbytes_per_sec": 0 00:31:18.037 }, 00:31:18.037 "claimed": true, 00:31:18.037 "claim_type": "exclusive_write", 00:31:18.037 "zoned": false, 00:31:18.037 "supported_io_types": { 00:31:18.037 "read": true, 00:31:18.037 "write": true, 00:31:18.037 "unmap": true, 00:31:18.037 "flush": true, 00:31:18.037 "reset": true, 00:31:18.037 "nvme_admin": false, 00:31:18.037 "nvme_io": false, 00:31:18.037 "nvme_io_md": false, 00:31:18.037 "write_zeroes": true, 00:31:18.037 "zcopy": true, 00:31:18.037 "get_zone_info": false, 00:31:18.037 "zone_management": false, 00:31:18.037 "zone_append": false, 00:31:18.037 "compare": false, 00:31:18.037 "compare_and_write": false, 00:31:18.037 "abort": true, 00:31:18.037 "seek_hole": false, 00:31:18.037 "seek_data": false, 00:31:18.037 "copy": true, 00:31:18.037 "nvme_iov_md": false 00:31:18.037 }, 00:31:18.037 "memory_domains": [ 00:31:18.037 { 00:31:18.037 "dma_device_id": "system", 00:31:18.037 "dma_device_type": 1 00:31:18.037 }, 00:31:18.037 { 00:31:18.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:18.037 "dma_device_type": 2 00:31:18.037 } 00:31:18.037 ], 00:31:18.037 "driver_specific": {} 00:31:18.037 } 00:31:18.037 ] 00:31:18.037 18:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:18.037 18:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:31:18.037 18:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:18.037 18:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:18.037 18:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:18.037 18:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:18.037 18:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:18.037 18:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:18.037 18:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:18.037 18:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:18.037 18:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:18.037 18:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:18.037 
18:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:18.296 18:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:18.296 "name": "Existed_Raid", 00:31:18.296 "uuid": "44448d34-9d8e-49b2-91e1-c0ef05a3d82d", 00:31:18.296 "strip_size_kb": 64, 00:31:18.296 "state": "online", 00:31:18.296 "raid_level": "raid5f", 00:31:18.296 "superblock": false, 00:31:18.296 "num_base_bdevs": 3, 00:31:18.296 "num_base_bdevs_discovered": 3, 00:31:18.296 "num_base_bdevs_operational": 3, 00:31:18.296 "base_bdevs_list": [ 00:31:18.296 { 00:31:18.296 "name": "NewBaseBdev", 00:31:18.296 "uuid": "2b5d8476-306f-47b2-9437-b33fcd0589da", 00:31:18.296 "is_configured": true, 00:31:18.296 "data_offset": 0, 00:31:18.296 "data_size": 65536 00:31:18.296 }, 00:31:18.296 { 00:31:18.296 "name": "BaseBdev2", 00:31:18.296 "uuid": "5eefab51-2b97-4b04-8ec0-5c86d47ae903", 00:31:18.296 "is_configured": true, 00:31:18.296 "data_offset": 0, 00:31:18.296 "data_size": 65536 00:31:18.296 }, 00:31:18.296 { 00:31:18.296 "name": "BaseBdev3", 00:31:18.296 "uuid": "a7815d42-5634-456d-b397-7eb22f9fd1f8", 00:31:18.296 "is_configured": true, 00:31:18.296 "data_offset": 0, 00:31:18.296 "data_size": 65536 00:31:18.296 } 00:31:18.296 ] 00:31:18.296 }' 00:31:18.296 18:58:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:18.296 18:58:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:18.555 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:31:18.555 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:31:18.555 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:18.555 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:18.555 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:18.555 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:31:18.555 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:31:18.555 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:18.814 [2024-07-25 18:58:19.286393] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:18.814 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:18.814 "name": "Existed_Raid", 00:31:18.814 "aliases": [ 00:31:18.814 "44448d34-9d8e-49b2-91e1-c0ef05a3d82d" 00:31:18.814 ], 00:31:18.814 "product_name": "Raid Volume", 00:31:18.814 "block_size": 512, 00:31:18.814 "num_blocks": 131072, 00:31:18.814 "uuid": "44448d34-9d8e-49b2-91e1-c0ef05a3d82d", 00:31:18.814 "assigned_rate_limits": { 00:31:18.814 "rw_ios_per_sec": 0, 00:31:18.814 "rw_mbytes_per_sec": 0, 00:31:18.814 "r_mbytes_per_sec": 0, 00:31:18.814 "w_mbytes_per_sec": 0 00:31:18.814 }, 00:31:18.814 "claimed": false, 00:31:18.814 "zoned": false, 00:31:18.814 "supported_io_types": { 00:31:18.814 "read": true, 00:31:18.814 "write": true, 00:31:18.814 "unmap": false, 00:31:18.814 "flush": false, 00:31:18.814 "reset": true, 00:31:18.814 "nvme_admin": false, 00:31:18.814 "nvme_io": false, 00:31:18.814 
"nvme_io_md": false, 00:31:18.814 "write_zeroes": true, 00:31:18.814 "zcopy": false, 00:31:18.814 "get_zone_info": false, 00:31:18.814 "zone_management": false, 00:31:18.814 "zone_append": false, 00:31:18.814 "compare": false, 00:31:18.814 "compare_and_write": false, 00:31:18.814 "abort": false, 00:31:18.814 "seek_hole": false, 00:31:18.814 "seek_data": false, 00:31:18.814 "copy": false, 00:31:18.814 "nvme_iov_md": false 00:31:18.814 }, 00:31:18.814 "driver_specific": { 00:31:18.814 "raid": { 00:31:18.814 "uuid": "44448d34-9d8e-49b2-91e1-c0ef05a3d82d", 00:31:18.814 "strip_size_kb": 64, 00:31:18.814 "state": "online", 00:31:18.814 "raid_level": "raid5f", 00:31:18.814 "superblock": false, 00:31:18.814 "num_base_bdevs": 3, 00:31:18.814 "num_base_bdevs_discovered": 3, 00:31:18.814 "num_base_bdevs_operational": 3, 00:31:18.814 "base_bdevs_list": [ 00:31:18.814 { 00:31:18.814 "name": "NewBaseBdev", 00:31:18.814 "uuid": "2b5d8476-306f-47b2-9437-b33fcd0589da", 00:31:18.814 "is_configured": true, 00:31:18.814 "data_offset": 0, 00:31:18.814 "data_size": 65536 00:31:18.814 }, 00:31:18.814 { 00:31:18.814 "name": "BaseBdev2", 00:31:18.814 "uuid": "5eefab51-2b97-4b04-8ec0-5c86d47ae903", 00:31:18.814 "is_configured": true, 00:31:18.814 "data_offset": 0, 00:31:18.814 "data_size": 65536 00:31:18.814 }, 00:31:18.814 { 00:31:18.814 "name": "BaseBdev3", 00:31:18.814 "uuid": "a7815d42-5634-456d-b397-7eb22f9fd1f8", 00:31:18.814 "is_configured": true, 00:31:18.814 "data_offset": 0, 00:31:18.814 "data_size": 65536 00:31:18.814 } 00:31:18.814 ] 00:31:18.814 } 00:31:18.814 } 00:31:18.814 }' 00:31:18.814 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:18.814 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:31:18.814 BaseBdev2 00:31:18.814 BaseBdev3' 00:31:18.814 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:18.814 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:31:18.814 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:19.073 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:19.073 "name": "NewBaseBdev", 00:31:19.073 "aliases": [ 00:31:19.073 "2b5d8476-306f-47b2-9437-b33fcd0589da" 00:31:19.073 ], 00:31:19.073 "product_name": "Malloc disk", 00:31:19.073 "block_size": 512, 00:31:19.073 "num_blocks": 65536, 00:31:19.073 "uuid": "2b5d8476-306f-47b2-9437-b33fcd0589da", 00:31:19.073 "assigned_rate_limits": { 00:31:19.073 "rw_ios_per_sec": 0, 00:31:19.073 "rw_mbytes_per_sec": 0, 00:31:19.073 "r_mbytes_per_sec": 0, 00:31:19.073 "w_mbytes_per_sec": 0 00:31:19.073 }, 00:31:19.073 "claimed": true, 00:31:19.073 "claim_type": "exclusive_write", 00:31:19.073 "zoned": false, 00:31:19.073 "supported_io_types": { 00:31:19.073 "read": true, 00:31:19.073 "write": true, 00:31:19.073 "unmap": true, 00:31:19.073 "flush": true, 00:31:19.074 "reset": true, 00:31:19.074 "nvme_admin": false, 00:31:19.074 "nvme_io": false, 00:31:19.074 "nvme_io_md": false, 00:31:19.074 "write_zeroes": true, 00:31:19.074 "zcopy": true, 00:31:19.074 "get_zone_info": false, 00:31:19.074 "zone_management": false, 00:31:19.074 "zone_append": false, 00:31:19.074 "compare": false, 00:31:19.074 
"compare_and_write": false, 00:31:19.074 "abort": true, 00:31:19.074 "seek_hole": false, 00:31:19.074 "seek_data": false, 00:31:19.074 "copy": true, 00:31:19.074 "nvme_iov_md": false 00:31:19.074 }, 00:31:19.074 "memory_domains": [ 00:31:19.074 { 00:31:19.074 "dma_device_id": "system", 00:31:19.074 "dma_device_type": 1 00:31:19.074 }, 00:31:19.074 { 00:31:19.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:19.074 "dma_device_type": 2 00:31:19.074 } 00:31:19.074 ], 00:31:19.074 "driver_specific": {} 00:31:19.074 }' 00:31:19.074 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:19.332 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:19.332 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:19.332 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:19.332 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:19.332 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:19.332 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:19.332 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:19.332 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:19.332 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:19.591 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:19.592 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:19.592 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:19.592 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:31:19.592 18:58:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:19.592 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:19.592 "name": "BaseBdev2", 00:31:19.592 "aliases": [ 00:31:19.592 "5eefab51-2b97-4b04-8ec0-5c86d47ae903" 00:31:19.592 ], 00:31:19.592 "product_name": "Malloc disk", 00:31:19.592 "block_size": 512, 00:31:19.592 "num_blocks": 65536, 00:31:19.592 "uuid": "5eefab51-2b97-4b04-8ec0-5c86d47ae903", 00:31:19.592 "assigned_rate_limits": { 00:31:19.592 "rw_ios_per_sec": 0, 00:31:19.592 "rw_mbytes_per_sec": 0, 00:31:19.592 "r_mbytes_per_sec": 0, 00:31:19.592 "w_mbytes_per_sec": 0 00:31:19.592 }, 00:31:19.592 "claimed": true, 00:31:19.592 "claim_type": "exclusive_write", 00:31:19.592 "zoned": false, 00:31:19.592 "supported_io_types": { 00:31:19.592 "read": true, 00:31:19.592 "write": true, 00:31:19.592 "unmap": true, 00:31:19.592 "flush": true, 00:31:19.592 "reset": true, 00:31:19.592 "nvme_admin": false, 00:31:19.592 "nvme_io": false, 00:31:19.592 "nvme_io_md": false, 00:31:19.592 "write_zeroes": true, 00:31:19.592 "zcopy": true, 00:31:19.592 "get_zone_info": false, 00:31:19.592 "zone_management": false, 00:31:19.592 "zone_append": false, 00:31:19.592 "compare": false, 00:31:19.592 "compare_and_write": false, 00:31:19.592 "abort": true, 00:31:19.592 "seek_hole": false, 00:31:19.592 "seek_data": false, 00:31:19.592 "copy": true, 00:31:19.592 
"nvme_iov_md": false 00:31:19.592 }, 00:31:19.592 "memory_domains": [ 00:31:19.592 { 00:31:19.592 "dma_device_id": "system", 00:31:19.592 "dma_device_type": 1 00:31:19.592 }, 00:31:19.592 { 00:31:19.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:19.592 "dma_device_type": 2 00:31:19.592 } 00:31:19.592 ], 00:31:19.592 "driver_specific": {} 00:31:19.592 }' 00:31:19.592 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:19.851 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:19.851 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:19.851 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:19.851 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:19.851 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:19.851 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:19.851 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:19.851 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:19.851 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:20.110 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:20.110 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:20.110 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:20.110 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:31:20.110 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:20.369 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:20.370 "name": "BaseBdev3", 00:31:20.370 "aliases": [ 00:31:20.370 "a7815d42-5634-456d-b397-7eb22f9fd1f8" 00:31:20.370 ], 00:31:20.370 "product_name": "Malloc disk", 00:31:20.370 "block_size": 512, 00:31:20.370 "num_blocks": 65536, 00:31:20.370 "uuid": "a7815d42-5634-456d-b397-7eb22f9fd1f8", 00:31:20.370 "assigned_rate_limits": { 00:31:20.370 "rw_ios_per_sec": 0, 00:31:20.370 "rw_mbytes_per_sec": 0, 00:31:20.370 "r_mbytes_per_sec": 0, 00:31:20.370 "w_mbytes_per_sec": 0 00:31:20.370 }, 00:31:20.370 "claimed": true, 00:31:20.370 "claim_type": "exclusive_write", 00:31:20.370 "zoned": false, 00:31:20.370 "supported_io_types": { 00:31:20.370 "read": true, 00:31:20.370 "write": true, 00:31:20.370 "unmap": true, 00:31:20.370 "flush": true, 00:31:20.370 "reset": true, 00:31:20.370 "nvme_admin": false, 00:31:20.370 "nvme_io": false, 00:31:20.370 "nvme_io_md": false, 00:31:20.370 "write_zeroes": true, 00:31:20.370 "zcopy": true, 00:31:20.370 "get_zone_info": false, 00:31:20.370 "zone_management": false, 00:31:20.370 "zone_append": false, 00:31:20.370 "compare": false, 00:31:20.370 "compare_and_write": false, 00:31:20.370 "abort": true, 00:31:20.370 "seek_hole": false, 00:31:20.370 "seek_data": false, 00:31:20.370 "copy": true, 00:31:20.370 "nvme_iov_md": false 00:31:20.370 }, 00:31:20.370 "memory_domains": [ 00:31:20.370 { 00:31:20.370 "dma_device_id": "system", 00:31:20.370 "dma_device_type": 1 
00:31:20.370 }, 00:31:20.370 { 00:31:20.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:20.370 "dma_device_type": 2 00:31:20.370 } 00:31:20.370 ], 00:31:20.370 "driver_specific": {} 00:31:20.370 }' 00:31:20.370 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:20.370 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:20.370 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:20.370 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:20.370 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:20.629 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:20.629 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:20.629 18:58:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:20.629 18:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:20.629 18:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:20.629 18:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:20.629 18:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:20.629 18:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:20.888 [2024-07-25 18:58:21.294564] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:20.888 [2024-07-25 18:58:21.294749] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:20.888 [2024-07-25 18:58:21.294955] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:20.888 [2024-07-25 18:58:21.295330] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:20.888 [2024-07-25 18:58:21.295425] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:31:20.888 18:58:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 149156 00:31:20.888 18:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 149156 ']' 00:31:20.888 18:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 149156 00:31:20.888 18:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:31:20.888 18:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:20.888 18:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 149156 00:31:20.888 18:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:20.888 18:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:20.888 18:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 149156' 00:31:20.888 killing process with pid 149156 00:31:20.888 18:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 149156 
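For reference, the RPC sequence this state-function test drives can be replayed by hand against a running SPDK target. The lines below are a minimal hand-written sketch, not part of the captured output; they assume the same RPC socket (/var/tmp/spdk-raid.sock), rpc.py path, and bdev names that appear in this run's trace.

# create three 32 MiB malloc base bdevs with 512-byte blocks (65536 blocks each, as in the dumps above)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
# assemble a raid5f volume with a 64 KiB strip size and no superblock
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
# query the raid state ("configuring"/"online") and its base_bdevs_list, as verify_raid_bdev_state does
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
# remove a base bdev and observe the volume deconfigure, then tear the raid down
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid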
00:31:20.888 [2024-07-25 18:58:21.343399] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:20.888 18:58:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 149156 00:31:21.147 [2024-07-25 18:58:21.594782] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:22.527 ************************************ 00:31:22.527 END TEST raid5f_state_function_test 00:31:22.527 ************************************ 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:31:22.527 00:31:22.527 real 0m27.682s 00:31:22.527 user 0m49.487s 00:31:22.527 sys 0m4.614s 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.527 18:58:22 bdev_raid -- bdev/bdev_raid.sh@967 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:31:22.527 18:58:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:31:22.527 18:58:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:22.527 18:58:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:22.527 ************************************ 00:31:22.527 START TEST raid5f_state_function_test_sb 00:31:22.527 ************************************ 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 
-- # local raid_bdev_name=Existed_Raid 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=150106 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 150106' 00:31:22.527 Process raid pid: 150106 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 150106 /var/tmp/spdk-raid.sock 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 150106 ']' 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:22.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:22.527 18:58:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.527 [2024-07-25 18:58:22.962457] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:31:22.527 [2024-07-25 18:58:22.962932] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:22.786 [2024-07-25 18:58:23.150095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.044 [2024-07-25 18:58:23.410045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.044 [2024-07-25 18:58:23.600626] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:23.301 18:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:23.301 18:58:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:31:23.301 18:58:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:31:23.559 [2024-07-25 18:58:24.083346] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:23.559 [2024-07-25 18:58:24.083609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:23.559 [2024-07-25 18:58:24.083736] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:23.559 [2024-07-25 18:58:24.083798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:23.559 [2024-07-25 18:58:24.083866] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:23.559 [2024-07-25 18:58:24.083910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:23.559 18:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:23.559 18:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:23.559 18:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:23.559 18:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:23.559 18:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:23.559 18:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:23.559 18:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:23.559 18:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:23.559 18:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:23.559 18:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:23.559 18:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:23.559 18:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:23.816 18:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:23.816 "name": 
"Existed_Raid", 00:31:23.816 "uuid": "caa08f03-0a4c-49f2-9fd8-671a1b5dc63c", 00:31:23.816 "strip_size_kb": 64, 00:31:23.816 "state": "configuring", 00:31:23.816 "raid_level": "raid5f", 00:31:23.816 "superblock": true, 00:31:23.816 "num_base_bdevs": 3, 00:31:23.816 "num_base_bdevs_discovered": 0, 00:31:23.816 "num_base_bdevs_operational": 3, 00:31:23.816 "base_bdevs_list": [ 00:31:23.816 { 00:31:23.816 "name": "BaseBdev1", 00:31:23.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:23.816 "is_configured": false, 00:31:23.816 "data_offset": 0, 00:31:23.816 "data_size": 0 00:31:23.816 }, 00:31:23.816 { 00:31:23.816 "name": "BaseBdev2", 00:31:23.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:23.816 "is_configured": false, 00:31:23.816 "data_offset": 0, 00:31:23.816 "data_size": 0 00:31:23.816 }, 00:31:23.816 { 00:31:23.816 "name": "BaseBdev3", 00:31:23.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:23.816 "is_configured": false, 00:31:23.816 "data_offset": 0, 00:31:23.816 "data_size": 0 00:31:23.816 } 00:31:23.816 ] 00:31:23.816 }' 00:31:23.816 18:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:23.816 18:58:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:24.384 18:58:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:24.643 [2024-07-25 18:58:25.027410] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:24.643 [2024-07-25 18:58:25.027577] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:31:24.643 18:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:31:24.643 [2024-07-25 18:58:25.203486] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:24.643 [2024-07-25 18:58:25.203721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:24.643 [2024-07-25 18:58:25.203810] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:24.643 [2024-07-25 18:58:25.203868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:24.643 [2024-07-25 18:58:25.204094] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:24.643 [2024-07-25 18:58:25.204152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:24.643 18:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:31:24.903 [2024-07-25 18:58:25.478313] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:24.903 BaseBdev1 00:31:25.162 18:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:31:25.162 18:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:31:25.162 18:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:25.162 18:58:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:25.162 18:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:25.162 18:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:25.162 18:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:25.162 18:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:25.421 [ 00:31:25.421 { 00:31:25.421 "name": "BaseBdev1", 00:31:25.421 "aliases": [ 00:31:25.421 "3c673260-8e0b-4ac3-9220-3ff645d89d50" 00:31:25.421 ], 00:31:25.421 "product_name": "Malloc disk", 00:31:25.421 "block_size": 512, 00:31:25.421 "num_blocks": 65536, 00:31:25.421 "uuid": "3c673260-8e0b-4ac3-9220-3ff645d89d50", 00:31:25.421 "assigned_rate_limits": { 00:31:25.421 "rw_ios_per_sec": 0, 00:31:25.421 "rw_mbytes_per_sec": 0, 00:31:25.421 "r_mbytes_per_sec": 0, 00:31:25.421 "w_mbytes_per_sec": 0 00:31:25.421 }, 00:31:25.421 "claimed": true, 00:31:25.421 "claim_type": "exclusive_write", 00:31:25.421 "zoned": false, 00:31:25.421 "supported_io_types": { 00:31:25.421 "read": true, 00:31:25.421 "write": true, 00:31:25.421 "unmap": true, 00:31:25.421 "flush": true, 00:31:25.421 "reset": true, 00:31:25.421 "nvme_admin": false, 00:31:25.421 "nvme_io": false, 00:31:25.421 "nvme_io_md": false, 00:31:25.421 "write_zeroes": true, 00:31:25.421 "zcopy": true, 00:31:25.421 "get_zone_info": false, 00:31:25.421 "zone_management": false, 00:31:25.421 "zone_append": false, 00:31:25.421 "compare": false, 00:31:25.421 "compare_and_write": false, 00:31:25.421 "abort": true, 00:31:25.421 "seek_hole": false, 00:31:25.421 "seek_data": false, 00:31:25.421 "copy": true, 00:31:25.421 "nvme_iov_md": false 00:31:25.421 }, 00:31:25.421 "memory_domains": [ 00:31:25.421 { 00:31:25.421 "dma_device_id": "system", 00:31:25.421 "dma_device_type": 1 00:31:25.421 }, 00:31:25.421 { 00:31:25.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:25.421 "dma_device_type": 2 00:31:25.421 } 00:31:25.421 ], 00:31:25.421 "driver_specific": {} 00:31:25.421 } 00:31:25.421 ] 00:31:25.421 18:58:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:31:25.421 18:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:25.421 18:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:25.421 18:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:25.421 18:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:25.421 18:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:25.421 18:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:25.421 18:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:25.421 18:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:25.421 18:58:25 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:25.421 18:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:25.421 18:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:25.421 18:58:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:25.680 18:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:25.680 "name": "Existed_Raid", 00:31:25.680 "uuid": "66637c1e-d54c-41bb-a007-28f6bdf0950a", 00:31:25.680 "strip_size_kb": 64, 00:31:25.680 "state": "configuring", 00:31:25.680 "raid_level": "raid5f", 00:31:25.680 "superblock": true, 00:31:25.680 "num_base_bdevs": 3, 00:31:25.680 "num_base_bdevs_discovered": 1, 00:31:25.680 "num_base_bdevs_operational": 3, 00:31:25.680 "base_bdevs_list": [ 00:31:25.680 { 00:31:25.680 "name": "BaseBdev1", 00:31:25.680 "uuid": "3c673260-8e0b-4ac3-9220-3ff645d89d50", 00:31:25.680 "is_configured": true, 00:31:25.680 "data_offset": 2048, 00:31:25.680 "data_size": 63488 00:31:25.680 }, 00:31:25.680 { 00:31:25.680 "name": "BaseBdev2", 00:31:25.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:25.680 "is_configured": false, 00:31:25.680 "data_offset": 0, 00:31:25.680 "data_size": 0 00:31:25.680 }, 00:31:25.680 { 00:31:25.680 "name": "BaseBdev3", 00:31:25.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:25.680 "is_configured": false, 00:31:25.680 "data_offset": 0, 00:31:25.680 "data_size": 0 00:31:25.680 } 00:31:25.680 ] 00:31:25.680 }' 00:31:25.680 18:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:25.680 18:58:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:26.248 18:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:26.507 [2024-07-25 18:58:26.894559] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:26.507 [2024-07-25 18:58:26.894807] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:31:26.507 18:58:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:31:26.766 [2024-07-25 18:58:27.158722] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:26.766 [2024-07-25 18:58:27.161170] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:26.766 [2024-07-25 18:58:27.161382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:26.766 [2024-07-25 18:58:27.161499] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:26.766 [2024-07-25 18:58:27.161587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:26.766 18:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:31:26.766 18:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:26.766 18:58:27 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:26.766 18:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:26.766 18:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:26.766 18:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:26.766 18:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:26.766 18:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:26.766 18:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:26.766 18:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:26.766 18:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:26.766 18:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:26.766 18:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:26.766 18:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:27.025 18:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:27.025 "name": "Existed_Raid", 00:31:27.025 "uuid": "2bc7af08-44a8-44a9-8152-4e23bc22becc", 00:31:27.025 "strip_size_kb": 64, 00:31:27.025 "state": "configuring", 00:31:27.025 "raid_level": "raid5f", 00:31:27.025 "superblock": true, 00:31:27.025 "num_base_bdevs": 3, 00:31:27.025 "num_base_bdevs_discovered": 1, 00:31:27.025 "num_base_bdevs_operational": 3, 00:31:27.025 "base_bdevs_list": [ 00:31:27.025 { 00:31:27.025 "name": "BaseBdev1", 00:31:27.025 "uuid": "3c673260-8e0b-4ac3-9220-3ff645d89d50", 00:31:27.025 "is_configured": true, 00:31:27.025 "data_offset": 2048, 00:31:27.025 "data_size": 63488 00:31:27.025 }, 00:31:27.025 { 00:31:27.025 "name": "BaseBdev2", 00:31:27.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:27.025 "is_configured": false, 00:31:27.025 "data_offset": 0, 00:31:27.025 "data_size": 0 00:31:27.025 }, 00:31:27.025 { 00:31:27.025 "name": "BaseBdev3", 00:31:27.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:27.025 "is_configured": false, 00:31:27.025 "data_offset": 0, 00:31:27.025 "data_size": 0 00:31:27.025 } 00:31:27.025 ] 00:31:27.025 }' 00:31:27.025 18:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:27.025 18:58:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:27.594 18:58:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:31:27.853 [2024-07-25 18:58:28.326904] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:27.853 BaseBdev2 00:31:27.853 18:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:31:27.853 18:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:31:27.853 18:58:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:27.853 18:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:27.853 18:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:27.853 18:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:27.853 18:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:28.112 18:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:28.371 [ 00:31:28.371 { 00:31:28.371 "name": "BaseBdev2", 00:31:28.371 "aliases": [ 00:31:28.371 "1e106ca1-4c33-4868-8db3-ca4201c849d1" 00:31:28.371 ], 00:31:28.371 "product_name": "Malloc disk", 00:31:28.371 "block_size": 512, 00:31:28.371 "num_blocks": 65536, 00:31:28.371 "uuid": "1e106ca1-4c33-4868-8db3-ca4201c849d1", 00:31:28.371 "assigned_rate_limits": { 00:31:28.372 "rw_ios_per_sec": 0, 00:31:28.372 "rw_mbytes_per_sec": 0, 00:31:28.372 "r_mbytes_per_sec": 0, 00:31:28.372 "w_mbytes_per_sec": 0 00:31:28.372 }, 00:31:28.372 "claimed": true, 00:31:28.372 "claim_type": "exclusive_write", 00:31:28.372 "zoned": false, 00:31:28.372 "supported_io_types": { 00:31:28.372 "read": true, 00:31:28.372 "write": true, 00:31:28.372 "unmap": true, 00:31:28.372 "flush": true, 00:31:28.372 "reset": true, 00:31:28.372 "nvme_admin": false, 00:31:28.372 "nvme_io": false, 00:31:28.372 "nvme_io_md": false, 00:31:28.372 "write_zeroes": true, 00:31:28.372 "zcopy": true, 00:31:28.372 "get_zone_info": false, 00:31:28.372 "zone_management": false, 00:31:28.372 "zone_append": false, 00:31:28.372 "compare": false, 00:31:28.372 "compare_and_write": false, 00:31:28.372 "abort": true, 00:31:28.372 "seek_hole": false, 00:31:28.372 "seek_data": false, 00:31:28.372 "copy": true, 00:31:28.372 "nvme_iov_md": false 00:31:28.372 }, 00:31:28.372 "memory_domains": [ 00:31:28.372 { 00:31:28.372 "dma_device_id": "system", 00:31:28.372 "dma_device_type": 1 00:31:28.372 }, 00:31:28.372 { 00:31:28.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:28.372 "dma_device_type": 2 00:31:28.372 } 00:31:28.372 ], 00:31:28.372 "driver_specific": {} 00:31:28.372 } 00:31:28.372 ] 00:31:28.372 18:58:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:31:28.372 18:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:31:28.372 18:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:28.372 18:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:28.372 18:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:28.372 18:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:28.372 18:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:28.372 18:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:28.372 18:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:31:28.372 18:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:28.372 18:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:28.372 18:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:28.372 18:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:28.372 18:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:28.372 18:58:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:28.631 18:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:28.631 "name": "Existed_Raid", 00:31:28.631 "uuid": "2bc7af08-44a8-44a9-8152-4e23bc22becc", 00:31:28.631 "strip_size_kb": 64, 00:31:28.631 "state": "configuring", 00:31:28.631 "raid_level": "raid5f", 00:31:28.631 "superblock": true, 00:31:28.631 "num_base_bdevs": 3, 00:31:28.631 "num_base_bdevs_discovered": 2, 00:31:28.631 "num_base_bdevs_operational": 3, 00:31:28.631 "base_bdevs_list": [ 00:31:28.631 { 00:31:28.631 "name": "BaseBdev1", 00:31:28.631 "uuid": "3c673260-8e0b-4ac3-9220-3ff645d89d50", 00:31:28.631 "is_configured": true, 00:31:28.631 "data_offset": 2048, 00:31:28.631 "data_size": 63488 00:31:28.631 }, 00:31:28.631 { 00:31:28.631 "name": "BaseBdev2", 00:31:28.631 "uuid": "1e106ca1-4c33-4868-8db3-ca4201c849d1", 00:31:28.631 "is_configured": true, 00:31:28.631 "data_offset": 2048, 00:31:28.631 "data_size": 63488 00:31:28.631 }, 00:31:28.631 { 00:31:28.631 "name": "BaseBdev3", 00:31:28.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.631 "is_configured": false, 00:31:28.631 "data_offset": 0, 00:31:28.631 "data_size": 0 00:31:28.631 } 00:31:28.631 ] 00:31:28.631 }' 00:31:28.631 18:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:28.631 18:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:29.198 18:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:31:29.457 [2024-07-25 18:58:29.891391] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:29.457 [2024-07-25 18:58:29.891859] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:31:29.457 [2024-07-25 18:58:29.891974] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:29.457 [2024-07-25 18:58:29.892137] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:31:29.457 BaseBdev3 00:31:29.457 [2024-07-25 18:58:29.896702] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:31:29.457 [2024-07-25 18:58:29.896826] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:31:29.457 [2024-07-25 18:58:29.897117] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:29.457 18:58:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:31:29.457 18:58:29 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:31:29.457 18:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:29.457 18:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:29.457 18:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:29.457 18:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:29.457 18:58:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:29.716 18:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:29.981 [ 00:31:29.981 { 00:31:29.981 "name": "BaseBdev3", 00:31:29.981 "aliases": [ 00:31:29.981 "dfa6f442-7a9e-4554-b6a7-895621157709" 00:31:29.981 ], 00:31:29.981 "product_name": "Malloc disk", 00:31:29.981 "block_size": 512, 00:31:29.981 "num_blocks": 65536, 00:31:29.981 "uuid": "dfa6f442-7a9e-4554-b6a7-895621157709", 00:31:29.981 "assigned_rate_limits": { 00:31:29.981 "rw_ios_per_sec": 0, 00:31:29.981 "rw_mbytes_per_sec": 0, 00:31:29.981 "r_mbytes_per_sec": 0, 00:31:29.981 "w_mbytes_per_sec": 0 00:31:29.981 }, 00:31:29.981 "claimed": true, 00:31:29.981 "claim_type": "exclusive_write", 00:31:29.981 "zoned": false, 00:31:29.981 "supported_io_types": { 00:31:29.981 "read": true, 00:31:29.981 "write": true, 00:31:29.981 "unmap": true, 00:31:29.981 "flush": true, 00:31:29.981 "reset": true, 00:31:29.981 "nvme_admin": false, 00:31:29.981 "nvme_io": false, 00:31:29.981 "nvme_io_md": false, 00:31:29.981 "write_zeroes": true, 00:31:29.981 "zcopy": true, 00:31:29.981 "get_zone_info": false, 00:31:29.981 "zone_management": false, 00:31:29.981 "zone_append": false, 00:31:29.981 "compare": false, 00:31:29.981 "compare_and_write": false, 00:31:29.981 "abort": true, 00:31:29.981 "seek_hole": false, 00:31:29.981 "seek_data": false, 00:31:29.981 "copy": true, 00:31:29.981 "nvme_iov_md": false 00:31:29.981 }, 00:31:29.981 "memory_domains": [ 00:31:29.981 { 00:31:29.981 "dma_device_id": "system", 00:31:29.981 "dma_device_type": 1 00:31:29.981 }, 00:31:29.981 { 00:31:29.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:29.981 "dma_device_type": 2 00:31:29.981 } 00:31:29.981 ], 00:31:29.981 "driver_specific": {} 00:31:29.981 } 00:31:29.981 ] 00:31:29.981 18:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:31:29.981 18:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:31:29.981 18:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:29.981 18:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:31:29.981 18:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:29.981 18:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:29.981 18:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:29.981 18:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:29.981 18:58:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:29.981 18:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:29.981 18:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:29.981 18:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:29.981 18:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:29.981 18:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:29.981 18:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:29.981 18:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:29.981 "name": "Existed_Raid", 00:31:29.981 "uuid": "2bc7af08-44a8-44a9-8152-4e23bc22becc", 00:31:29.981 "strip_size_kb": 64, 00:31:29.981 "state": "online", 00:31:29.981 "raid_level": "raid5f", 00:31:29.981 "superblock": true, 00:31:29.981 "num_base_bdevs": 3, 00:31:29.981 "num_base_bdevs_discovered": 3, 00:31:29.981 "num_base_bdevs_operational": 3, 00:31:29.981 "base_bdevs_list": [ 00:31:29.981 { 00:31:29.981 "name": "BaseBdev1", 00:31:29.981 "uuid": "3c673260-8e0b-4ac3-9220-3ff645d89d50", 00:31:29.981 "is_configured": true, 00:31:29.981 "data_offset": 2048, 00:31:29.981 "data_size": 63488 00:31:29.981 }, 00:31:29.981 { 00:31:29.981 "name": "BaseBdev2", 00:31:29.981 "uuid": "1e106ca1-4c33-4868-8db3-ca4201c849d1", 00:31:29.981 "is_configured": true, 00:31:29.981 "data_offset": 2048, 00:31:29.981 "data_size": 63488 00:31:29.981 }, 00:31:29.981 { 00:31:29.981 "name": "BaseBdev3", 00:31:29.981 "uuid": "dfa6f442-7a9e-4554-b6a7-895621157709", 00:31:29.981 "is_configured": true, 00:31:29.981 "data_offset": 2048, 00:31:29.981 "data_size": 63488 00:31:29.981 } 00:31:29.981 ] 00:31:29.981 }' 00:31:29.981 18:58:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:29.981 18:58:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:30.591 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:31:30.591 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:31:30.591 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:30.591 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:30.591 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:30.591 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:31:30.591 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:30.591 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:31:30.849 [2024-07-25 18:58:31.311858] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:30.849 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:30.849 
"name": "Existed_Raid", 00:31:30.849 "aliases": [ 00:31:30.849 "2bc7af08-44a8-44a9-8152-4e23bc22becc" 00:31:30.849 ], 00:31:30.849 "product_name": "Raid Volume", 00:31:30.849 "block_size": 512, 00:31:30.849 "num_blocks": 126976, 00:31:30.849 "uuid": "2bc7af08-44a8-44a9-8152-4e23bc22becc", 00:31:30.849 "assigned_rate_limits": { 00:31:30.849 "rw_ios_per_sec": 0, 00:31:30.849 "rw_mbytes_per_sec": 0, 00:31:30.849 "r_mbytes_per_sec": 0, 00:31:30.849 "w_mbytes_per_sec": 0 00:31:30.849 }, 00:31:30.849 "claimed": false, 00:31:30.849 "zoned": false, 00:31:30.849 "supported_io_types": { 00:31:30.849 "read": true, 00:31:30.849 "write": true, 00:31:30.849 "unmap": false, 00:31:30.849 "flush": false, 00:31:30.849 "reset": true, 00:31:30.849 "nvme_admin": false, 00:31:30.849 "nvme_io": false, 00:31:30.849 "nvme_io_md": false, 00:31:30.849 "write_zeroes": true, 00:31:30.849 "zcopy": false, 00:31:30.849 "get_zone_info": false, 00:31:30.849 "zone_management": false, 00:31:30.849 "zone_append": false, 00:31:30.849 "compare": false, 00:31:30.849 "compare_and_write": false, 00:31:30.849 "abort": false, 00:31:30.849 "seek_hole": false, 00:31:30.849 "seek_data": false, 00:31:30.849 "copy": false, 00:31:30.849 "nvme_iov_md": false 00:31:30.849 }, 00:31:30.849 "driver_specific": { 00:31:30.849 "raid": { 00:31:30.849 "uuid": "2bc7af08-44a8-44a9-8152-4e23bc22becc", 00:31:30.849 "strip_size_kb": 64, 00:31:30.849 "state": "online", 00:31:30.849 "raid_level": "raid5f", 00:31:30.849 "superblock": true, 00:31:30.849 "num_base_bdevs": 3, 00:31:30.849 "num_base_bdevs_discovered": 3, 00:31:30.849 "num_base_bdevs_operational": 3, 00:31:30.849 "base_bdevs_list": [ 00:31:30.849 { 00:31:30.849 "name": "BaseBdev1", 00:31:30.849 "uuid": "3c673260-8e0b-4ac3-9220-3ff645d89d50", 00:31:30.849 "is_configured": true, 00:31:30.849 "data_offset": 2048, 00:31:30.849 "data_size": 63488 00:31:30.849 }, 00:31:30.849 { 00:31:30.849 "name": "BaseBdev2", 00:31:30.849 "uuid": "1e106ca1-4c33-4868-8db3-ca4201c849d1", 00:31:30.849 "is_configured": true, 00:31:30.849 "data_offset": 2048, 00:31:30.849 "data_size": 63488 00:31:30.849 }, 00:31:30.849 { 00:31:30.849 "name": "BaseBdev3", 00:31:30.849 "uuid": "dfa6f442-7a9e-4554-b6a7-895621157709", 00:31:30.849 "is_configured": true, 00:31:30.849 "data_offset": 2048, 00:31:30.849 "data_size": 63488 00:31:30.849 } 00:31:30.849 ] 00:31:30.849 } 00:31:30.849 } 00:31:30.849 }' 00:31:30.849 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:30.849 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:31:30.849 BaseBdev2 00:31:30.849 BaseBdev3' 00:31:30.849 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:30.849 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:31:30.849 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:31.108 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:31.108 "name": "BaseBdev1", 00:31:31.108 "aliases": [ 00:31:31.108 "3c673260-8e0b-4ac3-9220-3ff645d89d50" 00:31:31.108 ], 00:31:31.108 "product_name": "Malloc disk", 00:31:31.108 "block_size": 512, 00:31:31.108 "num_blocks": 65536, 00:31:31.108 "uuid": 
"3c673260-8e0b-4ac3-9220-3ff645d89d50", 00:31:31.108 "assigned_rate_limits": { 00:31:31.108 "rw_ios_per_sec": 0, 00:31:31.108 "rw_mbytes_per_sec": 0, 00:31:31.108 "r_mbytes_per_sec": 0, 00:31:31.108 "w_mbytes_per_sec": 0 00:31:31.108 }, 00:31:31.108 "claimed": true, 00:31:31.108 "claim_type": "exclusive_write", 00:31:31.108 "zoned": false, 00:31:31.108 "supported_io_types": { 00:31:31.108 "read": true, 00:31:31.108 "write": true, 00:31:31.108 "unmap": true, 00:31:31.108 "flush": true, 00:31:31.108 "reset": true, 00:31:31.108 "nvme_admin": false, 00:31:31.108 "nvme_io": false, 00:31:31.108 "nvme_io_md": false, 00:31:31.108 "write_zeroes": true, 00:31:31.108 "zcopy": true, 00:31:31.108 "get_zone_info": false, 00:31:31.108 "zone_management": false, 00:31:31.108 "zone_append": false, 00:31:31.108 "compare": false, 00:31:31.108 "compare_and_write": false, 00:31:31.108 "abort": true, 00:31:31.108 "seek_hole": false, 00:31:31.108 "seek_data": false, 00:31:31.108 "copy": true, 00:31:31.108 "nvme_iov_md": false 00:31:31.108 }, 00:31:31.108 "memory_domains": [ 00:31:31.108 { 00:31:31.108 "dma_device_id": "system", 00:31:31.108 "dma_device_type": 1 00:31:31.108 }, 00:31:31.108 { 00:31:31.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:31.108 "dma_device_type": 2 00:31:31.108 } 00:31:31.108 ], 00:31:31.108 "driver_specific": {} 00:31:31.108 }' 00:31:31.108 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:31.108 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:31.108 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:31.108 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:31.366 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:31.366 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:31.366 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:31.366 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:31.366 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:31.366 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:31.366 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:31.366 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:31.366 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:31.366 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:31:31.366 18:58:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:31.625 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:31.625 "name": "BaseBdev2", 00:31:31.625 "aliases": [ 00:31:31.625 "1e106ca1-4c33-4868-8db3-ca4201c849d1" 00:31:31.625 ], 00:31:31.625 "product_name": "Malloc disk", 00:31:31.625 "block_size": 512, 00:31:31.625 "num_blocks": 65536, 00:31:31.625 "uuid": "1e106ca1-4c33-4868-8db3-ca4201c849d1", 00:31:31.625 "assigned_rate_limits": { 00:31:31.625 "rw_ios_per_sec": 0, 
00:31:31.625 "rw_mbytes_per_sec": 0, 00:31:31.625 "r_mbytes_per_sec": 0, 00:31:31.625 "w_mbytes_per_sec": 0 00:31:31.625 }, 00:31:31.625 "claimed": true, 00:31:31.625 "claim_type": "exclusive_write", 00:31:31.625 "zoned": false, 00:31:31.625 "supported_io_types": { 00:31:31.625 "read": true, 00:31:31.625 "write": true, 00:31:31.625 "unmap": true, 00:31:31.625 "flush": true, 00:31:31.625 "reset": true, 00:31:31.625 "nvme_admin": false, 00:31:31.625 "nvme_io": false, 00:31:31.625 "nvme_io_md": false, 00:31:31.625 "write_zeroes": true, 00:31:31.625 "zcopy": true, 00:31:31.625 "get_zone_info": false, 00:31:31.625 "zone_management": false, 00:31:31.625 "zone_append": false, 00:31:31.625 "compare": false, 00:31:31.625 "compare_and_write": false, 00:31:31.625 "abort": true, 00:31:31.625 "seek_hole": false, 00:31:31.625 "seek_data": false, 00:31:31.625 "copy": true, 00:31:31.625 "nvme_iov_md": false 00:31:31.625 }, 00:31:31.625 "memory_domains": [ 00:31:31.625 { 00:31:31.625 "dma_device_id": "system", 00:31:31.625 "dma_device_type": 1 00:31:31.625 }, 00:31:31.625 { 00:31:31.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:31.625 "dma_device_type": 2 00:31:31.625 } 00:31:31.625 ], 00:31:31.625 "driver_specific": {} 00:31:31.625 }' 00:31:31.625 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:31.625 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:31.625 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:31.625 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:31.625 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:31.884 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:31.884 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:31.884 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:31.884 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:31.884 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:31.884 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:31.884 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:31.884 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:31.884 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:31:31.884 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:32.142 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:32.142 "name": "BaseBdev3", 00:31:32.142 "aliases": [ 00:31:32.142 "dfa6f442-7a9e-4554-b6a7-895621157709" 00:31:32.142 ], 00:31:32.142 "product_name": "Malloc disk", 00:31:32.142 "block_size": 512, 00:31:32.142 "num_blocks": 65536, 00:31:32.142 "uuid": "dfa6f442-7a9e-4554-b6a7-895621157709", 00:31:32.142 "assigned_rate_limits": { 00:31:32.142 "rw_ios_per_sec": 0, 00:31:32.142 "rw_mbytes_per_sec": 0, 00:31:32.142 "r_mbytes_per_sec": 0, 00:31:32.142 "w_mbytes_per_sec": 0 00:31:32.142 
}, 00:31:32.142 "claimed": true, 00:31:32.142 "claim_type": "exclusive_write", 00:31:32.142 "zoned": false, 00:31:32.142 "supported_io_types": { 00:31:32.142 "read": true, 00:31:32.142 "write": true, 00:31:32.142 "unmap": true, 00:31:32.142 "flush": true, 00:31:32.142 "reset": true, 00:31:32.142 "nvme_admin": false, 00:31:32.142 "nvme_io": false, 00:31:32.142 "nvme_io_md": false, 00:31:32.142 "write_zeroes": true, 00:31:32.142 "zcopy": true, 00:31:32.142 "get_zone_info": false, 00:31:32.142 "zone_management": false, 00:31:32.142 "zone_append": false, 00:31:32.142 "compare": false, 00:31:32.142 "compare_and_write": false, 00:31:32.142 "abort": true, 00:31:32.142 "seek_hole": false, 00:31:32.142 "seek_data": false, 00:31:32.142 "copy": true, 00:31:32.142 "nvme_iov_md": false 00:31:32.142 }, 00:31:32.142 "memory_domains": [ 00:31:32.142 { 00:31:32.142 "dma_device_id": "system", 00:31:32.142 "dma_device_type": 1 00:31:32.142 }, 00:31:32.142 { 00:31:32.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:32.142 "dma_device_type": 2 00:31:32.142 } 00:31:32.142 ], 00:31:32.142 "driver_specific": {} 00:31:32.142 }' 00:31:32.142 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:32.142 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:32.400 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:32.400 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:32.400 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:32.400 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:32.400 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:32.400 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:32.400 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:32.400 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:32.400 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:32.659 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:32.659 18:58:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:31:32.917 [2024-07-25 18:58:33.252158] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:32.917 18:58:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:31:32.917 18:58:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:31:32.917 18:58:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:31:32.917 18:58:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:31:32.918 18:58:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:31:32.918 18:58:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:31:32.918 18:58:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:32.918 
18:58:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:32.918 18:58:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:32.918 18:58:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:32.918 18:58:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:32.918 18:58:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:32.918 18:58:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:32.918 18:58:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:32.918 18:58:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:32.918 18:58:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:32.918 18:58:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:33.175 18:58:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:33.175 "name": "Existed_Raid", 00:31:33.175 "uuid": "2bc7af08-44a8-44a9-8152-4e23bc22becc", 00:31:33.175 "strip_size_kb": 64, 00:31:33.175 "state": "online", 00:31:33.175 "raid_level": "raid5f", 00:31:33.175 "superblock": true, 00:31:33.175 "num_base_bdevs": 3, 00:31:33.175 "num_base_bdevs_discovered": 2, 00:31:33.175 "num_base_bdevs_operational": 2, 00:31:33.175 "base_bdevs_list": [ 00:31:33.175 { 00:31:33.175 "name": null, 00:31:33.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:33.175 "is_configured": false, 00:31:33.175 "data_offset": 2048, 00:31:33.175 "data_size": 63488 00:31:33.175 }, 00:31:33.175 { 00:31:33.175 "name": "BaseBdev2", 00:31:33.175 "uuid": "1e106ca1-4c33-4868-8db3-ca4201c849d1", 00:31:33.175 "is_configured": true, 00:31:33.175 "data_offset": 2048, 00:31:33.175 "data_size": 63488 00:31:33.175 }, 00:31:33.175 { 00:31:33.175 "name": "BaseBdev3", 00:31:33.175 "uuid": "dfa6f442-7a9e-4554-b6a7-895621157709", 00:31:33.175 "is_configured": true, 00:31:33.175 "data_offset": 2048, 00:31:33.175 "data_size": 63488 00:31:33.175 } 00:31:33.175 ] 00:31:33.175 }' 00:31:33.175 18:58:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:33.175 18:58:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:33.740 18:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:31:33.740 18:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:33.740 18:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:33.740 18:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:31:33.998 18:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:31:33.998 18:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:33.998 18:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:31:33.998 [2024-07-25 18:58:34.571675] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:33.998 [2024-07-25 18:58:34.572017] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:34.256 [2024-07-25 18:58:34.656406] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:34.256 18:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:31:34.256 18:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:34.256 18:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:34.256 18:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:31:34.514 18:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:31:34.514 18:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:34.514 18:58:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:31:34.514 [2024-07-25 18:58:35.028537] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:34.514 [2024-07-25 18:58:35.028761] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:31:34.773 18:58:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:31:34.773 18:58:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:34.773 18:58:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:34.773 18:58:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:31:35.032 18:58:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:31:35.032 18:58:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:31:35.032 18:58:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:31:35.032 18:58:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:31:35.032 18:58:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:31:35.032 18:58:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:31:35.032 BaseBdev2 00:31:35.291 18:58:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:31:35.292 18:58:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:31:35.292 18:58:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:35.292 18:58:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:35.292 18:58:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:31:35.292 18:58:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:35.292 18:58:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:35.292 18:58:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:35.551 [ 00:31:35.551 { 00:31:35.551 "name": "BaseBdev2", 00:31:35.551 "aliases": [ 00:31:35.551 "1ec43704-3943-4a6f-91a1-16f7c1e6e7bb" 00:31:35.551 ], 00:31:35.551 "product_name": "Malloc disk", 00:31:35.551 "block_size": 512, 00:31:35.551 "num_blocks": 65536, 00:31:35.551 "uuid": "1ec43704-3943-4a6f-91a1-16f7c1e6e7bb", 00:31:35.551 "assigned_rate_limits": { 00:31:35.551 "rw_ios_per_sec": 0, 00:31:35.551 "rw_mbytes_per_sec": 0, 00:31:35.551 "r_mbytes_per_sec": 0, 00:31:35.551 "w_mbytes_per_sec": 0 00:31:35.551 }, 00:31:35.551 "claimed": false, 00:31:35.551 "zoned": false, 00:31:35.551 "supported_io_types": { 00:31:35.551 "read": true, 00:31:35.551 "write": true, 00:31:35.551 "unmap": true, 00:31:35.551 "flush": true, 00:31:35.551 "reset": true, 00:31:35.551 "nvme_admin": false, 00:31:35.551 "nvme_io": false, 00:31:35.551 "nvme_io_md": false, 00:31:35.551 "write_zeroes": true, 00:31:35.551 "zcopy": true, 00:31:35.551 "get_zone_info": false, 00:31:35.551 "zone_management": false, 00:31:35.551 "zone_append": false, 00:31:35.551 "compare": false, 00:31:35.551 "compare_and_write": false, 00:31:35.551 "abort": true, 00:31:35.551 "seek_hole": false, 00:31:35.551 "seek_data": false, 00:31:35.551 "copy": true, 00:31:35.551 "nvme_iov_md": false 00:31:35.551 }, 00:31:35.551 "memory_domains": [ 00:31:35.551 { 00:31:35.551 "dma_device_id": "system", 00:31:35.551 "dma_device_type": 1 00:31:35.551 }, 00:31:35.551 { 00:31:35.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:35.551 "dma_device_type": 2 00:31:35.551 } 00:31:35.551 ], 00:31:35.551 "driver_specific": {} 00:31:35.551 } 00:31:35.551 ] 00:31:35.551 18:58:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:31:35.551 18:58:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:31:35.551 18:58:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:31:35.551 18:58:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:31:35.810 BaseBdev3 00:31:35.810 18:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:31:35.810 18:58:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:31:35.810 18:58:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:35.810 18:58:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:35.810 18:58:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:35.810 18:58:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:35.810 18:58:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:36.069 18:58:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:36.069 [ 00:31:36.069 { 00:31:36.069 "name": "BaseBdev3", 00:31:36.069 "aliases": [ 00:31:36.069 "f340783e-f2ce-4ef7-8504-035b8525eb32" 00:31:36.069 ], 00:31:36.069 "product_name": "Malloc disk", 00:31:36.069 "block_size": 512, 00:31:36.069 "num_blocks": 65536, 00:31:36.069 "uuid": "f340783e-f2ce-4ef7-8504-035b8525eb32", 00:31:36.069 "assigned_rate_limits": { 00:31:36.069 "rw_ios_per_sec": 0, 00:31:36.069 "rw_mbytes_per_sec": 0, 00:31:36.069 "r_mbytes_per_sec": 0, 00:31:36.069 "w_mbytes_per_sec": 0 00:31:36.069 }, 00:31:36.069 "claimed": false, 00:31:36.069 "zoned": false, 00:31:36.069 "supported_io_types": { 00:31:36.069 "read": true, 00:31:36.069 "write": true, 00:31:36.069 "unmap": true, 00:31:36.069 "flush": true, 00:31:36.069 "reset": true, 00:31:36.069 "nvme_admin": false, 00:31:36.069 "nvme_io": false, 00:31:36.069 "nvme_io_md": false, 00:31:36.069 "write_zeroes": true, 00:31:36.069 "zcopy": true, 00:31:36.069 "get_zone_info": false, 00:31:36.069 "zone_management": false, 00:31:36.069 "zone_append": false, 00:31:36.069 "compare": false, 00:31:36.069 "compare_and_write": false, 00:31:36.069 "abort": true, 00:31:36.069 "seek_hole": false, 00:31:36.069 "seek_data": false, 00:31:36.069 "copy": true, 00:31:36.069 "nvme_iov_md": false 00:31:36.069 }, 00:31:36.069 "memory_domains": [ 00:31:36.069 { 00:31:36.069 "dma_device_id": "system", 00:31:36.069 "dma_device_type": 1 00:31:36.069 }, 00:31:36.069 { 00:31:36.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:36.069 "dma_device_type": 2 00:31:36.069 } 00:31:36.069 ], 00:31:36.069 "driver_specific": {} 00:31:36.069 } 00:31:36.069 ] 00:31:36.069 18:58:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:31:36.069 18:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:31:36.069 18:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:31:36.069 18:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:31:36.328 [2024-07-25 18:58:36.779844] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:36.328 [2024-07-25 18:58:36.779936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:36.328 [2024-07-25 18:58:36.779981] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:36.328 [2024-07-25 18:58:36.782250] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:36.328 18:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:36.328 18:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:36.328 18:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:36.328 18:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:36.328 18:58:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:36.328 18:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:36.328 18:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:36.328 18:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:36.328 18:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:36.328 18:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:36.328 18:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:36.328 18:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:36.587 18:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:36.587 "name": "Existed_Raid", 00:31:36.587 "uuid": "ee208269-37e4-4fed-b055-d1ac3afcb8a1", 00:31:36.587 "strip_size_kb": 64, 00:31:36.587 "state": "configuring", 00:31:36.587 "raid_level": "raid5f", 00:31:36.587 "superblock": true, 00:31:36.587 "num_base_bdevs": 3, 00:31:36.587 "num_base_bdevs_discovered": 2, 00:31:36.587 "num_base_bdevs_operational": 3, 00:31:36.587 "base_bdevs_list": [ 00:31:36.587 { 00:31:36.587 "name": "BaseBdev1", 00:31:36.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:36.587 "is_configured": false, 00:31:36.587 "data_offset": 0, 00:31:36.587 "data_size": 0 00:31:36.587 }, 00:31:36.587 { 00:31:36.587 "name": "BaseBdev2", 00:31:36.587 "uuid": "1ec43704-3943-4a6f-91a1-16f7c1e6e7bb", 00:31:36.587 "is_configured": true, 00:31:36.587 "data_offset": 2048, 00:31:36.587 "data_size": 63488 00:31:36.587 }, 00:31:36.587 { 00:31:36.587 "name": "BaseBdev3", 00:31:36.587 "uuid": "f340783e-f2ce-4ef7-8504-035b8525eb32", 00:31:36.587 "is_configured": true, 00:31:36.587 "data_offset": 2048, 00:31:36.587 "data_size": 63488 00:31:36.587 } 00:31:36.587 ] 00:31:36.587 }' 00:31:36.587 18:58:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:36.587 18:58:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:37.154 18:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:31:37.154 [2024-07-25 18:58:37.722382] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:37.413 18:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:37.413 18:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:37.413 18:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:37.413 18:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:37.413 18:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:37.413 18:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:37.413 18:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:31:37.413 18:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:37.413 18:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:37.413 18:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:37.413 18:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:37.413 18:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:37.413 18:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:37.413 "name": "Existed_Raid", 00:31:37.413 "uuid": "ee208269-37e4-4fed-b055-d1ac3afcb8a1", 00:31:37.413 "strip_size_kb": 64, 00:31:37.413 "state": "configuring", 00:31:37.413 "raid_level": "raid5f", 00:31:37.413 "superblock": true, 00:31:37.413 "num_base_bdevs": 3, 00:31:37.413 "num_base_bdevs_discovered": 1, 00:31:37.413 "num_base_bdevs_operational": 3, 00:31:37.413 "base_bdevs_list": [ 00:31:37.413 { 00:31:37.413 "name": "BaseBdev1", 00:31:37.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:37.413 "is_configured": false, 00:31:37.413 "data_offset": 0, 00:31:37.413 "data_size": 0 00:31:37.413 }, 00:31:37.413 { 00:31:37.413 "name": null, 00:31:37.413 "uuid": "1ec43704-3943-4a6f-91a1-16f7c1e6e7bb", 00:31:37.413 "is_configured": false, 00:31:37.413 "data_offset": 2048, 00:31:37.413 "data_size": 63488 00:31:37.413 }, 00:31:37.413 { 00:31:37.413 "name": "BaseBdev3", 00:31:37.413 "uuid": "f340783e-f2ce-4ef7-8504-035b8525eb32", 00:31:37.413 "is_configured": true, 00:31:37.413 "data_offset": 2048, 00:31:37.413 "data_size": 63488 00:31:37.413 } 00:31:37.413 ] 00:31:37.413 }' 00:31:37.413 18:58:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:37.413 18:58:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:37.980 18:58:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:37.980 18:58:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:38.239 18:58:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:31:38.239 18:58:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:31:38.498 [2024-07-25 18:58:39.023482] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:38.498 BaseBdev1 00:31:38.498 18:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:31:38.498 18:58:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:31:38.498 18:58:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:38.498 18:58:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:38.498 18:58:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:38.498 18:58:39 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:38.498 18:58:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:38.757 18:58:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:39.016 [ 00:31:39.016 { 00:31:39.016 "name": "BaseBdev1", 00:31:39.016 "aliases": [ 00:31:39.016 "16c61c1e-42ca-44b9-9d23-9342fb139ab7" 00:31:39.016 ], 00:31:39.016 "product_name": "Malloc disk", 00:31:39.016 "block_size": 512, 00:31:39.016 "num_blocks": 65536, 00:31:39.016 "uuid": "16c61c1e-42ca-44b9-9d23-9342fb139ab7", 00:31:39.016 "assigned_rate_limits": { 00:31:39.016 "rw_ios_per_sec": 0, 00:31:39.016 "rw_mbytes_per_sec": 0, 00:31:39.016 "r_mbytes_per_sec": 0, 00:31:39.016 "w_mbytes_per_sec": 0 00:31:39.016 }, 00:31:39.016 "claimed": true, 00:31:39.016 "claim_type": "exclusive_write", 00:31:39.016 "zoned": false, 00:31:39.016 "supported_io_types": { 00:31:39.016 "read": true, 00:31:39.016 "write": true, 00:31:39.016 "unmap": true, 00:31:39.016 "flush": true, 00:31:39.016 "reset": true, 00:31:39.016 "nvme_admin": false, 00:31:39.016 "nvme_io": false, 00:31:39.016 "nvme_io_md": false, 00:31:39.016 "write_zeroes": true, 00:31:39.016 "zcopy": true, 00:31:39.016 "get_zone_info": false, 00:31:39.016 "zone_management": false, 00:31:39.016 "zone_append": false, 00:31:39.016 "compare": false, 00:31:39.016 "compare_and_write": false, 00:31:39.016 "abort": true, 00:31:39.016 "seek_hole": false, 00:31:39.016 "seek_data": false, 00:31:39.016 "copy": true, 00:31:39.016 "nvme_iov_md": false 00:31:39.016 }, 00:31:39.016 "memory_domains": [ 00:31:39.016 { 00:31:39.016 "dma_device_id": "system", 00:31:39.016 "dma_device_type": 1 00:31:39.016 }, 00:31:39.016 { 00:31:39.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:39.016 "dma_device_type": 2 00:31:39.016 } 00:31:39.016 ], 00:31:39.016 "driver_specific": {} 00:31:39.016 } 00:31:39.016 ] 00:31:39.016 18:58:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:31:39.016 18:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:39.016 18:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:39.016 18:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:39.016 18:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:39.016 18:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:39.016 18:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:39.016 18:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:39.016 18:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:39.016 18:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:39.016 18:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:39.016 18:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:39.016 18:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:39.276 18:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:39.276 "name": "Existed_Raid", 00:31:39.276 "uuid": "ee208269-37e4-4fed-b055-d1ac3afcb8a1", 00:31:39.276 "strip_size_kb": 64, 00:31:39.276 "state": "configuring", 00:31:39.276 "raid_level": "raid5f", 00:31:39.276 "superblock": true, 00:31:39.276 "num_base_bdevs": 3, 00:31:39.276 "num_base_bdevs_discovered": 2, 00:31:39.276 "num_base_bdevs_operational": 3, 00:31:39.276 "base_bdevs_list": [ 00:31:39.276 { 00:31:39.276 "name": "BaseBdev1", 00:31:39.276 "uuid": "16c61c1e-42ca-44b9-9d23-9342fb139ab7", 00:31:39.276 "is_configured": true, 00:31:39.276 "data_offset": 2048, 00:31:39.276 "data_size": 63488 00:31:39.276 }, 00:31:39.276 { 00:31:39.276 "name": null, 00:31:39.276 "uuid": "1ec43704-3943-4a6f-91a1-16f7c1e6e7bb", 00:31:39.276 "is_configured": false, 00:31:39.276 "data_offset": 2048, 00:31:39.276 "data_size": 63488 00:31:39.276 }, 00:31:39.276 { 00:31:39.276 "name": "BaseBdev3", 00:31:39.276 "uuid": "f340783e-f2ce-4ef7-8504-035b8525eb32", 00:31:39.276 "is_configured": true, 00:31:39.276 "data_offset": 2048, 00:31:39.276 "data_size": 63488 00:31:39.276 } 00:31:39.276 ] 00:31:39.276 }' 00:31:39.276 18:58:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:39.276 18:58:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:39.843 18:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:39.843 18:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:39.844 18:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:31:39.844 18:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:31:40.102 [2024-07-25 18:58:40.564234] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:40.102 18:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:40.102 18:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:40.102 18:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:40.102 18:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:40.102 18:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:40.102 18:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:40.102 18:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:40.102 18:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:40.102 18:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:40.102 18:58:40 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:40.102 18:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:40.102 18:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:40.361 18:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:40.361 "name": "Existed_Raid", 00:31:40.361 "uuid": "ee208269-37e4-4fed-b055-d1ac3afcb8a1", 00:31:40.361 "strip_size_kb": 64, 00:31:40.361 "state": "configuring", 00:31:40.361 "raid_level": "raid5f", 00:31:40.361 "superblock": true, 00:31:40.361 "num_base_bdevs": 3, 00:31:40.361 "num_base_bdevs_discovered": 1, 00:31:40.361 "num_base_bdevs_operational": 3, 00:31:40.361 "base_bdevs_list": [ 00:31:40.361 { 00:31:40.361 "name": "BaseBdev1", 00:31:40.361 "uuid": "16c61c1e-42ca-44b9-9d23-9342fb139ab7", 00:31:40.361 "is_configured": true, 00:31:40.361 "data_offset": 2048, 00:31:40.361 "data_size": 63488 00:31:40.361 }, 00:31:40.361 { 00:31:40.361 "name": null, 00:31:40.361 "uuid": "1ec43704-3943-4a6f-91a1-16f7c1e6e7bb", 00:31:40.361 "is_configured": false, 00:31:40.361 "data_offset": 2048, 00:31:40.361 "data_size": 63488 00:31:40.361 }, 00:31:40.361 { 00:31:40.361 "name": null, 00:31:40.361 "uuid": "f340783e-f2ce-4ef7-8504-035b8525eb32", 00:31:40.361 "is_configured": false, 00:31:40.361 "data_offset": 2048, 00:31:40.361 "data_size": 63488 00:31:40.361 } 00:31:40.361 ] 00:31:40.361 }' 00:31:40.361 18:58:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:40.361 18:58:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.929 18:58:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:40.929 18:58:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:40.929 18:58:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:31:40.929 18:58:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:31:41.187 [2024-07-25 18:58:41.728469] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:41.187 18:58:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:41.187 18:58:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:41.187 18:58:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:41.187 18:58:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:41.187 18:58:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:41.187 18:58:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:41.187 18:58:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:41.187 18:58:41 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:41.187 18:58:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:41.187 18:58:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:41.187 18:58:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:41.187 18:58:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:41.517 18:58:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:41.517 "name": "Existed_Raid", 00:31:41.517 "uuid": "ee208269-37e4-4fed-b055-d1ac3afcb8a1", 00:31:41.517 "strip_size_kb": 64, 00:31:41.517 "state": "configuring", 00:31:41.517 "raid_level": "raid5f", 00:31:41.517 "superblock": true, 00:31:41.517 "num_base_bdevs": 3, 00:31:41.517 "num_base_bdevs_discovered": 2, 00:31:41.517 "num_base_bdevs_operational": 3, 00:31:41.517 "base_bdevs_list": [ 00:31:41.517 { 00:31:41.517 "name": "BaseBdev1", 00:31:41.517 "uuid": "16c61c1e-42ca-44b9-9d23-9342fb139ab7", 00:31:41.517 "is_configured": true, 00:31:41.517 "data_offset": 2048, 00:31:41.517 "data_size": 63488 00:31:41.517 }, 00:31:41.517 { 00:31:41.517 "name": null, 00:31:41.517 "uuid": "1ec43704-3943-4a6f-91a1-16f7c1e6e7bb", 00:31:41.517 "is_configured": false, 00:31:41.517 "data_offset": 2048, 00:31:41.517 "data_size": 63488 00:31:41.517 }, 00:31:41.517 { 00:31:41.517 "name": "BaseBdev3", 00:31:41.517 "uuid": "f340783e-f2ce-4ef7-8504-035b8525eb32", 00:31:41.517 "is_configured": true, 00:31:41.517 "data_offset": 2048, 00:31:41.517 "data_size": 63488 00:31:41.517 } 00:31:41.517 ] 00:31:41.517 }' 00:31:41.517 18:58:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:41.517 18:58:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:42.084 18:58:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:42.084 18:58:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:42.344 18:58:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:31:42.344 18:58:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:31:42.344 [2024-07-25 18:58:42.918372] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:42.603 18:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:42.603 18:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:42.603 18:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:42.603 18:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:42.603 18:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:42.603 18:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:42.603 18:58:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:42.603 18:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:42.603 18:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:42.603 18:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:42.603 18:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:42.603 18:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:42.862 18:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:42.862 "name": "Existed_Raid", 00:31:42.862 "uuid": "ee208269-37e4-4fed-b055-d1ac3afcb8a1", 00:31:42.862 "strip_size_kb": 64, 00:31:42.862 "state": "configuring", 00:31:42.862 "raid_level": "raid5f", 00:31:42.862 "superblock": true, 00:31:42.862 "num_base_bdevs": 3, 00:31:42.862 "num_base_bdevs_discovered": 1, 00:31:42.862 "num_base_bdevs_operational": 3, 00:31:42.862 "base_bdevs_list": [ 00:31:42.862 { 00:31:42.862 "name": null, 00:31:42.862 "uuid": "16c61c1e-42ca-44b9-9d23-9342fb139ab7", 00:31:42.862 "is_configured": false, 00:31:42.862 "data_offset": 2048, 00:31:42.862 "data_size": 63488 00:31:42.862 }, 00:31:42.862 { 00:31:42.862 "name": null, 00:31:42.862 "uuid": "1ec43704-3943-4a6f-91a1-16f7c1e6e7bb", 00:31:42.863 "is_configured": false, 00:31:42.863 "data_offset": 2048, 00:31:42.863 "data_size": 63488 00:31:42.863 }, 00:31:42.863 { 00:31:42.863 "name": "BaseBdev3", 00:31:42.863 "uuid": "f340783e-f2ce-4ef7-8504-035b8525eb32", 00:31:42.863 "is_configured": true, 00:31:42.863 "data_offset": 2048, 00:31:42.863 "data_size": 63488 00:31:42.863 } 00:31:42.863 ] 00:31:42.863 }' 00:31:42.863 18:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:42.863 18:58:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:43.430 18:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:43.430 18:58:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:43.688 18:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:31:43.688 18:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:31:43.947 [2024-07-25 18:58:44.274097] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:43.947 18:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:43.947 18:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:43.947 18:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:43.947 18:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:43.947 18:58:44 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:43.947 18:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:43.947 18:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:43.947 18:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:43.948 18:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:43.948 18:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:43.948 18:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:43.948 18:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:43.948 18:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:43.948 "name": "Existed_Raid", 00:31:43.948 "uuid": "ee208269-37e4-4fed-b055-d1ac3afcb8a1", 00:31:43.948 "strip_size_kb": 64, 00:31:43.948 "state": "configuring", 00:31:43.948 "raid_level": "raid5f", 00:31:43.948 "superblock": true, 00:31:43.948 "num_base_bdevs": 3, 00:31:43.948 "num_base_bdevs_discovered": 2, 00:31:43.948 "num_base_bdevs_operational": 3, 00:31:43.948 "base_bdevs_list": [ 00:31:43.948 { 00:31:43.948 "name": null, 00:31:43.948 "uuid": "16c61c1e-42ca-44b9-9d23-9342fb139ab7", 00:31:43.948 "is_configured": false, 00:31:43.948 "data_offset": 2048, 00:31:43.948 "data_size": 63488 00:31:43.948 }, 00:31:43.948 { 00:31:43.948 "name": "BaseBdev2", 00:31:43.948 "uuid": "1ec43704-3943-4a6f-91a1-16f7c1e6e7bb", 00:31:43.948 "is_configured": true, 00:31:43.948 "data_offset": 2048, 00:31:43.948 "data_size": 63488 00:31:43.948 }, 00:31:43.948 { 00:31:43.948 "name": "BaseBdev3", 00:31:43.948 "uuid": "f340783e-f2ce-4ef7-8504-035b8525eb32", 00:31:43.948 "is_configured": true, 00:31:43.948 "data_offset": 2048, 00:31:43.948 "data_size": 63488 00:31:43.948 } 00:31:43.948 ] 00:31:43.948 }' 00:31:43.948 18:58:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:43.948 18:58:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:44.563 18:58:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:44.563 18:58:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:44.822 18:58:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:31:44.822 18:58:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:44.822 18:58:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:31:45.082 18:58:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 16c61c1e-42ca-44b9-9d23-9342fb139ab7 00:31:45.341 [2024-07-25 18:58:45.664124] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:31:45.341 [2024-07-25 18:58:45.664384] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:31:45.341 [2024-07-25 18:58:45.664397] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:45.341 [2024-07-25 18:58:45.664491] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:31:45.341 NewBaseBdev 00:31:45.341 [2024-07-25 18:58:45.668410] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:31:45.341 [2024-07-25 18:58:45.668435] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:31:45.341 [2024-07-25 18:58:45.668603] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:45.341 18:58:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:31:45.341 18:58:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:31:45.341 18:58:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:45.341 18:58:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:45.341 18:58:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:45.341 18:58:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:45.341 18:58:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:45.600 18:58:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:31:45.600 [ 00:31:45.600 { 00:31:45.600 "name": "NewBaseBdev", 00:31:45.600 "aliases": [ 00:31:45.600 "16c61c1e-42ca-44b9-9d23-9342fb139ab7" 00:31:45.600 ], 00:31:45.600 "product_name": "Malloc disk", 00:31:45.600 "block_size": 512, 00:31:45.600 "num_blocks": 65536, 00:31:45.600 "uuid": "16c61c1e-42ca-44b9-9d23-9342fb139ab7", 00:31:45.600 "assigned_rate_limits": { 00:31:45.600 "rw_ios_per_sec": 0, 00:31:45.600 "rw_mbytes_per_sec": 0, 00:31:45.600 "r_mbytes_per_sec": 0, 00:31:45.600 "w_mbytes_per_sec": 0 00:31:45.600 }, 00:31:45.600 "claimed": true, 00:31:45.600 "claim_type": "exclusive_write", 00:31:45.600 "zoned": false, 00:31:45.600 "supported_io_types": { 00:31:45.600 "read": true, 00:31:45.600 "write": true, 00:31:45.600 "unmap": true, 00:31:45.600 "flush": true, 00:31:45.600 "reset": true, 00:31:45.600 "nvme_admin": false, 00:31:45.600 "nvme_io": false, 00:31:45.600 "nvme_io_md": false, 00:31:45.600 "write_zeroes": true, 00:31:45.600 "zcopy": true, 00:31:45.600 "get_zone_info": false, 00:31:45.600 "zone_management": false, 00:31:45.600 "zone_append": false, 00:31:45.600 "compare": false, 00:31:45.600 "compare_and_write": false, 00:31:45.600 "abort": true, 00:31:45.600 "seek_hole": false, 00:31:45.600 "seek_data": false, 00:31:45.600 "copy": true, 00:31:45.600 "nvme_iov_md": false 00:31:45.600 }, 00:31:45.600 "memory_domains": [ 00:31:45.600 { 00:31:45.600 "dma_device_id": "system", 00:31:45.600 "dma_device_type": 1 00:31:45.600 }, 00:31:45.600 { 00:31:45.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:45.600 "dma_device_type": 2 00:31:45.600 } 00:31:45.600 ], 00:31:45.600 "driver_specific": {} 00:31:45.600 } 00:31:45.600 ] 00:31:45.600 18:58:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:31:45.601 18:58:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:31:45.601 18:58:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:45.601 18:58:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:45.601 18:58:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:45.601 18:58:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:45.601 18:58:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:45.601 18:58:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:45.601 18:58:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:45.601 18:58:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:45.601 18:58:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:45.601 18:58:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:45.601 18:58:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:45.860 18:58:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:45.860 "name": "Existed_Raid", 00:31:45.860 "uuid": "ee208269-37e4-4fed-b055-d1ac3afcb8a1", 00:31:45.860 "strip_size_kb": 64, 00:31:45.860 "state": "online", 00:31:45.860 "raid_level": "raid5f", 00:31:45.860 "superblock": true, 00:31:45.860 "num_base_bdevs": 3, 00:31:45.860 "num_base_bdevs_discovered": 3, 00:31:45.860 "num_base_bdevs_operational": 3, 00:31:45.860 "base_bdevs_list": [ 00:31:45.860 { 00:31:45.860 "name": "NewBaseBdev", 00:31:45.860 "uuid": "16c61c1e-42ca-44b9-9d23-9342fb139ab7", 00:31:45.860 "is_configured": true, 00:31:45.860 "data_offset": 2048, 00:31:45.860 "data_size": 63488 00:31:45.860 }, 00:31:45.860 { 00:31:45.860 "name": "BaseBdev2", 00:31:45.860 "uuid": "1ec43704-3943-4a6f-91a1-16f7c1e6e7bb", 00:31:45.860 "is_configured": true, 00:31:45.860 "data_offset": 2048, 00:31:45.860 "data_size": 63488 00:31:45.860 }, 00:31:45.860 { 00:31:45.860 "name": "BaseBdev3", 00:31:45.860 "uuid": "f340783e-f2ce-4ef7-8504-035b8525eb32", 00:31:45.860 "is_configured": true, 00:31:45.860 "data_offset": 2048, 00:31:45.860 "data_size": 63488 00:31:45.860 } 00:31:45.860 ] 00:31:45.860 }' 00:31:45.860 18:58:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:45.860 18:58:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:46.429 18:58:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:31:46.429 18:58:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:31:46.429 18:58:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:46.429 18:58:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:46.429 18:58:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:46.429 18:58:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:31:46.429 18:58:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:31:46.429 18:58:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:46.688 [2024-07-25 18:58:47.043516] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:46.688 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:46.688 "name": "Existed_Raid", 00:31:46.688 "aliases": [ 00:31:46.688 "ee208269-37e4-4fed-b055-d1ac3afcb8a1" 00:31:46.688 ], 00:31:46.688 "product_name": "Raid Volume", 00:31:46.688 "block_size": 512, 00:31:46.688 "num_blocks": 126976, 00:31:46.688 "uuid": "ee208269-37e4-4fed-b055-d1ac3afcb8a1", 00:31:46.688 "assigned_rate_limits": { 00:31:46.688 "rw_ios_per_sec": 0, 00:31:46.688 "rw_mbytes_per_sec": 0, 00:31:46.688 "r_mbytes_per_sec": 0, 00:31:46.688 "w_mbytes_per_sec": 0 00:31:46.688 }, 00:31:46.688 "claimed": false, 00:31:46.688 "zoned": false, 00:31:46.688 "supported_io_types": { 00:31:46.688 "read": true, 00:31:46.688 "write": true, 00:31:46.688 "unmap": false, 00:31:46.688 "flush": false, 00:31:46.688 "reset": true, 00:31:46.688 "nvme_admin": false, 00:31:46.688 "nvme_io": false, 00:31:46.688 "nvme_io_md": false, 00:31:46.688 "write_zeroes": true, 00:31:46.688 "zcopy": false, 00:31:46.688 "get_zone_info": false, 00:31:46.688 "zone_management": false, 00:31:46.688 "zone_append": false, 00:31:46.688 "compare": false, 00:31:46.688 "compare_and_write": false, 00:31:46.688 "abort": false, 00:31:46.688 "seek_hole": false, 00:31:46.688 "seek_data": false, 00:31:46.688 "copy": false, 00:31:46.688 "nvme_iov_md": false 00:31:46.688 }, 00:31:46.688 "driver_specific": { 00:31:46.688 "raid": { 00:31:46.688 "uuid": "ee208269-37e4-4fed-b055-d1ac3afcb8a1", 00:31:46.688 "strip_size_kb": 64, 00:31:46.688 "state": "online", 00:31:46.688 "raid_level": "raid5f", 00:31:46.688 "superblock": true, 00:31:46.688 "num_base_bdevs": 3, 00:31:46.688 "num_base_bdevs_discovered": 3, 00:31:46.688 "num_base_bdevs_operational": 3, 00:31:46.688 "base_bdevs_list": [ 00:31:46.688 { 00:31:46.688 "name": "NewBaseBdev", 00:31:46.688 "uuid": "16c61c1e-42ca-44b9-9d23-9342fb139ab7", 00:31:46.688 "is_configured": true, 00:31:46.688 "data_offset": 2048, 00:31:46.688 "data_size": 63488 00:31:46.688 }, 00:31:46.688 { 00:31:46.688 "name": "BaseBdev2", 00:31:46.688 "uuid": "1ec43704-3943-4a6f-91a1-16f7c1e6e7bb", 00:31:46.688 "is_configured": true, 00:31:46.688 "data_offset": 2048, 00:31:46.688 "data_size": 63488 00:31:46.688 }, 00:31:46.688 { 00:31:46.688 "name": "BaseBdev3", 00:31:46.688 "uuid": "f340783e-f2ce-4ef7-8504-035b8525eb32", 00:31:46.688 "is_configured": true, 00:31:46.688 "data_offset": 2048, 00:31:46.688 "data_size": 63488 00:31:46.688 } 00:31:46.688 ] 00:31:46.688 } 00:31:46.688 } 00:31:46.688 }' 00:31:46.688 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:46.688 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:31:46.688 BaseBdev2 00:31:46.688 BaseBdev3' 00:31:46.688 18:58:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:46.688 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:46.688 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:31:46.948 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:46.948 "name": "NewBaseBdev", 00:31:46.948 "aliases": [ 00:31:46.948 "16c61c1e-42ca-44b9-9d23-9342fb139ab7" 00:31:46.948 ], 00:31:46.948 "product_name": "Malloc disk", 00:31:46.948 "block_size": 512, 00:31:46.948 "num_blocks": 65536, 00:31:46.948 "uuid": "16c61c1e-42ca-44b9-9d23-9342fb139ab7", 00:31:46.948 "assigned_rate_limits": { 00:31:46.948 "rw_ios_per_sec": 0, 00:31:46.948 "rw_mbytes_per_sec": 0, 00:31:46.948 "r_mbytes_per_sec": 0, 00:31:46.948 "w_mbytes_per_sec": 0 00:31:46.948 }, 00:31:46.948 "claimed": true, 00:31:46.948 "claim_type": "exclusive_write", 00:31:46.948 "zoned": false, 00:31:46.948 "supported_io_types": { 00:31:46.948 "read": true, 00:31:46.948 "write": true, 00:31:46.948 "unmap": true, 00:31:46.948 "flush": true, 00:31:46.948 "reset": true, 00:31:46.948 "nvme_admin": false, 00:31:46.948 "nvme_io": false, 00:31:46.948 "nvme_io_md": false, 00:31:46.948 "write_zeroes": true, 00:31:46.948 "zcopy": true, 00:31:46.948 "get_zone_info": false, 00:31:46.948 "zone_management": false, 00:31:46.948 "zone_append": false, 00:31:46.948 "compare": false, 00:31:46.948 "compare_and_write": false, 00:31:46.948 "abort": true, 00:31:46.948 "seek_hole": false, 00:31:46.948 "seek_data": false, 00:31:46.948 "copy": true, 00:31:46.948 "nvme_iov_md": false 00:31:46.948 }, 00:31:46.948 "memory_domains": [ 00:31:46.948 { 00:31:46.948 "dma_device_id": "system", 00:31:46.948 "dma_device_type": 1 00:31:46.948 }, 00:31:46.948 { 00:31:46.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:46.948 "dma_device_type": 2 00:31:46.948 } 00:31:46.948 ], 00:31:46.948 "driver_specific": {} 00:31:46.948 }' 00:31:46.948 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:46.948 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:46.948 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:46.948 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:46.948 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:46.948 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:46.948 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:46.948 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:47.209 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:47.209 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:47.209 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:47.209 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:47.209 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:47.209 18:58:47 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:47.209 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:31:47.468 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:47.468 "name": "BaseBdev2", 00:31:47.468 "aliases": [ 00:31:47.468 "1ec43704-3943-4a6f-91a1-16f7c1e6e7bb" 00:31:47.468 ], 00:31:47.468 "product_name": "Malloc disk", 00:31:47.468 "block_size": 512, 00:31:47.468 "num_blocks": 65536, 00:31:47.468 "uuid": "1ec43704-3943-4a6f-91a1-16f7c1e6e7bb", 00:31:47.468 "assigned_rate_limits": { 00:31:47.468 "rw_ios_per_sec": 0, 00:31:47.468 "rw_mbytes_per_sec": 0, 00:31:47.468 "r_mbytes_per_sec": 0, 00:31:47.468 "w_mbytes_per_sec": 0 00:31:47.468 }, 00:31:47.468 "claimed": true, 00:31:47.468 "claim_type": "exclusive_write", 00:31:47.468 "zoned": false, 00:31:47.468 "supported_io_types": { 00:31:47.468 "read": true, 00:31:47.468 "write": true, 00:31:47.468 "unmap": true, 00:31:47.468 "flush": true, 00:31:47.468 "reset": true, 00:31:47.468 "nvme_admin": false, 00:31:47.468 "nvme_io": false, 00:31:47.468 "nvme_io_md": false, 00:31:47.468 "write_zeroes": true, 00:31:47.468 "zcopy": true, 00:31:47.468 "get_zone_info": false, 00:31:47.468 "zone_management": false, 00:31:47.468 "zone_append": false, 00:31:47.468 "compare": false, 00:31:47.468 "compare_and_write": false, 00:31:47.468 "abort": true, 00:31:47.468 "seek_hole": false, 00:31:47.468 "seek_data": false, 00:31:47.468 "copy": true, 00:31:47.468 "nvme_iov_md": false 00:31:47.468 }, 00:31:47.468 "memory_domains": [ 00:31:47.468 { 00:31:47.468 "dma_device_id": "system", 00:31:47.468 "dma_device_type": 1 00:31:47.468 }, 00:31:47.468 { 00:31:47.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:47.468 "dma_device_type": 2 00:31:47.468 } 00:31:47.468 ], 00:31:47.468 "driver_specific": {} 00:31:47.468 }' 00:31:47.468 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:47.468 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:47.468 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:47.468 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:47.468 18:58:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:47.468 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:47.468 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:47.728 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:47.728 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:47.728 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:47.728 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:47.728 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:47.728 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:47.728 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:31:47.728 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:47.987 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:47.987 "name": "BaseBdev3", 00:31:47.987 "aliases": [ 00:31:47.987 "f340783e-f2ce-4ef7-8504-035b8525eb32" 00:31:47.987 ], 00:31:47.987 "product_name": "Malloc disk", 00:31:47.987 "block_size": 512, 00:31:47.987 "num_blocks": 65536, 00:31:47.987 "uuid": "f340783e-f2ce-4ef7-8504-035b8525eb32", 00:31:47.987 "assigned_rate_limits": { 00:31:47.987 "rw_ios_per_sec": 0, 00:31:47.987 "rw_mbytes_per_sec": 0, 00:31:47.987 "r_mbytes_per_sec": 0, 00:31:47.987 "w_mbytes_per_sec": 0 00:31:47.987 }, 00:31:47.987 "claimed": true, 00:31:47.987 "claim_type": "exclusive_write", 00:31:47.987 "zoned": false, 00:31:47.987 "supported_io_types": { 00:31:47.987 "read": true, 00:31:47.987 "write": true, 00:31:47.987 "unmap": true, 00:31:47.987 "flush": true, 00:31:47.987 "reset": true, 00:31:47.987 "nvme_admin": false, 00:31:47.987 "nvme_io": false, 00:31:47.987 "nvme_io_md": false, 00:31:47.987 "write_zeroes": true, 00:31:47.987 "zcopy": true, 00:31:47.987 "get_zone_info": false, 00:31:47.987 "zone_management": false, 00:31:47.987 "zone_append": false, 00:31:47.987 "compare": false, 00:31:47.987 "compare_and_write": false, 00:31:47.987 "abort": true, 00:31:47.987 "seek_hole": false, 00:31:47.987 "seek_data": false, 00:31:47.987 "copy": true, 00:31:47.987 "nvme_iov_md": false 00:31:47.987 }, 00:31:47.987 "memory_domains": [ 00:31:47.987 { 00:31:47.987 "dma_device_id": "system", 00:31:47.987 "dma_device_type": 1 00:31:47.987 }, 00:31:47.987 { 00:31:47.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:47.987 "dma_device_type": 2 00:31:47.987 } 00:31:47.987 ], 00:31:47.987 "driver_specific": {} 00:31:47.987 }' 00:31:47.987 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:47.987 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:47.987 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:47.987 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:48.246 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:48.246 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:48.246 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:48.246 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:48.246 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:48.246 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:48.246 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:48.246 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:48.246 18:58:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:48.506 [2024-07-25 18:58:49.003754] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:48.506 [2024-07-25 18:58:49.003788] 
bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:48.506 [2024-07-25 18:58:49.003864] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:48.506 [2024-07-25 18:58:49.004153] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:48.506 [2024-07-25 18:58:49.004167] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:31:48.506 18:58:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 150106 00:31:48.506 18:58:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 150106 ']' 00:31:48.506 18:58:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 150106 00:31:48.506 18:58:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:31:48.506 18:58:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:48.506 18:58:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 150106 00:31:48.506 18:58:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:48.506 18:58:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:48.506 18:58:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 150106' 00:31:48.506 killing process with pid 150106 00:31:48.506 18:58:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 150106 00:31:48.506 [2024-07-25 18:58:49.048348] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:48.506 18:58:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 150106 00:31:48.765 [2024-07-25 18:58:49.298830] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:50.144 18:58:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:31:50.144 00:31:50.144 real 0m27.617s 00:31:50.144 user 0m49.301s 00:31:50.144 sys 0m4.731s 00:31:50.144 18:58:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:50.144 18:58:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.144 ************************************ 00:31:50.144 END TEST raid5f_state_function_test_sb 00:31:50.144 ************************************ 00:31:50.144 18:58:50 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:31:50.144 18:58:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:50.144 18:58:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:50.144 18:58:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:50.144 ************************************ 00:31:50.144 START TEST raid5f_superblock_test 00:31:50.144 ************************************ 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid5f 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:31:50.144 18:58:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid5f '!=' raid1 ']' 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=151058 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 151058 /var/tmp/spdk-raid.sock 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 151058 ']' 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:50.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:50.144 18:58:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:50.144 [2024-07-25 18:58:50.641557] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
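The raid5f superblock test launched above begins by starting SPDK's standalone bdev_svc application on a private RPC socket and waiting for it to listen (the waitforlisten call in the trace). A minimal bring-up sketch is shown below; the binary path, socket, and -L bdev_raid debug flag are taken from the trace, while the polling loop is only a stand-in for the harness's waitforlisten helper.

    #!/usr/bin/env bash
    # Illustrative bring-up sketch for the superblock test environment.
    set -euo pipefail

    sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -L bdev_raid &
    raid_pid=$!

    # Poll until the app answers RPCs (a crude waitforlisten equivalent).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" bdev_get_bdevs > /dev/null 2>&1; do
        sleep 0.2
    done
    echo "bdev_svc (pid ${raid_pid}) is listening on ${sock}"

Once the socket answers, the malloc and passthru base bdevs (malloc1/pt1 and so on) are layered on top of it, as the subsequent trace shows.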
00:31:50.144 [2024-07-25 18:58:50.641786] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151058 ] 00:31:50.403 [2024-07-25 18:58:50.813152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.662 [2024-07-25 18:58:51.013459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:50.662 [2024-07-25 18:58:51.200707] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:51.230 18:58:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:51.230 18:58:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:31:51.230 18:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:31:51.230 18:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:31:51.230 18:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:31:51.230 18:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:31:51.230 18:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:31:51.230 18:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:51.230 18:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:31:51.230 18:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:51.230 18:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:31:51.230 malloc1 00:31:51.230 18:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:51.489 [2024-07-25 18:58:51.939316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:51.489 [2024-07-25 18:58:51.939410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:51.489 [2024-07-25 18:58:51.939443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:31:51.489 [2024-07-25 18:58:51.939463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:51.489 [2024-07-25 18:58:51.941991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:51.489 [2024-07-25 18:58:51.942036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:51.489 pt1 00:31:51.489 18:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:31:51.489 18:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:31:51.489 18:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:31:51.489 18:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:31:51.489 18:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:31:51.489 18:58:51 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:51.489 18:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:31:51.489 18:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:51.489 18:58:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:31:51.748 malloc2 00:31:51.748 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:51.748 [2024-07-25 18:58:52.322017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:51.748 [2024-07-25 18:58:52.322112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:51.748 [2024-07-25 18:58:52.322162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:31:51.748 [2024-07-25 18:58:52.322184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:51.748 [2024-07-25 18:58:52.324651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:51.748 [2024-07-25 18:58:52.324697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:51.748 pt2 00:31:52.006 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:31:52.006 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:31:52.006 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:31:52.006 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:31:52.006 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:31:52.006 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:52.006 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:31:52.006 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:52.006 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:31:52.006 malloc3 00:31:52.006 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:52.265 [2024-07-25 18:58:52.704834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:52.265 [2024-07-25 18:58:52.704936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:52.265 [2024-07-25 18:58:52.704974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:31:52.265 [2024-07-25 18:58:52.705000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:52.265 [2024-07-25 18:58:52.707631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:52.265 [2024-07-25 18:58:52.707688] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:52.265 pt3 00:31:52.265 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:31:52.265 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:31:52.265 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:31:52.523 [2024-07-25 18:58:52.880915] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:52.523 [2024-07-25 18:58:52.882991] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:52.523 [2024-07-25 18:58:52.883058] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:52.523 [2024-07-25 18:58:52.883213] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:31:52.523 [2024-07-25 18:58:52.883222] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:52.523 [2024-07-25 18:58:52.883339] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:31:52.523 [2024-07-25 18:58:52.887453] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:31:52.523 [2024-07-25 18:58:52.887475] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:31:52.523 [2024-07-25 18:58:52.887657] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:52.524 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:52.524 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:52.524 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:52.524 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:52.524 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:52.524 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:52.524 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:52.524 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:52.524 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:52.524 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:52.524 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:52.524 18:58:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:52.524 18:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:52.524 "name": "raid_bdev1", 00:31:52.524 "uuid": "99f013f9-feb5-4aae-978e-c54b7a24e0c7", 00:31:52.524 "strip_size_kb": 64, 00:31:52.524 "state": "online", 00:31:52.524 "raid_level": "raid5f", 00:31:52.524 "superblock": true, 00:31:52.524 "num_base_bdevs": 3, 00:31:52.524 "num_base_bdevs_discovered": 3, 00:31:52.524 "num_base_bdevs_operational": 3, 00:31:52.524 
"base_bdevs_list": [ 00:31:52.524 { 00:31:52.524 "name": "pt1", 00:31:52.524 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:52.524 "is_configured": true, 00:31:52.524 "data_offset": 2048, 00:31:52.524 "data_size": 63488 00:31:52.524 }, 00:31:52.524 { 00:31:52.524 "name": "pt2", 00:31:52.524 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:52.524 "is_configured": true, 00:31:52.524 "data_offset": 2048, 00:31:52.524 "data_size": 63488 00:31:52.524 }, 00:31:52.524 { 00:31:52.524 "name": "pt3", 00:31:52.524 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:52.524 "is_configured": true, 00:31:52.524 "data_offset": 2048, 00:31:52.524 "data_size": 63488 00:31:52.524 } 00:31:52.524 ] 00:31:52.524 }' 00:31:52.524 18:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:52.524 18:58:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:53.092 18:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:31:53.092 18:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:31:53.092 18:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:53.092 18:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:53.092 18:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:53.092 18:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:31:53.092 18:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:53.092 18:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:53.350 [2024-07-25 18:58:53.834225] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:53.350 18:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:53.350 "name": "raid_bdev1", 00:31:53.350 "aliases": [ 00:31:53.350 "99f013f9-feb5-4aae-978e-c54b7a24e0c7" 00:31:53.350 ], 00:31:53.350 "product_name": "Raid Volume", 00:31:53.350 "block_size": 512, 00:31:53.350 "num_blocks": 126976, 00:31:53.350 "uuid": "99f013f9-feb5-4aae-978e-c54b7a24e0c7", 00:31:53.350 "assigned_rate_limits": { 00:31:53.350 "rw_ios_per_sec": 0, 00:31:53.350 "rw_mbytes_per_sec": 0, 00:31:53.350 "r_mbytes_per_sec": 0, 00:31:53.350 "w_mbytes_per_sec": 0 00:31:53.350 }, 00:31:53.350 "claimed": false, 00:31:53.350 "zoned": false, 00:31:53.350 "supported_io_types": { 00:31:53.350 "read": true, 00:31:53.350 "write": true, 00:31:53.350 "unmap": false, 00:31:53.350 "flush": false, 00:31:53.350 "reset": true, 00:31:53.350 "nvme_admin": false, 00:31:53.350 "nvme_io": false, 00:31:53.350 "nvme_io_md": false, 00:31:53.350 "write_zeroes": true, 00:31:53.350 "zcopy": false, 00:31:53.350 "get_zone_info": false, 00:31:53.350 "zone_management": false, 00:31:53.350 "zone_append": false, 00:31:53.350 "compare": false, 00:31:53.350 "compare_and_write": false, 00:31:53.350 "abort": false, 00:31:53.350 "seek_hole": false, 00:31:53.350 "seek_data": false, 00:31:53.350 "copy": false, 00:31:53.350 "nvme_iov_md": false 00:31:53.350 }, 00:31:53.350 "driver_specific": { 00:31:53.350 "raid": { 00:31:53.350 "uuid": "99f013f9-feb5-4aae-978e-c54b7a24e0c7", 00:31:53.350 "strip_size_kb": 64, 00:31:53.350 "state": "online", 00:31:53.350 "raid_level": "raid5f", 
00:31:53.350 "superblock": true, 00:31:53.350 "num_base_bdevs": 3, 00:31:53.350 "num_base_bdevs_discovered": 3, 00:31:53.350 "num_base_bdevs_operational": 3, 00:31:53.350 "base_bdevs_list": [ 00:31:53.350 { 00:31:53.350 "name": "pt1", 00:31:53.350 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:53.350 "is_configured": true, 00:31:53.350 "data_offset": 2048, 00:31:53.350 "data_size": 63488 00:31:53.350 }, 00:31:53.350 { 00:31:53.350 "name": "pt2", 00:31:53.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:53.350 "is_configured": true, 00:31:53.350 "data_offset": 2048, 00:31:53.350 "data_size": 63488 00:31:53.350 }, 00:31:53.350 { 00:31:53.350 "name": "pt3", 00:31:53.350 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:53.350 "is_configured": true, 00:31:53.350 "data_offset": 2048, 00:31:53.350 "data_size": 63488 00:31:53.350 } 00:31:53.350 ] 00:31:53.350 } 00:31:53.350 } 00:31:53.350 }' 00:31:53.350 18:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:53.350 18:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:31:53.350 pt2 00:31:53.350 pt3' 00:31:53.350 18:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:53.350 18:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:31:53.350 18:58:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:53.608 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:53.608 "name": "pt1", 00:31:53.608 "aliases": [ 00:31:53.608 "00000000-0000-0000-0000-000000000001" 00:31:53.608 ], 00:31:53.608 "product_name": "passthru", 00:31:53.608 "block_size": 512, 00:31:53.608 "num_blocks": 65536, 00:31:53.608 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:53.608 "assigned_rate_limits": { 00:31:53.608 "rw_ios_per_sec": 0, 00:31:53.608 "rw_mbytes_per_sec": 0, 00:31:53.608 "r_mbytes_per_sec": 0, 00:31:53.608 "w_mbytes_per_sec": 0 00:31:53.608 }, 00:31:53.608 "claimed": true, 00:31:53.608 "claim_type": "exclusive_write", 00:31:53.608 "zoned": false, 00:31:53.608 "supported_io_types": { 00:31:53.608 "read": true, 00:31:53.608 "write": true, 00:31:53.608 "unmap": true, 00:31:53.608 "flush": true, 00:31:53.608 "reset": true, 00:31:53.608 "nvme_admin": false, 00:31:53.608 "nvme_io": false, 00:31:53.608 "nvme_io_md": false, 00:31:53.608 "write_zeroes": true, 00:31:53.608 "zcopy": true, 00:31:53.608 "get_zone_info": false, 00:31:53.608 "zone_management": false, 00:31:53.608 "zone_append": false, 00:31:53.608 "compare": false, 00:31:53.608 "compare_and_write": false, 00:31:53.608 "abort": true, 00:31:53.608 "seek_hole": false, 00:31:53.608 "seek_data": false, 00:31:53.608 "copy": true, 00:31:53.608 "nvme_iov_md": false 00:31:53.608 }, 00:31:53.608 "memory_domains": [ 00:31:53.608 { 00:31:53.608 "dma_device_id": "system", 00:31:53.608 "dma_device_type": 1 00:31:53.608 }, 00:31:53.608 { 00:31:53.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:53.608 "dma_device_type": 2 00:31:53.608 } 00:31:53.608 ], 00:31:53.608 "driver_specific": { 00:31:53.608 "passthru": { 00:31:53.608 "name": "pt1", 00:31:53.608 "base_bdev_name": "malloc1" 00:31:53.608 } 00:31:53.608 } 00:31:53.608 }' 00:31:53.608 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
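For readability, here is a condensed sketch of the create-and-verify RPC sequence the test has driven up to this point; it is an annotation rather than part of the captured trace, and every command, flag, path, and UUID is taken verbatim from the bdev_raid.sh@440-446 lines traced above, with the loop mirroring the (( i <= num_base_bdevs )) loop shown there.

```bash
# Condensed replay of the raid5f superblock-test setup traced above.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

for i in 1 2 3; do
  # 32 MiB malloc bdev with 512-byte blocks, wrapped in a passthru bdev pt$i.
  $rpc bdev_malloc_create 32 512 -b malloc$i
  $rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
done

# raid5f volume over the three passthru bdevs, 64 KiB strip size, with superblock (-s).
$rpc bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s

# State check mirrored by verify_raid_bdev_state: expects "online", raid5f,
# 3 base bdevs discovered and operational.
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
```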
00:31:53.608 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:53.608 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:53.608 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:53.608 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:53.866 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:53.866 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:53.866 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:53.866 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:53.866 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:53.866 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:53.866 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:53.866 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:53.866 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:31:53.866 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:54.125 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:54.125 "name": "pt2", 00:31:54.125 "aliases": [ 00:31:54.125 "00000000-0000-0000-0000-000000000002" 00:31:54.125 ], 00:31:54.125 "product_name": "passthru", 00:31:54.125 "block_size": 512, 00:31:54.125 "num_blocks": 65536, 00:31:54.125 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:54.125 "assigned_rate_limits": { 00:31:54.125 "rw_ios_per_sec": 0, 00:31:54.125 "rw_mbytes_per_sec": 0, 00:31:54.125 "r_mbytes_per_sec": 0, 00:31:54.125 "w_mbytes_per_sec": 0 00:31:54.125 }, 00:31:54.125 "claimed": true, 00:31:54.125 "claim_type": "exclusive_write", 00:31:54.125 "zoned": false, 00:31:54.125 "supported_io_types": { 00:31:54.125 "read": true, 00:31:54.125 "write": true, 00:31:54.125 "unmap": true, 00:31:54.125 "flush": true, 00:31:54.125 "reset": true, 00:31:54.125 "nvme_admin": false, 00:31:54.125 "nvme_io": false, 00:31:54.125 "nvme_io_md": false, 00:31:54.125 "write_zeroes": true, 00:31:54.125 "zcopy": true, 00:31:54.125 "get_zone_info": false, 00:31:54.125 "zone_management": false, 00:31:54.125 "zone_append": false, 00:31:54.125 "compare": false, 00:31:54.125 "compare_and_write": false, 00:31:54.125 "abort": true, 00:31:54.125 "seek_hole": false, 00:31:54.125 "seek_data": false, 00:31:54.125 "copy": true, 00:31:54.125 "nvme_iov_md": false 00:31:54.125 }, 00:31:54.125 "memory_domains": [ 00:31:54.125 { 00:31:54.125 "dma_device_id": "system", 00:31:54.125 "dma_device_type": 1 00:31:54.125 }, 00:31:54.125 { 00:31:54.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:54.125 "dma_device_type": 2 00:31:54.125 } 00:31:54.125 ], 00:31:54.125 "driver_specific": { 00:31:54.125 "passthru": { 00:31:54.125 "name": "pt2", 00:31:54.125 "base_bdev_name": "malloc2" 00:31:54.125 } 00:31:54.125 } 00:31:54.125 }' 00:31:54.125 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:54.125 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:54.125 18:58:54 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:54.125 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:54.383 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:54.383 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:54.383 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:54.383 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:54.383 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:54.383 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:54.383 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:54.383 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:54.383 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:54.383 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:54.383 18:58:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:31:54.642 18:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:54.642 "name": "pt3", 00:31:54.642 "aliases": [ 00:31:54.642 "00000000-0000-0000-0000-000000000003" 00:31:54.642 ], 00:31:54.642 "product_name": "passthru", 00:31:54.642 "block_size": 512, 00:31:54.642 "num_blocks": 65536, 00:31:54.642 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:54.642 "assigned_rate_limits": { 00:31:54.642 "rw_ios_per_sec": 0, 00:31:54.642 "rw_mbytes_per_sec": 0, 00:31:54.642 "r_mbytes_per_sec": 0, 00:31:54.642 "w_mbytes_per_sec": 0 00:31:54.642 }, 00:31:54.642 "claimed": true, 00:31:54.642 "claim_type": "exclusive_write", 00:31:54.642 "zoned": false, 00:31:54.642 "supported_io_types": { 00:31:54.642 "read": true, 00:31:54.642 "write": true, 00:31:54.642 "unmap": true, 00:31:54.642 "flush": true, 00:31:54.642 "reset": true, 00:31:54.642 "nvme_admin": false, 00:31:54.642 "nvme_io": false, 00:31:54.642 "nvme_io_md": false, 00:31:54.642 "write_zeroes": true, 00:31:54.642 "zcopy": true, 00:31:54.642 "get_zone_info": false, 00:31:54.642 "zone_management": false, 00:31:54.642 "zone_append": false, 00:31:54.642 "compare": false, 00:31:54.642 "compare_and_write": false, 00:31:54.642 "abort": true, 00:31:54.642 "seek_hole": false, 00:31:54.642 "seek_data": false, 00:31:54.642 "copy": true, 00:31:54.642 "nvme_iov_md": false 00:31:54.642 }, 00:31:54.642 "memory_domains": [ 00:31:54.642 { 00:31:54.642 "dma_device_id": "system", 00:31:54.642 "dma_device_type": 1 00:31:54.642 }, 00:31:54.642 { 00:31:54.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:54.642 "dma_device_type": 2 00:31:54.642 } 00:31:54.642 ], 00:31:54.642 "driver_specific": { 00:31:54.642 "passthru": { 00:31:54.642 "name": "pt3", 00:31:54.642 "base_bdev_name": "malloc3" 00:31:54.642 } 00:31:54.642 } 00:31:54.642 }' 00:31:54.642 18:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:54.642 18:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:54.642 18:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:54.642 18:58:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:54.901 18:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:54.901 18:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:54.901 18:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:54.901 18:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:54.901 18:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:54.901 18:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:54.901 18:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:54.901 18:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:54.901 18:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:54.901 18:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:31:55.160 [2024-07-25 18:58:55.714838] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:55.160 18:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=99f013f9-feb5-4aae-978e-c54b7a24e0c7 00:31:55.160 18:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 99f013f9-feb5-4aae-978e-c54b7a24e0c7 ']' 00:31:55.160 18:58:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:55.419 [2024-07-25 18:58:55.990750] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:55.419 [2024-07-25 18:58:55.990772] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:55.420 [2024-07-25 18:58:55.990864] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:55.420 [2024-07-25 18:58:55.990949] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:55.420 [2024-07-25 18:58:55.990958] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:31:55.689 18:58:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:55.689 18:58:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:31:55.689 18:58:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:31:55.689 18:58:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:31:55.689 18:58:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:31:55.689 18:58:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:55.964 18:58:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:31:55.964 18:58:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:56.223 18:58:56 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:31:56.223 18:58:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:31:56.223 18:58:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:31:56.223 18:58:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:31:56.481 18:58:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:31:56.481 18:58:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:31:56.481 18:58:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:31:56.481 18:58:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:31:56.481 18:58:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:56.481 18:58:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:56.481 18:58:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:56.481 18:58:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:56.481 18:58:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:56.481 18:58:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:56.481 18:58:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:56.481 18:58:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:56.481 18:58:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:31:56.739 [2024-07-25 18:58:57.166997] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:56.739 [2024-07-25 18:58:57.169232] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:56.739 [2024-07-25 18:58:57.169296] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:31:56.739 [2024-07-25 18:58:57.169348] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:31:56.739 [2024-07-25 18:58:57.169440] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:31:56.739 [2024-07-25 18:58:57.169489] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:31:56.739 [2024-07-25 18:58:57.169518] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:31:56.739 [2024-07-25 18:58:57.169527] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state configuring 00:31:56.739 request: 00:31:56.739 { 00:31:56.739 "name": "raid_bdev1", 00:31:56.739 "raid_level": "raid5f", 00:31:56.739 "base_bdevs": [ 00:31:56.739 "malloc1", 00:31:56.739 "malloc2", 00:31:56.739 "malloc3" 00:31:56.739 ], 00:31:56.739 "strip_size_kb": 64, 00:31:56.739 "superblock": false, 00:31:56.739 "method": "bdev_raid_create", 00:31:56.739 "req_id": 1 00:31:56.739 } 00:31:56.739 Got JSON-RPC error response 00:31:56.739 response: 00:31:56.739 { 00:31:56.739 "code": -17, 00:31:56.739 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:31:56.739 } 00:31:56.739 18:58:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:31:56.739 18:58:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:56.739 18:58:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:56.739 18:58:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:56.739 18:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:56.739 18:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:31:56.997 18:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:31:56.997 18:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:31:56.997 18:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:57.256 [2024-07-25 18:58:57.578983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:57.256 [2024-07-25 18:58:57.579069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:57.256 [2024-07-25 18:58:57.579108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:31:57.256 [2024-07-25 18:58:57.579130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:57.256 [2024-07-25 18:58:57.581740] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:57.256 [2024-07-25 18:58:57.581819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:57.256 [2024-07-25 18:58:57.581945] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:57.256 [2024-07-25 18:58:57.581990] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:57.256 pt1 00:31:57.256 18:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:31:57.256 18:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:57.256 18:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:57.256 18:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:57.256 18:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:57.256 18:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:31:57.256 18:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:57.256 18:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:57.256 18:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:57.256 18:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:57.256 18:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:57.256 18:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:57.256 18:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:57.256 "name": "raid_bdev1", 00:31:57.256 "uuid": "99f013f9-feb5-4aae-978e-c54b7a24e0c7", 00:31:57.256 "strip_size_kb": 64, 00:31:57.256 "state": "configuring", 00:31:57.256 "raid_level": "raid5f", 00:31:57.256 "superblock": true, 00:31:57.256 "num_base_bdevs": 3, 00:31:57.256 "num_base_bdevs_discovered": 1, 00:31:57.256 "num_base_bdevs_operational": 3, 00:31:57.256 "base_bdevs_list": [ 00:31:57.256 { 00:31:57.256 "name": "pt1", 00:31:57.256 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:57.256 "is_configured": true, 00:31:57.256 "data_offset": 2048, 00:31:57.256 "data_size": 63488 00:31:57.256 }, 00:31:57.256 { 00:31:57.256 "name": null, 00:31:57.256 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:57.256 "is_configured": false, 00:31:57.256 "data_offset": 2048, 00:31:57.256 "data_size": 63488 00:31:57.256 }, 00:31:57.256 { 00:31:57.256 "name": null, 00:31:57.256 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:57.256 "is_configured": false, 00:31:57.256 "data_offset": 2048, 00:31:57.256 "data_size": 63488 00:31:57.256 } 00:31:57.256 ] 00:31:57.256 }' 00:31:57.256 18:58:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:57.256 18:58:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.823 18:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:31:57.823 18:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:58.082 [2024-07-25 18:58:58.511279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:58.082 [2024-07-25 18:58:58.511370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:58.082 [2024-07-25 18:58:58.511416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:31:58.082 [2024-07-25 18:58:58.511438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:58.082 [2024-07-25 18:58:58.511998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:58.083 [2024-07-25 18:58:58.512042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:58.083 [2024-07-25 18:58:58.512169] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:58.083 [2024-07-25 18:58:58.512201] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:58.083 pt2 00:31:58.083 18:58:58 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:58.341 [2024-07-25 18:58:58.691336] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:31:58.341 18:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:31:58.341 18:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:58.341 18:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:58.341 18:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:58.341 18:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:58.341 18:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:58.341 18:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:58.341 18:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:58.341 18:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:58.341 18:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:58.341 18:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:58.341 18:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:58.599 18:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:58.599 "name": "raid_bdev1", 00:31:58.600 "uuid": "99f013f9-feb5-4aae-978e-c54b7a24e0c7", 00:31:58.600 "strip_size_kb": 64, 00:31:58.600 "state": "configuring", 00:31:58.600 "raid_level": "raid5f", 00:31:58.600 "superblock": true, 00:31:58.600 "num_base_bdevs": 3, 00:31:58.600 "num_base_bdevs_discovered": 1, 00:31:58.600 "num_base_bdevs_operational": 3, 00:31:58.600 "base_bdevs_list": [ 00:31:58.600 { 00:31:58.600 "name": "pt1", 00:31:58.600 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:58.600 "is_configured": true, 00:31:58.600 "data_offset": 2048, 00:31:58.600 "data_size": 63488 00:31:58.600 }, 00:31:58.600 { 00:31:58.600 "name": null, 00:31:58.600 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:58.600 "is_configured": false, 00:31:58.600 "data_offset": 2048, 00:31:58.600 "data_size": 63488 00:31:58.600 }, 00:31:58.600 { 00:31:58.600 "name": null, 00:31:58.600 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:58.600 "is_configured": false, 00:31:58.600 "data_offset": 2048, 00:31:58.600 "data_size": 63488 00:31:58.600 } 00:31:58.600 ] 00:31:58.600 }' 00:31:58.600 18:58:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:58.600 18:58:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.166 18:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:31:59.166 18:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:31:59.166 18:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:59.166 [2024-07-25 18:58:59.655468] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:59.166 [2024-07-25 18:58:59.655586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:59.166 [2024-07-25 18:58:59.655621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:31:59.166 [2024-07-25 18:58:59.655651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:59.166 [2024-07-25 18:58:59.656203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:59.166 [2024-07-25 18:58:59.656248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:59.166 [2024-07-25 18:58:59.656366] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:59.166 [2024-07-25 18:58:59.656390] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:59.166 pt2 00:31:59.166 18:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:31:59.166 18:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:31:59.166 18:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:59.423 [2024-07-25 18:58:59.831507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:59.423 [2024-07-25 18:58:59.831570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:59.423 [2024-07-25 18:58:59.831616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:31:59.423 [2024-07-25 18:58:59.831646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:59.423 [2024-07-25 18:58:59.832106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:59.423 [2024-07-25 18:58:59.832144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:59.423 [2024-07-25 18:58:59.832232] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:59.423 [2024-07-25 18:58:59.832250] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:59.423 [2024-07-25 18:58:59.832366] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:31:59.423 [2024-07-25 18:58:59.832374] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:59.423 [2024-07-25 18:58:59.832448] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:31:59.423 [2024-07-25 18:58:59.836317] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:31:59.423 [2024-07-25 18:58:59.836353] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:31:59.423 [2024-07-25 18:58:59.836532] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:59.423 pt3 00:31:59.423 18:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:31:59.423 18:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:31:59.423 18:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:59.423 18:58:59 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:59.423 18:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:59.423 18:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:59.423 18:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:59.423 18:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:59.423 18:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:59.423 18:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:59.423 18:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:59.423 18:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:59.423 18:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:59.423 18:58:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:59.682 18:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:59.682 "name": "raid_bdev1", 00:31:59.682 "uuid": "99f013f9-feb5-4aae-978e-c54b7a24e0c7", 00:31:59.682 "strip_size_kb": 64, 00:31:59.682 "state": "online", 00:31:59.682 "raid_level": "raid5f", 00:31:59.682 "superblock": true, 00:31:59.682 "num_base_bdevs": 3, 00:31:59.682 "num_base_bdevs_discovered": 3, 00:31:59.682 "num_base_bdevs_operational": 3, 00:31:59.682 "base_bdevs_list": [ 00:31:59.682 { 00:31:59.682 "name": "pt1", 00:31:59.682 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:59.682 "is_configured": true, 00:31:59.682 "data_offset": 2048, 00:31:59.682 "data_size": 63488 00:31:59.682 }, 00:31:59.682 { 00:31:59.682 "name": "pt2", 00:31:59.682 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:59.682 "is_configured": true, 00:31:59.682 "data_offset": 2048, 00:31:59.682 "data_size": 63488 00:31:59.682 }, 00:31:59.682 { 00:31:59.682 "name": "pt3", 00:31:59.682 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:59.682 "is_configured": true, 00:31:59.682 "data_offset": 2048, 00:31:59.682 "data_size": 63488 00:31:59.682 } 00:31:59.682 ] 00:31:59.682 }' 00:31:59.682 18:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:59.682 18:59:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.249 18:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:32:00.249 18:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:32:00.249 18:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:00.249 18:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:00.249 18:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:00.249 18:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:32:00.249 18:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:00.249 18:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # 
jq '.[]' 00:32:00.508 [2024-07-25 18:59:00.863137] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:00.508 18:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:00.508 "name": "raid_bdev1", 00:32:00.508 "aliases": [ 00:32:00.508 "99f013f9-feb5-4aae-978e-c54b7a24e0c7" 00:32:00.508 ], 00:32:00.508 "product_name": "Raid Volume", 00:32:00.508 "block_size": 512, 00:32:00.508 "num_blocks": 126976, 00:32:00.508 "uuid": "99f013f9-feb5-4aae-978e-c54b7a24e0c7", 00:32:00.508 "assigned_rate_limits": { 00:32:00.508 "rw_ios_per_sec": 0, 00:32:00.508 "rw_mbytes_per_sec": 0, 00:32:00.508 "r_mbytes_per_sec": 0, 00:32:00.508 "w_mbytes_per_sec": 0 00:32:00.508 }, 00:32:00.508 "claimed": false, 00:32:00.508 "zoned": false, 00:32:00.508 "supported_io_types": { 00:32:00.508 "read": true, 00:32:00.508 "write": true, 00:32:00.508 "unmap": false, 00:32:00.508 "flush": false, 00:32:00.508 "reset": true, 00:32:00.508 "nvme_admin": false, 00:32:00.508 "nvme_io": false, 00:32:00.508 "nvme_io_md": false, 00:32:00.508 "write_zeroes": true, 00:32:00.508 "zcopy": false, 00:32:00.508 "get_zone_info": false, 00:32:00.508 "zone_management": false, 00:32:00.508 "zone_append": false, 00:32:00.508 "compare": false, 00:32:00.508 "compare_and_write": false, 00:32:00.508 "abort": false, 00:32:00.508 "seek_hole": false, 00:32:00.508 "seek_data": false, 00:32:00.508 "copy": false, 00:32:00.508 "nvme_iov_md": false 00:32:00.508 }, 00:32:00.508 "driver_specific": { 00:32:00.508 "raid": { 00:32:00.508 "uuid": "99f013f9-feb5-4aae-978e-c54b7a24e0c7", 00:32:00.508 "strip_size_kb": 64, 00:32:00.508 "state": "online", 00:32:00.508 "raid_level": "raid5f", 00:32:00.508 "superblock": true, 00:32:00.508 "num_base_bdevs": 3, 00:32:00.508 "num_base_bdevs_discovered": 3, 00:32:00.508 "num_base_bdevs_operational": 3, 00:32:00.508 "base_bdevs_list": [ 00:32:00.508 { 00:32:00.508 "name": "pt1", 00:32:00.508 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:00.508 "is_configured": true, 00:32:00.508 "data_offset": 2048, 00:32:00.508 "data_size": 63488 00:32:00.508 }, 00:32:00.508 { 00:32:00.508 "name": "pt2", 00:32:00.508 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:00.508 "is_configured": true, 00:32:00.508 "data_offset": 2048, 00:32:00.508 "data_size": 63488 00:32:00.508 }, 00:32:00.508 { 00:32:00.508 "name": "pt3", 00:32:00.508 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:00.508 "is_configured": true, 00:32:00.508 "data_offset": 2048, 00:32:00.508 "data_size": 63488 00:32:00.508 } 00:32:00.508 ] 00:32:00.508 } 00:32:00.508 } 00:32:00.508 }' 00:32:00.508 18:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:00.508 18:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:32:00.508 pt2 00:32:00.508 pt3' 00:32:00.508 18:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:00.508 18:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:32:00.508 18:59:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:00.767 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:00.767 "name": "pt1", 00:32:00.767 "aliases": [ 00:32:00.767 "00000000-0000-0000-0000-000000000001" 00:32:00.767 ], 
00:32:00.767 "product_name": "passthru", 00:32:00.767 "block_size": 512, 00:32:00.767 "num_blocks": 65536, 00:32:00.767 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:00.767 "assigned_rate_limits": { 00:32:00.767 "rw_ios_per_sec": 0, 00:32:00.767 "rw_mbytes_per_sec": 0, 00:32:00.767 "r_mbytes_per_sec": 0, 00:32:00.767 "w_mbytes_per_sec": 0 00:32:00.767 }, 00:32:00.767 "claimed": true, 00:32:00.767 "claim_type": "exclusive_write", 00:32:00.767 "zoned": false, 00:32:00.767 "supported_io_types": { 00:32:00.767 "read": true, 00:32:00.767 "write": true, 00:32:00.767 "unmap": true, 00:32:00.767 "flush": true, 00:32:00.767 "reset": true, 00:32:00.767 "nvme_admin": false, 00:32:00.767 "nvme_io": false, 00:32:00.767 "nvme_io_md": false, 00:32:00.767 "write_zeroes": true, 00:32:00.767 "zcopy": true, 00:32:00.767 "get_zone_info": false, 00:32:00.767 "zone_management": false, 00:32:00.767 "zone_append": false, 00:32:00.767 "compare": false, 00:32:00.767 "compare_and_write": false, 00:32:00.767 "abort": true, 00:32:00.767 "seek_hole": false, 00:32:00.767 "seek_data": false, 00:32:00.767 "copy": true, 00:32:00.767 "nvme_iov_md": false 00:32:00.767 }, 00:32:00.767 "memory_domains": [ 00:32:00.767 { 00:32:00.767 "dma_device_id": "system", 00:32:00.767 "dma_device_type": 1 00:32:00.767 }, 00:32:00.767 { 00:32:00.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:00.767 "dma_device_type": 2 00:32:00.767 } 00:32:00.767 ], 00:32:00.767 "driver_specific": { 00:32:00.767 "passthru": { 00:32:00.767 "name": "pt1", 00:32:00.767 "base_bdev_name": "malloc1" 00:32:00.767 } 00:32:00.767 } 00:32:00.767 }' 00:32:00.767 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:00.767 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:00.767 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:00.767 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:00.767 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:00.767 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:00.767 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:00.767 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:01.026 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:01.026 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:01.026 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:01.026 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:01.026 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:01.026 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:01.026 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:32:01.285 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:01.285 "name": "pt2", 00:32:01.285 "aliases": [ 00:32:01.285 "00000000-0000-0000-0000-000000000002" 00:32:01.285 ], 00:32:01.285 "product_name": "passthru", 00:32:01.285 "block_size": 512, 00:32:01.285 "num_blocks": 65536, 00:32:01.285 
"uuid": "00000000-0000-0000-0000-000000000002", 00:32:01.285 "assigned_rate_limits": { 00:32:01.285 "rw_ios_per_sec": 0, 00:32:01.285 "rw_mbytes_per_sec": 0, 00:32:01.285 "r_mbytes_per_sec": 0, 00:32:01.285 "w_mbytes_per_sec": 0 00:32:01.285 }, 00:32:01.285 "claimed": true, 00:32:01.285 "claim_type": "exclusive_write", 00:32:01.285 "zoned": false, 00:32:01.285 "supported_io_types": { 00:32:01.285 "read": true, 00:32:01.285 "write": true, 00:32:01.285 "unmap": true, 00:32:01.285 "flush": true, 00:32:01.285 "reset": true, 00:32:01.285 "nvme_admin": false, 00:32:01.285 "nvme_io": false, 00:32:01.285 "nvme_io_md": false, 00:32:01.285 "write_zeroes": true, 00:32:01.285 "zcopy": true, 00:32:01.285 "get_zone_info": false, 00:32:01.285 "zone_management": false, 00:32:01.285 "zone_append": false, 00:32:01.285 "compare": false, 00:32:01.285 "compare_and_write": false, 00:32:01.285 "abort": true, 00:32:01.285 "seek_hole": false, 00:32:01.285 "seek_data": false, 00:32:01.285 "copy": true, 00:32:01.285 "nvme_iov_md": false 00:32:01.285 }, 00:32:01.285 "memory_domains": [ 00:32:01.285 { 00:32:01.285 "dma_device_id": "system", 00:32:01.285 "dma_device_type": 1 00:32:01.285 }, 00:32:01.285 { 00:32:01.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:01.285 "dma_device_type": 2 00:32:01.285 } 00:32:01.285 ], 00:32:01.285 "driver_specific": { 00:32:01.285 "passthru": { 00:32:01.285 "name": "pt2", 00:32:01.285 "base_bdev_name": "malloc2" 00:32:01.285 } 00:32:01.285 } 00:32:01.285 }' 00:32:01.285 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:01.285 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:01.285 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:01.285 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:01.285 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:01.285 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:01.285 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:01.544 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:01.544 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:01.544 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:01.544 18:59:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:01.544 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:01.544 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:01.544 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:32:01.544 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:01.803 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:01.803 "name": "pt3", 00:32:01.803 "aliases": [ 00:32:01.803 "00000000-0000-0000-0000-000000000003" 00:32:01.803 ], 00:32:01.803 "product_name": "passthru", 00:32:01.803 "block_size": 512, 00:32:01.803 "num_blocks": 65536, 00:32:01.803 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:01.803 "assigned_rate_limits": { 00:32:01.803 "rw_ios_per_sec": 0, 
00:32:01.803 "rw_mbytes_per_sec": 0, 00:32:01.803 "r_mbytes_per_sec": 0, 00:32:01.803 "w_mbytes_per_sec": 0 00:32:01.803 }, 00:32:01.803 "claimed": true, 00:32:01.803 "claim_type": "exclusive_write", 00:32:01.803 "zoned": false, 00:32:01.803 "supported_io_types": { 00:32:01.803 "read": true, 00:32:01.803 "write": true, 00:32:01.803 "unmap": true, 00:32:01.803 "flush": true, 00:32:01.803 "reset": true, 00:32:01.803 "nvme_admin": false, 00:32:01.803 "nvme_io": false, 00:32:01.803 "nvme_io_md": false, 00:32:01.803 "write_zeroes": true, 00:32:01.803 "zcopy": true, 00:32:01.803 "get_zone_info": false, 00:32:01.803 "zone_management": false, 00:32:01.803 "zone_append": false, 00:32:01.803 "compare": false, 00:32:01.803 "compare_and_write": false, 00:32:01.803 "abort": true, 00:32:01.803 "seek_hole": false, 00:32:01.803 "seek_data": false, 00:32:01.803 "copy": true, 00:32:01.803 "nvme_iov_md": false 00:32:01.803 }, 00:32:01.803 "memory_domains": [ 00:32:01.803 { 00:32:01.803 "dma_device_id": "system", 00:32:01.803 "dma_device_type": 1 00:32:01.803 }, 00:32:01.803 { 00:32:01.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:01.803 "dma_device_type": 2 00:32:01.803 } 00:32:01.803 ], 00:32:01.803 "driver_specific": { 00:32:01.803 "passthru": { 00:32:01.803 "name": "pt3", 00:32:01.803 "base_bdev_name": "malloc3" 00:32:01.803 } 00:32:01.803 } 00:32:01.803 }' 00:32:01.803 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:01.803 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:01.803 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:01.803 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:01.803 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:01.803 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:01.803 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:02.062 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:02.062 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:02.062 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:02.062 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:02.062 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:02.062 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:32:02.062 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:02.321 [2024-07-25 18:59:02.695515] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:02.321 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 99f013f9-feb5-4aae-978e-c54b7a24e0c7 '!=' 99f013f9-feb5-4aae-978e-c54b7a24e0c7 ']' 00:32:02.321 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid5f 00:32:02.321 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:02.321 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:32:02.321 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@508 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:32:02.581 [2024-07-25 18:59:02.955504] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:32:02.581 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:02.581 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:02.581 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:02.581 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:02.581 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:02.581 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:02.581 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:02.581 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:02.581 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:02.581 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:02.581 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:02.581 18:59:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:02.841 18:59:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:02.841 "name": "raid_bdev1", 00:32:02.841 "uuid": "99f013f9-feb5-4aae-978e-c54b7a24e0c7", 00:32:02.841 "strip_size_kb": 64, 00:32:02.841 "state": "online", 00:32:02.841 "raid_level": "raid5f", 00:32:02.841 "superblock": true, 00:32:02.841 "num_base_bdevs": 3, 00:32:02.841 "num_base_bdevs_discovered": 2, 00:32:02.841 "num_base_bdevs_operational": 2, 00:32:02.841 "base_bdevs_list": [ 00:32:02.841 { 00:32:02.841 "name": null, 00:32:02.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:02.841 "is_configured": false, 00:32:02.841 "data_offset": 2048, 00:32:02.841 "data_size": 63488 00:32:02.841 }, 00:32:02.841 { 00:32:02.841 "name": "pt2", 00:32:02.841 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:02.841 "is_configured": true, 00:32:02.841 "data_offset": 2048, 00:32:02.841 "data_size": 63488 00:32:02.841 }, 00:32:02.841 { 00:32:02.841 "name": "pt3", 00:32:02.841 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:02.841 "is_configured": true, 00:32:02.841 "data_offset": 2048, 00:32:02.841 "data_size": 63488 00:32:02.841 } 00:32:02.841 ] 00:32:02.841 }' 00:32:02.841 18:59:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:02.841 18:59:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:03.408 18:59:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:03.408 [2024-07-25 18:59:03.899562] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:03.408 [2024-07-25 18:59:03.899687] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:03.408 [2024-07-25 18:59:03.899899] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:32:03.408 [2024-07-25 18:59:03.900001] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:03.408 [2024-07-25 18:59:03.900215] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:32:03.408 18:59:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:03.408 18:59:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:32:03.667 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:32:03.667 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:32:03.667 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:32:03.667 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:32:03.667 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:03.926 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:32:03.926 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:32:03.926 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:32:04.185 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:32:04.185 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:32:04.185 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:32:04.185 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:32:04.185 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:04.444 [2024-07-25 18:59:04.779719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:04.444 [2024-07-25 18:59:04.779965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:04.444 [2024-07-25 18:59:04.780038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:32:04.444 [2024-07-25 18:59:04.780145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:04.444 [2024-07-25 18:59:04.782766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:04.444 [2024-07-25 18:59:04.782926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:04.444 [2024-07-25 18:59:04.783130] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:04.444 [2024-07-25 18:59:04.783273] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:04.444 pt2 00:32:04.445 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:32:04.445 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:04.445 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 
-- # local expected_state=configuring 00:32:04.445 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:04.445 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:04.445 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:04.445 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:04.445 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:04.445 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:04.445 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:04.445 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:04.445 18:59:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:04.704 18:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:04.704 "name": "raid_bdev1", 00:32:04.704 "uuid": "99f013f9-feb5-4aae-978e-c54b7a24e0c7", 00:32:04.704 "strip_size_kb": 64, 00:32:04.704 "state": "configuring", 00:32:04.704 "raid_level": "raid5f", 00:32:04.704 "superblock": true, 00:32:04.704 "num_base_bdevs": 3, 00:32:04.704 "num_base_bdevs_discovered": 1, 00:32:04.704 "num_base_bdevs_operational": 2, 00:32:04.704 "base_bdevs_list": [ 00:32:04.704 { 00:32:04.704 "name": null, 00:32:04.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:04.704 "is_configured": false, 00:32:04.704 "data_offset": 2048, 00:32:04.705 "data_size": 63488 00:32:04.705 }, 00:32:04.705 { 00:32:04.705 "name": "pt2", 00:32:04.705 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:04.705 "is_configured": true, 00:32:04.705 "data_offset": 2048, 00:32:04.705 "data_size": 63488 00:32:04.705 }, 00:32:04.705 { 00:32:04.705 "name": null, 00:32:04.705 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:04.705 "is_configured": false, 00:32:04.705 "data_offset": 2048, 00:32:04.705 "data_size": 63488 00:32:04.705 } 00:32:04.705 ] 00:32:04.705 }' 00:32:04.705 18:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:04.705 18:59:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.274 18:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:32:05.274 18:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:32:05.274 18:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:32:05.274 18:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:05.534 [2024-07-25 18:59:05.899924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:05.534 [2024-07-25 18:59:05.900147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:05.534 [2024-07-25 18:59:05.900231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:32:05.534 [2024-07-25 18:59:05.900334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:05.534 [2024-07-25 
18:59:05.900873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:05.534 [2024-07-25 18:59:05.901035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:05.534 [2024-07-25 18:59:05.901225] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:32:05.534 [2024-07-25 18:59:05.901323] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:05.534 [2024-07-25 18:59:05.901504] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:32:05.534 [2024-07-25 18:59:05.901622] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:05.534 [2024-07-25 18:59:05.901758] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:32:05.534 [2024-07-25 18:59:05.905822] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:32:05.534 [2024-07-25 18:59:05.905953] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013480 00:32:05.534 [2024-07-25 18:59:05.906394] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:05.534 pt3 00:32:05.534 18:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:05.534 18:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:05.534 18:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:05.534 18:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:05.534 18:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:05.534 18:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:05.534 18:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:05.534 18:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:05.534 18:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:05.534 18:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:05.534 18:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:05.534 18:59:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:05.534 18:59:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:05.534 "name": "raid_bdev1", 00:32:05.534 "uuid": "99f013f9-feb5-4aae-978e-c54b7a24e0c7", 00:32:05.534 "strip_size_kb": 64, 00:32:05.534 "state": "online", 00:32:05.534 "raid_level": "raid5f", 00:32:05.534 "superblock": true, 00:32:05.534 "num_base_bdevs": 3, 00:32:05.534 "num_base_bdevs_discovered": 2, 00:32:05.534 "num_base_bdevs_operational": 2, 00:32:05.534 "base_bdevs_list": [ 00:32:05.534 { 00:32:05.534 "name": null, 00:32:05.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:05.534 "is_configured": false, 00:32:05.534 "data_offset": 2048, 00:32:05.534 "data_size": 63488 00:32:05.534 }, 00:32:05.534 { 00:32:05.534 "name": "pt2", 00:32:05.534 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:05.534 "is_configured": true, 
00:32:05.534 "data_offset": 2048, 00:32:05.534 "data_size": 63488 00:32:05.534 }, 00:32:05.534 { 00:32:05.534 "name": "pt3", 00:32:05.534 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:05.534 "is_configured": true, 00:32:05.534 "data_offset": 2048, 00:32:05.534 "data_size": 63488 00:32:05.534 } 00:32:05.534 ] 00:32:05.534 }' 00:32:05.534 18:59:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:05.534 18:59:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:06.471 18:59:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:06.471 [2024-07-25 18:59:06.926264] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:06.471 [2024-07-25 18:59:06.926462] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:06.471 [2024-07-25 18:59:06.926616] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:06.471 [2024-07-25 18:59:06.926767] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:06.471 [2024-07-25 18:59:06.926861] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name raid_bdev1, state offline 00:32:06.471 18:59:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:32:06.471 18:59:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:06.731 18:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:32:06.731 18:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:32:06.731 18:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 3 -gt 2 ']' 00:32:06.731 18:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # i=2 00:32:06.731 18:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:32:06.731 18:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:06.990 [2024-07-25 18:59:07.442375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:06.991 [2024-07-25 18:59:07.442628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:06.991 [2024-07-25 18:59:07.442713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:32:06.991 [2024-07-25 18:59:07.442805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:06.991 [2024-07-25 18:59:07.445573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:06.991 [2024-07-25 18:59:07.445758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:06.991 [2024-07-25 18:59:07.445998] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:06.991 [2024-07-25 18:59:07.446140] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:06.991 [2024-07-25 18:59:07.446337] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number 
on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:32:06.991 [2024-07-25 18:59:07.446443] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:06.991 [2024-07-25 18:59:07.446490] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013800 name raid_bdev1, state configuring 00:32:06.991 [2024-07-25 18:59:07.446573] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:06.991 pt1 00:32:06.991 18:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 3 -gt 2 ']' 00:32:06.991 18:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:32:06.991 18:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:06.991 18:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:06.991 18:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:06.991 18:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:06.991 18:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:06.991 18:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:06.991 18:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:06.991 18:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:06.991 18:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:06.991 18:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:06.991 18:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:07.251 18:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:07.251 "name": "raid_bdev1", 00:32:07.251 "uuid": "99f013f9-feb5-4aae-978e-c54b7a24e0c7", 00:32:07.251 "strip_size_kb": 64, 00:32:07.251 "state": "configuring", 00:32:07.251 "raid_level": "raid5f", 00:32:07.251 "superblock": true, 00:32:07.251 "num_base_bdevs": 3, 00:32:07.251 "num_base_bdevs_discovered": 1, 00:32:07.251 "num_base_bdevs_operational": 2, 00:32:07.251 "base_bdevs_list": [ 00:32:07.251 { 00:32:07.251 "name": null, 00:32:07.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:07.251 "is_configured": false, 00:32:07.251 "data_offset": 2048, 00:32:07.251 "data_size": 63488 00:32:07.251 }, 00:32:07.251 { 00:32:07.251 "name": "pt2", 00:32:07.251 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:07.251 "is_configured": true, 00:32:07.251 "data_offset": 2048, 00:32:07.251 "data_size": 63488 00:32:07.251 }, 00:32:07.251 { 00:32:07.251 "name": null, 00:32:07.251 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:07.251 "is_configured": false, 00:32:07.251 "data_offset": 2048, 00:32:07.251 "data_size": 63488 00:32:07.251 } 00:32:07.251 ] 00:32:07.251 }' 00:32:07.251 18:59:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:07.251 18:59:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.820 18:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:32:07.820 18:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:32:07.820 18:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:32:07.820 18:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:08.080 [2024-07-25 18:59:08.558211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:08.080 [2024-07-25 18:59:08.558456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:08.080 [2024-07-25 18:59:08.558526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:32:08.080 [2024-07-25 18:59:08.558635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:08.080 [2024-07-25 18:59:08.559155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:08.080 [2024-07-25 18:59:08.559297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:08.080 [2024-07-25 18:59:08.559488] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:32:08.080 [2024-07-25 18:59:08.559539] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:08.080 [2024-07-25 18:59:08.559757] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013b80 00:32:08.080 [2024-07-25 18:59:08.559852] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:08.080 [2024-07-25 18:59:08.559967] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:32:08.080 [2024-07-25 18:59:08.563850] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013b80 00:32:08.080 [2024-07-25 18:59:08.563963] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013b80 00:32:08.080 [2024-07-25 18:59:08.564335] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:08.080 pt3 00:32:08.080 18:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:08.080 18:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:08.080 18:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:08.080 18:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:08.080 18:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:08.080 18:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:08.080 18:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:08.080 18:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:08.080 18:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:08.080 18:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:08.080 18:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:08.080 18:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:08.340 18:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:08.340 "name": "raid_bdev1", 00:32:08.340 "uuid": "99f013f9-feb5-4aae-978e-c54b7a24e0c7", 00:32:08.340 "strip_size_kb": 64, 00:32:08.340 "state": "online", 00:32:08.340 "raid_level": "raid5f", 00:32:08.340 "superblock": true, 00:32:08.340 "num_base_bdevs": 3, 00:32:08.340 "num_base_bdevs_discovered": 2, 00:32:08.340 "num_base_bdevs_operational": 2, 00:32:08.340 "base_bdevs_list": [ 00:32:08.340 { 00:32:08.340 "name": null, 00:32:08.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:08.340 "is_configured": false, 00:32:08.340 "data_offset": 2048, 00:32:08.340 "data_size": 63488 00:32:08.340 }, 00:32:08.340 { 00:32:08.340 "name": "pt2", 00:32:08.340 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:08.340 "is_configured": true, 00:32:08.340 "data_offset": 2048, 00:32:08.340 "data_size": 63488 00:32:08.340 }, 00:32:08.340 { 00:32:08.340 "name": "pt3", 00:32:08.340 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:08.340 "is_configured": true, 00:32:08.340 "data_offset": 2048, 00:32:08.340 "data_size": 63488 00:32:08.340 } 00:32:08.340 ] 00:32:08.340 }' 00:32:08.340 18:59:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:08.340 18:59:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:08.909 18:59:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:32:08.909 18:59:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:32:09.169 18:59:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:32:09.169 18:59:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:09.169 18:59:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:32:09.429 [2024-07-25 18:59:09.806634] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:09.429 18:59:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 99f013f9-feb5-4aae-978e-c54b7a24e0c7 '!=' 99f013f9-feb5-4aae-978e-c54b7a24e0c7 ']' 00:32:09.429 18:59:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 151058 00:32:09.429 18:59:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 151058 ']' 00:32:09.429 18:59:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 151058 00:32:09.429 18:59:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:32:09.429 18:59:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:09.429 18:59:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 151058 00:32:09.429 18:59:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:09.429 18:59:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:09.429 18:59:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 151058' 00:32:09.429 killing process with pid 151058 00:32:09.429 18:59:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 151058 00:32:09.429 18:59:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 151058 00:32:09.429 [2024-07-25 18:59:09.856266] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:09.429 [2024-07-25 18:59:09.856350] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:09.429 [2024-07-25 18:59:09.856552] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:09.429 [2024-07-25 18:59:09.856663] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013b80 name raid_bdev1, state offline 00:32:09.688 [2024-07-25 18:59:10.116523] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:11.070 18:59:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:32:11.070 00:32:11.070 real 0m20.747s 00:32:11.070 user 0m36.932s 00:32:11.070 sys 0m3.484s 00:32:11.070 18:59:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:11.070 18:59:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:11.070 ************************************ 00:32:11.070 END TEST raid5f_superblock_test 00:32:11.070 ************************************ 00:32:11.070 18:59:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # '[' true = true ']' 00:32:11.070 18:59:11 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:32:11.070 18:59:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:32:11.070 18:59:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:11.070 18:59:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:11.070 ************************************ 00:32:11.070 START TEST raid5f_rebuild_test 00:32:11.070 ************************************ 00:32:11.070 18:59:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:32:11.070 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid5f 00:32:11.070 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=3 00:32:11.070 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:32:11.070 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:32:11.070 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:32:11.070 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # echo BaseBdev1 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # echo BaseBdev2 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= 
num_base_bdevs )) 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # echo BaseBdev3 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid5f '!=' raid1 ']' 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # '[' false = true ']' 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # strip_size=64 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # create_arg+=' -z 64' 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=151768 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 151768 /var/tmp/spdk-raid.sock 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 151768 ']' 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:11.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:11.071 18:59:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:11.071 [2024-07-25 18:59:11.482469] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:32:11.071 [2024-07-25 18:59:11.482851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151768 ] 00:32:11.071 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:11.071 Zero copy mechanism will not be used. 
00:32:11.071 [2024-07-25 18:59:11.646742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.331 [2024-07-25 18:59:11.894647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.900 [2024-07-25 18:59:12.173717] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:11.900 18:59:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:11.900 18:59:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:32:11.900 18:59:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:32:11.900 18:59:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:12.160 BaseBdev1_malloc 00:32:12.160 18:59:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:12.420 [2024-07-25 18:59:12.944503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:12.420 [2024-07-25 18:59:12.944765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:12.420 [2024-07-25 18:59:12.944897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:32:12.420 [2024-07-25 18:59:12.944989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:12.420 [2024-07-25 18:59:12.947717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:12.420 [2024-07-25 18:59:12.947880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:12.420 BaseBdev1 00:32:12.420 18:59:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:32:12.420 18:59:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:12.680 BaseBdev2_malloc 00:32:12.680 18:59:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:12.939 [2024-07-25 18:59:13.371146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:12.939 [2024-07-25 18:59:13.371482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:12.939 [2024-07-25 18:59:13.371562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:32:12.939 [2024-07-25 18:59:13.371869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:12.939 [2024-07-25 18:59:13.374525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:12.939 [2024-07-25 18:59:13.374704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:12.939 BaseBdev2 00:32:12.939 18:59:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:32:12.939 18:59:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:13.199 BaseBdev3_malloc 00:32:13.199 18:59:13 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:32:13.199 [2024-07-25 18:59:13.778277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:32:13.199 [2024-07-25 18:59:13.778549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:13.199 [2024-07-25 18:59:13.778624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:32:13.199 [2024-07-25 18:59:13.778727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:13.459 [2024-07-25 18:59:13.781354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:13.459 [2024-07-25 18:59:13.781525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:32:13.459 BaseBdev3 00:32:13.459 18:59:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:32:13.719 spare_malloc 00:32:13.719 18:59:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:13.978 spare_delay 00:32:13.978 18:59:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:13.978 [2024-07-25 18:59:14.525646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:13.978 [2024-07-25 18:59:14.525893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:13.978 [2024-07-25 18:59:14.525968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:32:13.978 [2024-07-25 18:59:14.526066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:13.978 [2024-07-25 18:59:14.528623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:13.978 [2024-07-25 18:59:14.528786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:13.978 spare 00:32:13.978 18:59:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:32:14.238 [2024-07-25 18:59:14.693722] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:14.238 [2024-07-25 18:59:14.696051] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:14.238 [2024-07-25 18:59:14.696231] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:14.238 [2024-07-25 18:59:14.696347] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:32:14.238 [2024-07-25 18:59:14.696444] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:32:14.238 [2024-07-25 18:59:14.696624] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:14.238 [2024-07-25 18:59:14.703505] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:32:14.238 [2024-07-25 18:59:14.703620] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:32:14.238 [2024-07-25 18:59:14.703941] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:14.239 18:59:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:14.239 18:59:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:14.239 18:59:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:14.239 18:59:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:14.239 18:59:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:14.239 18:59:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:14.239 18:59:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:14.239 18:59:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:14.239 18:59:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:14.239 18:59:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:14.239 18:59:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:14.239 18:59:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:14.498 18:59:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:14.498 "name": "raid_bdev1", 00:32:14.498 "uuid": "3173dc89-59d4-47d4-a485-9bbc03f24229", 00:32:14.498 "strip_size_kb": 64, 00:32:14.498 "state": "online", 00:32:14.498 "raid_level": "raid5f", 00:32:14.498 "superblock": false, 00:32:14.498 "num_base_bdevs": 3, 00:32:14.498 "num_base_bdevs_discovered": 3, 00:32:14.498 "num_base_bdevs_operational": 3, 00:32:14.498 "base_bdevs_list": [ 00:32:14.498 { 00:32:14.498 "name": "BaseBdev1", 00:32:14.498 "uuid": "a2bb0aad-7888-5a7c-bbdd-577a44a4d6ed", 00:32:14.498 "is_configured": true, 00:32:14.498 "data_offset": 0, 00:32:14.498 "data_size": 65536 00:32:14.498 }, 00:32:14.498 { 00:32:14.498 "name": "BaseBdev2", 00:32:14.498 "uuid": "52a0eff7-7eef-595a-b478-95b8f16dea99", 00:32:14.498 "is_configured": true, 00:32:14.498 "data_offset": 0, 00:32:14.498 "data_size": 65536 00:32:14.498 }, 00:32:14.498 { 00:32:14.498 "name": "BaseBdev3", 00:32:14.498 "uuid": "aeeec902-85fb-5df7-ad22-b05a95698a58", 00:32:14.498 "is_configured": true, 00:32:14.498 "data_offset": 0, 00:32:14.498 "data_size": 65536 00:32:14.498 } 00:32:14.498 ] 00:32:14.498 }' 00:32:14.498 18:59:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:14.498 18:59:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:15.066 18:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:15.066 18:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:32:15.330 [2024-07-25 18:59:15.663174] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:15.330 18:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=131072 00:32:15.330 18:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:15.330 18:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:15.588 18:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:32:15.588 18:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:32:15.588 18:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:32:15.588 18:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:32:15.588 18:59:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:32:15.588 18:59:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:15.588 18:59:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:15.588 18:59:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:15.588 18:59:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:15.588 18:59:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:15.588 18:59:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:32:15.588 18:59:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:15.588 18:59:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:15.588 18:59:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:15.588 [2024-07-25 18:59:16.083153] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:15.588 /dev/nbd0 00:32:15.588 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:15.588 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:15.588 18:59:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:15.588 18:59:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:32:15.588 18:59:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:15.588 18:59:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:15.588 18:59:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:32:15.588 18:59:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:32:15.588 18:59:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:15.588 18:59:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:15.588 18:59:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:15.847 1+0 records in 00:32:15.847 1+0 records out 00:32:15.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444385 s, 9.2 MB/s 00:32:15.847 18:59:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:15.847 18:59:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:32:15.847 18:59:16 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:15.847 18:59:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:15.847 18:59:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:32:15.847 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:15.847 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:15.847 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid5f ']' 00:32:15.847 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # write_unit_size=256 00:32:15.847 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # echo 128 00:32:15.847 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:32:16.106 512+0 records in 00:32:16.106 512+0 records out 00:32:16.106 67108864 bytes (67 MB, 64 MiB) copied, 0.412798 s, 163 MB/s 00:32:16.106 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:32:16.106 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:16.106 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:16.106 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:16.106 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:32:16.107 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:16.107 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:16.366 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:16.366 [2024-07-25 18:59:16.791122] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:16.366 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:16.366 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:16.366 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:16.366 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:16.366 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:16.366 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:32:16.366 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:32:16.366 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:32:16.626 [2024-07-25 18:59:16.962918] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:16.626 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:16.626 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:16.626 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:16.626 18:59:16 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:16.626 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:16.626 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:16.626 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:16.626 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:16.626 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:16.626 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:16.626 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:16.626 18:59:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:16.626 18:59:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:16.626 "name": "raid_bdev1", 00:32:16.626 "uuid": "3173dc89-59d4-47d4-a485-9bbc03f24229", 00:32:16.626 "strip_size_kb": 64, 00:32:16.626 "state": "online", 00:32:16.626 "raid_level": "raid5f", 00:32:16.626 "superblock": false, 00:32:16.626 "num_base_bdevs": 3, 00:32:16.626 "num_base_bdevs_discovered": 2, 00:32:16.626 "num_base_bdevs_operational": 2, 00:32:16.626 "base_bdevs_list": [ 00:32:16.626 { 00:32:16.626 "name": null, 00:32:16.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:16.626 "is_configured": false, 00:32:16.626 "data_offset": 0, 00:32:16.626 "data_size": 65536 00:32:16.626 }, 00:32:16.626 { 00:32:16.626 "name": "BaseBdev2", 00:32:16.626 "uuid": "52a0eff7-7eef-595a-b478-95b8f16dea99", 00:32:16.626 "is_configured": true, 00:32:16.626 "data_offset": 0, 00:32:16.626 "data_size": 65536 00:32:16.626 }, 00:32:16.626 { 00:32:16.626 "name": "BaseBdev3", 00:32:16.626 "uuid": "aeeec902-85fb-5df7-ad22-b05a95698a58", 00:32:16.626 "is_configured": true, 00:32:16.626 "data_offset": 0, 00:32:16.626 "data_size": 65536 00:32:16.626 } 00:32:16.626 ] 00:32:16.626 }' 00:32:16.626 18:59:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:16.626 18:59:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:17.194 18:59:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:17.454 [2024-07-25 18:59:17.967113] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:17.454 [2024-07-25 18:59:17.986243] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:32:17.454 [2024-07-25 18:59:17.995116] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:17.454 18:59:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:32:18.835 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:18.835 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:18.835 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:18.835 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:18.835 18:59:19 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:18.835 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:18.835 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:18.835 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:18.835 "name": "raid_bdev1", 00:32:18.835 "uuid": "3173dc89-59d4-47d4-a485-9bbc03f24229", 00:32:18.835 "strip_size_kb": 64, 00:32:18.835 "state": "online", 00:32:18.835 "raid_level": "raid5f", 00:32:18.835 "superblock": false, 00:32:18.835 "num_base_bdevs": 3, 00:32:18.835 "num_base_bdevs_discovered": 3, 00:32:18.835 "num_base_bdevs_operational": 3, 00:32:18.835 "process": { 00:32:18.835 "type": "rebuild", 00:32:18.835 "target": "spare", 00:32:18.835 "progress": { 00:32:18.835 "blocks": 24576, 00:32:18.835 "percent": 18 00:32:18.835 } 00:32:18.835 }, 00:32:18.835 "base_bdevs_list": [ 00:32:18.835 { 00:32:18.835 "name": "spare", 00:32:18.835 "uuid": "c2dc998b-21d2-5082-8b6e-7c2d3d430e63", 00:32:18.835 "is_configured": true, 00:32:18.835 "data_offset": 0, 00:32:18.835 "data_size": 65536 00:32:18.835 }, 00:32:18.835 { 00:32:18.835 "name": "BaseBdev2", 00:32:18.835 "uuid": "52a0eff7-7eef-595a-b478-95b8f16dea99", 00:32:18.835 "is_configured": true, 00:32:18.835 "data_offset": 0, 00:32:18.835 "data_size": 65536 00:32:18.835 }, 00:32:18.835 { 00:32:18.835 "name": "BaseBdev3", 00:32:18.835 "uuid": "aeeec902-85fb-5df7-ad22-b05a95698a58", 00:32:18.835 "is_configured": true, 00:32:18.835 "data_offset": 0, 00:32:18.835 "data_size": 65536 00:32:18.835 } 00:32:18.835 ] 00:32:18.835 }' 00:32:18.835 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:18.835 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:18.835 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:18.835 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:18.835 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:19.095 [2024-07-25 18:59:19.592952] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:19.095 [2024-07-25 18:59:19.609144] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:19.095 [2024-07-25 18:59:19.609332] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:19.095 [2024-07-25 18:59:19.609381] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:19.095 [2024-07-25 18:59:19.609453] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:19.095 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:19.095 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:19.095 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:19.095 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:19.095 18:59:19 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:19.095 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:19.095 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:19.095 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:19.095 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:19.095 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:19.095 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:19.095 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:19.355 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:19.355 "name": "raid_bdev1", 00:32:19.355 "uuid": "3173dc89-59d4-47d4-a485-9bbc03f24229", 00:32:19.355 "strip_size_kb": 64, 00:32:19.355 "state": "online", 00:32:19.355 "raid_level": "raid5f", 00:32:19.355 "superblock": false, 00:32:19.355 "num_base_bdevs": 3, 00:32:19.355 "num_base_bdevs_discovered": 2, 00:32:19.355 "num_base_bdevs_operational": 2, 00:32:19.355 "base_bdevs_list": [ 00:32:19.355 { 00:32:19.355 "name": null, 00:32:19.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:19.355 "is_configured": false, 00:32:19.355 "data_offset": 0, 00:32:19.355 "data_size": 65536 00:32:19.355 }, 00:32:19.355 { 00:32:19.355 "name": "BaseBdev2", 00:32:19.355 "uuid": "52a0eff7-7eef-595a-b478-95b8f16dea99", 00:32:19.355 "is_configured": true, 00:32:19.355 "data_offset": 0, 00:32:19.355 "data_size": 65536 00:32:19.355 }, 00:32:19.355 { 00:32:19.355 "name": "BaseBdev3", 00:32:19.355 "uuid": "aeeec902-85fb-5df7-ad22-b05a95698a58", 00:32:19.355 "is_configured": true, 00:32:19.355 "data_offset": 0, 00:32:19.355 "data_size": 65536 00:32:19.355 } 00:32:19.355 ] 00:32:19.355 }' 00:32:19.355 18:59:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:19.355 18:59:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.293 18:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:20.293 18:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:20.293 18:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:20.293 18:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:20.293 18:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:20.294 18:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:20.294 18:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:20.294 18:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:20.294 "name": "raid_bdev1", 00:32:20.294 "uuid": "3173dc89-59d4-47d4-a485-9bbc03f24229", 00:32:20.294 "strip_size_kb": 64, 00:32:20.294 "state": "online", 00:32:20.294 "raid_level": "raid5f", 00:32:20.294 "superblock": false, 00:32:20.294 "num_base_bdevs": 3, 00:32:20.294 "num_base_bdevs_discovered": 2, 00:32:20.294 
"num_base_bdevs_operational": 2, 00:32:20.294 "base_bdevs_list": [ 00:32:20.294 { 00:32:20.294 "name": null, 00:32:20.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:20.294 "is_configured": false, 00:32:20.294 "data_offset": 0, 00:32:20.294 "data_size": 65536 00:32:20.294 }, 00:32:20.294 { 00:32:20.294 "name": "BaseBdev2", 00:32:20.294 "uuid": "52a0eff7-7eef-595a-b478-95b8f16dea99", 00:32:20.294 "is_configured": true, 00:32:20.294 "data_offset": 0, 00:32:20.294 "data_size": 65536 00:32:20.294 }, 00:32:20.294 { 00:32:20.294 "name": "BaseBdev3", 00:32:20.294 "uuid": "aeeec902-85fb-5df7-ad22-b05a95698a58", 00:32:20.294 "is_configured": true, 00:32:20.294 "data_offset": 0, 00:32:20.294 "data_size": 65536 00:32:20.294 } 00:32:20.294 ] 00:32:20.294 }' 00:32:20.294 18:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:20.294 18:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:20.294 18:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:20.294 18:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:20.294 18:59:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:20.553 [2024-07-25 18:59:21.104970] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:20.553 [2024-07-25 18:59:21.122643] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:32:20.553 [2024-07-25 18:59:21.130835] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:20.813 18:59:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:32:21.752 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:21.752 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:21.752 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:21.752 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:21.752 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:21.752 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:21.752 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:21.752 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:21.752 "name": "raid_bdev1", 00:32:21.752 "uuid": "3173dc89-59d4-47d4-a485-9bbc03f24229", 00:32:21.752 "strip_size_kb": 64, 00:32:21.752 "state": "online", 00:32:21.752 "raid_level": "raid5f", 00:32:21.752 "superblock": false, 00:32:21.752 "num_base_bdevs": 3, 00:32:21.752 "num_base_bdevs_discovered": 3, 00:32:21.752 "num_base_bdevs_operational": 3, 00:32:21.752 "process": { 00:32:21.752 "type": "rebuild", 00:32:21.752 "target": "spare", 00:32:21.752 "progress": { 00:32:21.752 "blocks": 22528, 00:32:21.752 "percent": 17 00:32:21.752 } 00:32:21.752 }, 00:32:21.752 "base_bdevs_list": [ 00:32:21.752 { 00:32:21.752 "name": "spare", 00:32:21.752 "uuid": "c2dc998b-21d2-5082-8b6e-7c2d3d430e63", 00:32:21.752 
"is_configured": true, 00:32:21.752 "data_offset": 0, 00:32:21.752 "data_size": 65536 00:32:21.752 }, 00:32:21.752 { 00:32:21.752 "name": "BaseBdev2", 00:32:21.752 "uuid": "52a0eff7-7eef-595a-b478-95b8f16dea99", 00:32:21.752 "is_configured": true, 00:32:21.752 "data_offset": 0, 00:32:21.752 "data_size": 65536 00:32:21.752 }, 00:32:21.752 { 00:32:21.752 "name": "BaseBdev3", 00:32:21.752 "uuid": "aeeec902-85fb-5df7-ad22-b05a95698a58", 00:32:21.752 "is_configured": true, 00:32:21.752 "data_offset": 0, 00:32:21.752 "data_size": 65536 00:32:21.752 } 00:32:21.752 ] 00:32:21.752 }' 00:32:21.752 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:22.012 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:22.012 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:22.012 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:22.012 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:32:22.012 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=3 00:32:22.012 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid5f = raid1 ']' 00:32:22.012 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=1099 00:32:22.012 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:32:22.012 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:22.012 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:22.012 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:22.012 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:22.012 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:22.012 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:22.012 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:22.272 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:22.272 "name": "raid_bdev1", 00:32:22.272 "uuid": "3173dc89-59d4-47d4-a485-9bbc03f24229", 00:32:22.272 "strip_size_kb": 64, 00:32:22.272 "state": "online", 00:32:22.272 "raid_level": "raid5f", 00:32:22.272 "superblock": false, 00:32:22.272 "num_base_bdevs": 3, 00:32:22.272 "num_base_bdevs_discovered": 3, 00:32:22.272 "num_base_bdevs_operational": 3, 00:32:22.272 "process": { 00:32:22.272 "type": "rebuild", 00:32:22.272 "target": "spare", 00:32:22.272 "progress": { 00:32:22.272 "blocks": 28672, 00:32:22.272 "percent": 21 00:32:22.272 } 00:32:22.272 }, 00:32:22.272 "base_bdevs_list": [ 00:32:22.272 { 00:32:22.272 "name": "spare", 00:32:22.272 "uuid": "c2dc998b-21d2-5082-8b6e-7c2d3d430e63", 00:32:22.272 "is_configured": true, 00:32:22.272 "data_offset": 0, 00:32:22.272 "data_size": 65536 00:32:22.272 }, 00:32:22.272 { 00:32:22.272 "name": "BaseBdev2", 00:32:22.272 "uuid": "52a0eff7-7eef-595a-b478-95b8f16dea99", 00:32:22.272 "is_configured": true, 00:32:22.272 "data_offset": 0, 00:32:22.272 "data_size": 65536 
00:32:22.272 }, 00:32:22.272 { 00:32:22.272 "name": "BaseBdev3", 00:32:22.272 "uuid": "aeeec902-85fb-5df7-ad22-b05a95698a58", 00:32:22.272 "is_configured": true, 00:32:22.272 "data_offset": 0, 00:32:22.272 "data_size": 65536 00:32:22.272 } 00:32:22.272 ] 00:32:22.272 }' 00:32:22.272 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:22.272 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:22.272 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:22.272 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:22.272 18:59:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:32:23.210 18:59:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:32:23.211 18:59:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:23.211 18:59:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:23.211 18:59:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:23.211 18:59:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:23.211 18:59:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:23.211 18:59:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:23.211 18:59:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:23.470 18:59:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:23.470 "name": "raid_bdev1", 00:32:23.470 "uuid": "3173dc89-59d4-47d4-a485-9bbc03f24229", 00:32:23.470 "strip_size_kb": 64, 00:32:23.470 "state": "online", 00:32:23.470 "raid_level": "raid5f", 00:32:23.470 "superblock": false, 00:32:23.470 "num_base_bdevs": 3, 00:32:23.470 "num_base_bdevs_discovered": 3, 00:32:23.470 "num_base_bdevs_operational": 3, 00:32:23.470 "process": { 00:32:23.470 "type": "rebuild", 00:32:23.470 "target": "spare", 00:32:23.470 "progress": { 00:32:23.470 "blocks": 57344, 00:32:23.470 "percent": 43 00:32:23.470 } 00:32:23.470 }, 00:32:23.470 "base_bdevs_list": [ 00:32:23.470 { 00:32:23.470 "name": "spare", 00:32:23.470 "uuid": "c2dc998b-21d2-5082-8b6e-7c2d3d430e63", 00:32:23.470 "is_configured": true, 00:32:23.470 "data_offset": 0, 00:32:23.470 "data_size": 65536 00:32:23.470 }, 00:32:23.470 { 00:32:23.470 "name": "BaseBdev2", 00:32:23.470 "uuid": "52a0eff7-7eef-595a-b478-95b8f16dea99", 00:32:23.470 "is_configured": true, 00:32:23.470 "data_offset": 0, 00:32:23.470 "data_size": 65536 00:32:23.470 }, 00:32:23.470 { 00:32:23.470 "name": "BaseBdev3", 00:32:23.470 "uuid": "aeeec902-85fb-5df7-ad22-b05a95698a58", 00:32:23.470 "is_configured": true, 00:32:23.470 "data_offset": 0, 00:32:23.470 "data_size": 65536 00:32:23.470 } 00:32:23.470 ] 00:32:23.470 }' 00:32:23.470 18:59:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:23.470 18:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:23.470 18:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:23.729 18:59:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:23.729 18:59:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:32:24.668 18:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:32:24.668 18:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:24.668 18:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:24.668 18:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:24.668 18:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:24.668 18:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:24.668 18:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:24.668 18:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:24.928 18:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:24.928 "name": "raid_bdev1", 00:32:24.928 "uuid": "3173dc89-59d4-47d4-a485-9bbc03f24229", 00:32:24.928 "strip_size_kb": 64, 00:32:24.928 "state": "online", 00:32:24.928 "raid_level": "raid5f", 00:32:24.928 "superblock": false, 00:32:24.928 "num_base_bdevs": 3, 00:32:24.928 "num_base_bdevs_discovered": 3, 00:32:24.928 "num_base_bdevs_operational": 3, 00:32:24.928 "process": { 00:32:24.928 "type": "rebuild", 00:32:24.928 "target": "spare", 00:32:24.928 "progress": { 00:32:24.928 "blocks": 83968, 00:32:24.928 "percent": 64 00:32:24.928 } 00:32:24.928 }, 00:32:24.928 "base_bdevs_list": [ 00:32:24.928 { 00:32:24.928 "name": "spare", 00:32:24.928 "uuid": "c2dc998b-21d2-5082-8b6e-7c2d3d430e63", 00:32:24.928 "is_configured": true, 00:32:24.928 "data_offset": 0, 00:32:24.928 "data_size": 65536 00:32:24.928 }, 00:32:24.928 { 00:32:24.928 "name": "BaseBdev2", 00:32:24.928 "uuid": "52a0eff7-7eef-595a-b478-95b8f16dea99", 00:32:24.928 "is_configured": true, 00:32:24.928 "data_offset": 0, 00:32:24.928 "data_size": 65536 00:32:24.928 }, 00:32:24.928 { 00:32:24.928 "name": "BaseBdev3", 00:32:24.928 "uuid": "aeeec902-85fb-5df7-ad22-b05a95698a58", 00:32:24.928 "is_configured": true, 00:32:24.928 "data_offset": 0, 00:32:24.928 "data_size": 65536 00:32:24.928 } 00:32:24.928 ] 00:32:24.928 }' 00:32:24.928 18:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:24.928 18:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:24.928 18:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:24.928 18:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:24.928 18:59:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:32:25.865 18:59:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:32:25.865 18:59:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:25.865 18:59:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:25.865 18:59:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 
00:32:25.865 18:59:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:25.865 18:59:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:25.865 18:59:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:25.865 18:59:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:26.124 18:59:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:26.124 "name": "raid_bdev1", 00:32:26.124 "uuid": "3173dc89-59d4-47d4-a485-9bbc03f24229", 00:32:26.124 "strip_size_kb": 64, 00:32:26.124 "state": "online", 00:32:26.124 "raid_level": "raid5f", 00:32:26.124 "superblock": false, 00:32:26.124 "num_base_bdevs": 3, 00:32:26.124 "num_base_bdevs_discovered": 3, 00:32:26.124 "num_base_bdevs_operational": 3, 00:32:26.124 "process": { 00:32:26.124 "type": "rebuild", 00:32:26.124 "target": "spare", 00:32:26.124 "progress": { 00:32:26.124 "blocks": 110592, 00:32:26.124 "percent": 84 00:32:26.124 } 00:32:26.124 }, 00:32:26.124 "base_bdevs_list": [ 00:32:26.124 { 00:32:26.124 "name": "spare", 00:32:26.124 "uuid": "c2dc998b-21d2-5082-8b6e-7c2d3d430e63", 00:32:26.124 "is_configured": true, 00:32:26.124 "data_offset": 0, 00:32:26.124 "data_size": 65536 00:32:26.124 }, 00:32:26.124 { 00:32:26.124 "name": "BaseBdev2", 00:32:26.124 "uuid": "52a0eff7-7eef-595a-b478-95b8f16dea99", 00:32:26.124 "is_configured": true, 00:32:26.124 "data_offset": 0, 00:32:26.124 "data_size": 65536 00:32:26.124 }, 00:32:26.124 { 00:32:26.124 "name": "BaseBdev3", 00:32:26.124 "uuid": "aeeec902-85fb-5df7-ad22-b05a95698a58", 00:32:26.124 "is_configured": true, 00:32:26.124 "data_offset": 0, 00:32:26.124 "data_size": 65536 00:32:26.124 } 00:32:26.124 ] 00:32:26.124 }' 00:32:26.124 18:59:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:26.383 18:59:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:26.383 18:59:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:26.383 18:59:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:26.383 18:59:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:32:27.320 [2024-07-25 18:59:27.587020] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:27.320 [2024-07-25 18:59:27.587233] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:27.320 [2024-07-25 18:59:27.587416] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:27.320 18:59:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:32:27.320 18:59:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:27.320 18:59:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:27.320 18:59:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:27.320 18:59:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:27.320 18:59:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:27.320 18:59:27 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:27.321 18:59:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:27.579 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:27.579 "name": "raid_bdev1", 00:32:27.579 "uuid": "3173dc89-59d4-47d4-a485-9bbc03f24229", 00:32:27.579 "strip_size_kb": 64, 00:32:27.579 "state": "online", 00:32:27.579 "raid_level": "raid5f", 00:32:27.579 "superblock": false, 00:32:27.579 "num_base_bdevs": 3, 00:32:27.579 "num_base_bdevs_discovered": 3, 00:32:27.579 "num_base_bdevs_operational": 3, 00:32:27.579 "base_bdevs_list": [ 00:32:27.579 { 00:32:27.579 "name": "spare", 00:32:27.579 "uuid": "c2dc998b-21d2-5082-8b6e-7c2d3d430e63", 00:32:27.579 "is_configured": true, 00:32:27.579 "data_offset": 0, 00:32:27.579 "data_size": 65536 00:32:27.579 }, 00:32:27.579 { 00:32:27.579 "name": "BaseBdev2", 00:32:27.579 "uuid": "52a0eff7-7eef-595a-b478-95b8f16dea99", 00:32:27.579 "is_configured": true, 00:32:27.579 "data_offset": 0, 00:32:27.579 "data_size": 65536 00:32:27.579 }, 00:32:27.579 { 00:32:27.579 "name": "BaseBdev3", 00:32:27.579 "uuid": "aeeec902-85fb-5df7-ad22-b05a95698a58", 00:32:27.579 "is_configured": true, 00:32:27.579 "data_offset": 0, 00:32:27.579 "data_size": 65536 00:32:27.579 } 00:32:27.579 ] 00:32:27.579 }' 00:32:27.579 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:27.579 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:27.579 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:27.579 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:32:27.579 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:32:27.579 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:27.579 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:27.579 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:27.579 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:27.579 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:27.579 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:27.579 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:27.838 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:27.838 "name": "raid_bdev1", 00:32:27.838 "uuid": "3173dc89-59d4-47d4-a485-9bbc03f24229", 00:32:27.838 "strip_size_kb": 64, 00:32:27.838 "state": "online", 00:32:27.838 "raid_level": "raid5f", 00:32:27.838 "superblock": false, 00:32:27.838 "num_base_bdevs": 3, 00:32:27.838 "num_base_bdevs_discovered": 3, 00:32:27.838 "num_base_bdevs_operational": 3, 00:32:27.838 "base_bdevs_list": [ 00:32:27.838 { 00:32:27.838 "name": "spare", 00:32:27.838 "uuid": "c2dc998b-21d2-5082-8b6e-7c2d3d430e63", 00:32:27.838 "is_configured": true, 00:32:27.838 "data_offset": 0, 00:32:27.838 "data_size": 65536 00:32:27.838 }, 
00:32:27.838 { 00:32:27.838 "name": "BaseBdev2", 00:32:27.838 "uuid": "52a0eff7-7eef-595a-b478-95b8f16dea99", 00:32:27.838 "is_configured": true, 00:32:27.838 "data_offset": 0, 00:32:27.838 "data_size": 65536 00:32:27.838 }, 00:32:27.838 { 00:32:27.838 "name": "BaseBdev3", 00:32:27.839 "uuid": "aeeec902-85fb-5df7-ad22-b05a95698a58", 00:32:27.839 "is_configured": true, 00:32:27.839 "data_offset": 0, 00:32:27.839 "data_size": 65536 00:32:27.839 } 00:32:27.839 ] 00:32:27.839 }' 00:32:27.839 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:27.839 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:27.839 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:28.097 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:28.097 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:28.097 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:28.097 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:28.097 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:28.097 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:28.097 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:28.097 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:28.097 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:28.097 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:28.097 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:28.097 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:28.097 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:28.356 18:59:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:28.356 "name": "raid_bdev1", 00:32:28.356 "uuid": "3173dc89-59d4-47d4-a485-9bbc03f24229", 00:32:28.356 "strip_size_kb": 64, 00:32:28.356 "state": "online", 00:32:28.356 "raid_level": "raid5f", 00:32:28.356 "superblock": false, 00:32:28.356 "num_base_bdevs": 3, 00:32:28.356 "num_base_bdevs_discovered": 3, 00:32:28.356 "num_base_bdevs_operational": 3, 00:32:28.356 "base_bdevs_list": [ 00:32:28.356 { 00:32:28.356 "name": "spare", 00:32:28.356 "uuid": "c2dc998b-21d2-5082-8b6e-7c2d3d430e63", 00:32:28.356 "is_configured": true, 00:32:28.356 "data_offset": 0, 00:32:28.356 "data_size": 65536 00:32:28.356 }, 00:32:28.356 { 00:32:28.356 "name": "BaseBdev2", 00:32:28.356 "uuid": "52a0eff7-7eef-595a-b478-95b8f16dea99", 00:32:28.356 "is_configured": true, 00:32:28.356 "data_offset": 0, 00:32:28.356 "data_size": 65536 00:32:28.356 }, 00:32:28.356 { 00:32:28.356 "name": "BaseBdev3", 00:32:28.356 "uuid": "aeeec902-85fb-5df7-ad22-b05a95698a58", 00:32:28.356 "is_configured": true, 00:32:28.356 "data_offset": 0, 00:32:28.356 "data_size": 65536 00:32:28.356 } 00:32:28.356 ] 00:32:28.356 }' 00:32:28.356 18:59:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:28.356 18:59:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.925 18:59:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:28.925 [2024-07-25 18:59:29.366673] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:28.925 [2024-07-25 18:59:29.366842] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:28.925 [2024-07-25 18:59:29.367067] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:28.925 [2024-07-25 18:59:29.367262] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:28.925 [2024-07-25 18:59:29.367350] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:32:28.925 18:59:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:28.925 18:59:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:32:29.184 18:59:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:32:29.184 18:59:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:32:29.184 18:59:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:32:29.184 18:59:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:29.184 18:59:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:29.184 18:59:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:32:29.184 18:59:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:29.184 18:59:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:29.184 18:59:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:29.184 18:59:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:32:29.184 18:59:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:29.184 18:59:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:29.184 18:59:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:29.444 /dev/nbd0 00:32:29.444 18:59:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:29.444 18:59:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:29.444 18:59:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:29.444 18:59:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:32:29.444 18:59:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:29.444 18:59:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:29.444 18:59:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:32:29.444 18:59:29 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:32:29.444 18:59:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:29.444 18:59:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:29.444 18:59:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:29.444 1+0 records in 00:32:29.444 1+0 records out 00:32:29.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000574554 s, 7.1 MB/s 00:32:29.444 18:59:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:29.444 18:59:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:32:29.444 18:59:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:29.444 18:59:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:29.444 18:59:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:32:29.444 18:59:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:29.444 18:59:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:29.444 18:59:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:32:29.704 /dev/nbd1 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:29.963 1+0 records in 00:32:29.963 1+0 records out 00:32:29.963 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460862 s, 8.9 MB/s 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 
00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:29.963 18:59:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:30.223 18:59:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:30.223 18:59:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:30.223 18:59:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:30.223 18:59:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:30.223 18:59:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:30.223 18:59:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:30.223 18:59:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:32:30.223 18:59:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:32:30.223 18:59:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:30.223 18:59:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:32:30.481 18:59:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:30.741 18:59:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:30.741 18:59:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:30.741 18:59:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:30.741 18:59:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:30.741 18:59:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:30.741 18:59:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:32:30.741 18:59:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:32:30.741 18:59:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:32:30.741 18:59:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 151768 00:32:30.741 18:59:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 151768 ']' 00:32:30.741 18:59:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 151768 00:32:30.741 18:59:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:32:30.741 18:59:31 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:30.741 18:59:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 151768 00:32:30.741 18:59:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:30.741 18:59:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:30.741 18:59:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 151768' 00:32:30.741 killing process with pid 151768 00:32:30.741 18:59:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 151768 00:32:30.741 Received shutdown signal, test time was about 60.000000 seconds 00:32:30.741 00:32:30.741 Latency(us) 00:32:30.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:30.741 =================================================================================================================== 00:32:30.741 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:30.741 18:59:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 151768 00:32:30.741 [2024-07-25 18:59:31.097366] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:31.000 [2024-07-25 18:59:31.526690] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:32.478 ************************************ 00:32:32.478 END TEST raid5f_rebuild_test 00:32:32.478 ************************************ 00:32:32.478 18:59:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:32:32.478 00:32:32.478 real 0m21.576s 00:32:32.478 user 0m31.157s 00:32:32.478 sys 0m3.273s 00:32:32.478 18:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:32.478 18:59:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:32.478 18:59:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:32:32.478 18:59:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:32:32.478 18:59:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:32.478 18:59:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:32.738 ************************************ 00:32:32.738 START TEST raid5f_rebuild_test_sb 00:32:32.738 ************************************ 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid5f 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=3 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # echo BaseBdev1 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@589 -- # (( i++ )) 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # echo BaseBdev2 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # echo BaseBdev3 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid5f '!=' raid1 ']' 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # '[' false = true ']' 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # strip_size=64 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # create_arg+=' -z 64' 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=152318 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 152318 /var/tmp/spdk-raid.sock 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 152318 ']' 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:32.738 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:32.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:32:32.739 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:32.739 18:59:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:32.739 [2024-07-25 18:59:33.150136] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:32:32.739 [2024-07-25 18:59:33.150514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152318 ] 00:32:32.739 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:32.739 Zero copy mechanism will not be used. 00:32:32.739 [2024-07-25 18:59:33.316421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.997 [2024-07-25 18:59:33.564989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:33.565 [2024-07-25 18:59:33.844083] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:33.565 18:59:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:33.565 18:59:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:32:33.565 18:59:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:32:33.565 18:59:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:33.824 BaseBdev1_malloc 00:32:33.824 18:59:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:34.084 [2024-07-25 18:59:34.607968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:34.084 [2024-07-25 18:59:34.608227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:34.084 [2024-07-25 18:59:34.608297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:32:34.084 [2024-07-25 18:59:34.608400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:34.084 [2024-07-25 18:59:34.611056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:34.084 [2024-07-25 18:59:34.611228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:34.084 BaseBdev1 00:32:34.084 18:59:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:32:34.084 18:59:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:34.344 BaseBdev2_malloc 00:32:34.344 18:59:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:34.604 [2024-07-25 18:59:35.012023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:34.604 [2024-07-25 18:59:35.012279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:34.604 [2024-07-25 18:59:35.012348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007880 00:32:34.604 [2024-07-25 18:59:35.012454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:34.604 [2024-07-25 18:59:35.014934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:34.604 [2024-07-25 18:59:35.015104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:34.604 BaseBdev2 00:32:34.604 18:59:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:32:34.604 18:59:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:34.864 BaseBdev3_malloc 00:32:34.864 18:59:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:32:34.864 [2024-07-25 18:59:35.401371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:32:34.864 [2024-07-25 18:59:35.401591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:34.864 [2024-07-25 18:59:35.401658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:32:34.864 [2024-07-25 18:59:35.401753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:34.864 [2024-07-25 18:59:35.404296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:34.864 [2024-07-25 18:59:35.404462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:32:34.864 BaseBdev3 00:32:34.864 18:59:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:32:35.123 spare_malloc 00:32:35.124 18:59:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:35.383 spare_delay 00:32:35.383 18:59:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:35.643 [2024-07-25 18:59:36.028882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:35.643 [2024-07-25 18:59:36.029110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:35.643 [2024-07-25 18:59:36.029182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:32:35.643 [2024-07-25 18:59:36.029277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:35.643 [2024-07-25 18:59:36.031763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:35.643 [2024-07-25 18:59:36.031932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:35.643 spare 00:32:35.643 18:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:32:35.643 [2024-07-25 18:59:36.200997] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:35.643 [2024-07-25 
18:59:36.203317] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:35.643 [2024-07-25 18:59:36.203505] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:35.643 [2024-07-25 18:59:36.203713] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:32:35.643 [2024-07-25 18:59:36.203860] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:35.643 [2024-07-25 18:59:36.204084] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:35.643 [2024-07-25 18:59:36.210291] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:32:35.643 [2024-07-25 18:59:36.210403] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:32:35.643 [2024-07-25 18:59:36.210661] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:35.903 18:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:35.903 18:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:35.903 18:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:35.903 18:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:35.903 18:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:35.903 18:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:35.903 18:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:35.903 18:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:35.903 18:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:35.903 18:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:35.903 18:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:35.903 18:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:35.903 18:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:35.903 "name": "raid_bdev1", 00:32:35.903 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:32:35.903 "strip_size_kb": 64, 00:32:35.903 "state": "online", 00:32:35.903 "raid_level": "raid5f", 00:32:35.903 "superblock": true, 00:32:35.903 "num_base_bdevs": 3, 00:32:35.903 "num_base_bdevs_discovered": 3, 00:32:35.903 "num_base_bdevs_operational": 3, 00:32:35.903 "base_bdevs_list": [ 00:32:35.903 { 00:32:35.903 "name": "BaseBdev1", 00:32:35.903 "uuid": "c7b9f20a-61f6-5041-afc7-f05fc908ad93", 00:32:35.903 "is_configured": true, 00:32:35.903 "data_offset": 2048, 00:32:35.903 "data_size": 63488 00:32:35.903 }, 00:32:35.903 { 00:32:35.903 "name": "BaseBdev2", 00:32:35.903 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:32:35.903 "is_configured": true, 00:32:35.903 "data_offset": 2048, 00:32:35.903 "data_size": 63488 00:32:35.903 }, 00:32:35.903 { 00:32:35.903 "name": "BaseBdev3", 00:32:35.903 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:32:35.903 "is_configured": true, 00:32:35.903 
"data_offset": 2048, 00:32:35.903 "data_size": 63488 00:32:35.903 } 00:32:35.903 ] 00:32:35.903 }' 00:32:35.903 18:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:35.903 18:59:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:36.473 18:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:32:36.473 18:59:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:36.734 [2024-07-25 18:59:37.130105] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:36.734 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=126976 00:32:36.734 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:36.734 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:36.993 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:32:36.993 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:32:36.993 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:32:36.993 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:32:36.993 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:32:36.993 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:36.993 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:36.993 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:36.993 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:36.993 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:36.993 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:32:36.993 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:36.993 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:36.993 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:37.253 [2024-07-25 18:59:37.630075] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:37.253 /dev/nbd0 00:32:37.253 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:37.253 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:37.253 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:37.253 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:32:37.253 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:37.253 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:37.253 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:32:37.253 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:32:37.253 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:37.253 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:37.253 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:37.253 1+0 records in 00:32:37.253 1+0 records out 00:32:37.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000646217 s, 6.3 MB/s 00:32:37.253 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:37.253 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:32:37.253 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:37.253 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:37.253 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:32:37.253 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:37.253 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:37.253 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid5f ']' 00:32:37.253 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # write_unit_size=256 00:32:37.253 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # echo 128 00:32:37.253 18:59:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:32:37.821 496+0 records in 00:32:37.821 496+0 records out 00:32:37.821 65011712 bytes (65 MB, 62 MiB) copied, 0.416922 s, 156 MB/s 00:32:37.821 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:32:37.821 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:37.821 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:37.821 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:37.821 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:32:37.821 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:37.821 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:38.080 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:38.080 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:38.080 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:38.080 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:38.080 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:38.080 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q 
-w nbd0 /proc/partitions 00:32:38.080 [2024-07-25 18:59:38.410705] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:38.080 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:32:38.080 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:32:38.080 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:32:38.081 [2024-07-25 18:59:38.638441] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:38.081 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:38.081 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:38.081 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:38.081 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:38.081 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:38.081 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:38.081 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:38.081 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:38.081 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:38.081 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:38.081 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:38.081 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:38.340 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:38.340 "name": "raid_bdev1", 00:32:38.340 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:32:38.340 "strip_size_kb": 64, 00:32:38.340 "state": "online", 00:32:38.340 "raid_level": "raid5f", 00:32:38.340 "superblock": true, 00:32:38.340 "num_base_bdevs": 3, 00:32:38.340 "num_base_bdevs_discovered": 2, 00:32:38.340 "num_base_bdevs_operational": 2, 00:32:38.340 "base_bdevs_list": [ 00:32:38.340 { 00:32:38.340 "name": null, 00:32:38.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.340 "is_configured": false, 00:32:38.340 "data_offset": 2048, 00:32:38.340 "data_size": 63488 00:32:38.340 }, 00:32:38.340 { 00:32:38.340 "name": "BaseBdev2", 00:32:38.340 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:32:38.340 "is_configured": true, 00:32:38.340 "data_offset": 2048, 00:32:38.340 "data_size": 63488 00:32:38.340 }, 00:32:38.340 { 00:32:38.340 "name": "BaseBdev3", 00:32:38.340 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:32:38.340 "is_configured": true, 00:32:38.340 "data_offset": 2048, 00:32:38.340 "data_size": 63488 00:32:38.340 } 00:32:38.340 ] 00:32:38.340 }' 00:32:38.340 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:38.340 18:59:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.908 18:59:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:39.167 [2024-07-25 18:59:39.614658] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:39.168 [2024-07-25 18:59:39.631290] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028de0 00:32:39.168 [2024-07-25 18:59:39.639223] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:39.168 18:59:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:32:40.106 18:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:40.107 18:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:40.107 18:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:40.107 18:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:40.107 18:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:40.107 18:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:40.107 18:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:40.366 18:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:40.366 "name": "raid_bdev1", 00:32:40.366 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:32:40.366 "strip_size_kb": 64, 00:32:40.366 "state": "online", 00:32:40.366 "raid_level": "raid5f", 00:32:40.366 "superblock": true, 00:32:40.366 "num_base_bdevs": 3, 00:32:40.366 "num_base_bdevs_discovered": 3, 00:32:40.366 "num_base_bdevs_operational": 3, 00:32:40.366 "process": { 00:32:40.366 "type": "rebuild", 00:32:40.366 "target": "spare", 00:32:40.366 "progress": { 00:32:40.366 "blocks": 24576, 00:32:40.366 "percent": 19 00:32:40.366 } 00:32:40.366 }, 00:32:40.366 "base_bdevs_list": [ 00:32:40.366 { 00:32:40.366 "name": "spare", 00:32:40.366 "uuid": "3d00c3ea-c477-5b59-a7f6-84926e480117", 00:32:40.366 "is_configured": true, 00:32:40.366 "data_offset": 2048, 00:32:40.366 "data_size": 63488 00:32:40.366 }, 00:32:40.366 { 00:32:40.366 "name": "BaseBdev2", 00:32:40.366 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:32:40.366 "is_configured": true, 00:32:40.366 "data_offset": 2048, 00:32:40.366 "data_size": 63488 00:32:40.366 }, 00:32:40.366 { 00:32:40.366 "name": "BaseBdev3", 00:32:40.366 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:32:40.366 "is_configured": true, 00:32:40.366 "data_offset": 2048, 00:32:40.366 "data_size": 63488 00:32:40.366 } 00:32:40.366 ] 00:32:40.366 }' 00:32:40.366 18:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:40.366 18:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:40.366 18:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:40.625 18:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:40.625 18:59:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:40.884 
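Note: the 62 MiB transferred by the full-stripe dd write above is consistent with the geometry reported in this log rather than an arbitrary size: raid5f over 3 base bdevs with strip_size_kb 64 leaves 2 data strips per stripe, so one full stripe is 2 * 65536 = 131072 bytes -- exactly the write_unit_size of 256 blocks at the 512-byte blocklen -- and 496 such writes are 496 * 131072 = 65011712 bytes, i.e. the 62 MiB shown in the dd output.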
[2024-07-25 18:59:41.212508] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:40.884 [2024-07-25 18:59:41.253288] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:40.884 [2024-07-25 18:59:41.253475] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:40.884 [2024-07-25 18:59:41.253525] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:40.884 [2024-07-25 18:59:41.253597] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:40.884 18:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:40.884 18:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:40.884 18:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:40.884 18:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:40.884 18:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:40.884 18:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:40.884 18:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:40.884 18:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:40.884 18:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:40.884 18:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:40.884 18:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:40.884 18:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:41.143 18:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:41.143 "name": "raid_bdev1", 00:32:41.143 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:32:41.143 "strip_size_kb": 64, 00:32:41.143 "state": "online", 00:32:41.143 "raid_level": "raid5f", 00:32:41.143 "superblock": true, 00:32:41.143 "num_base_bdevs": 3, 00:32:41.143 "num_base_bdevs_discovered": 2, 00:32:41.143 "num_base_bdevs_operational": 2, 00:32:41.143 "base_bdevs_list": [ 00:32:41.143 { 00:32:41.143 "name": null, 00:32:41.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:41.143 "is_configured": false, 00:32:41.143 "data_offset": 2048, 00:32:41.143 "data_size": 63488 00:32:41.143 }, 00:32:41.143 { 00:32:41.143 "name": "BaseBdev2", 00:32:41.143 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:32:41.143 "is_configured": true, 00:32:41.143 "data_offset": 2048, 00:32:41.143 "data_size": 63488 00:32:41.143 }, 00:32:41.143 { 00:32:41.143 "name": "BaseBdev3", 00:32:41.143 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:32:41.143 "is_configured": true, 00:32:41.143 "data_offset": 2048, 00:32:41.143 "data_size": 63488 00:32:41.143 } 00:32:41.143 ] 00:32:41.143 }' 00:32:41.143 18:59:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:41.143 18:59:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.712 18:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:32:41.712 18:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:41.712 18:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:41.712 18:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:41.712 18:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:41.712 18:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:41.712 18:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:41.972 18:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:41.972 "name": "raid_bdev1", 00:32:41.972 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:32:41.972 "strip_size_kb": 64, 00:32:41.972 "state": "online", 00:32:41.972 "raid_level": "raid5f", 00:32:41.972 "superblock": true, 00:32:41.972 "num_base_bdevs": 3, 00:32:41.972 "num_base_bdevs_discovered": 2, 00:32:41.972 "num_base_bdevs_operational": 2, 00:32:41.972 "base_bdevs_list": [ 00:32:41.972 { 00:32:41.972 "name": null, 00:32:41.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:41.972 "is_configured": false, 00:32:41.972 "data_offset": 2048, 00:32:41.972 "data_size": 63488 00:32:41.972 }, 00:32:41.972 { 00:32:41.972 "name": "BaseBdev2", 00:32:41.972 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:32:41.972 "is_configured": true, 00:32:41.972 "data_offset": 2048, 00:32:41.972 "data_size": 63488 00:32:41.972 }, 00:32:41.972 { 00:32:41.972 "name": "BaseBdev3", 00:32:41.972 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:32:41.972 "is_configured": true, 00:32:41.972 "data_offset": 2048, 00:32:41.972 "data_size": 63488 00:32:41.972 } 00:32:41.972 ] 00:32:41.972 }' 00:32:41.972 18:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:41.972 18:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:41.972 18:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:41.972 18:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:41.972 18:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:42.231 [2024-07-25 18:59:42.660998] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:42.231 [2024-07-25 18:59:42.678123] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:32:42.231 [2024-07-25 18:59:42.686835] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:42.231 18:59:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:32:43.169 18:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:43.169 18:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:43.169 18:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:43.169 18:59:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:32:43.169 18:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:43.169 18:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:43.169 18:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:43.429 18:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:43.429 "name": "raid_bdev1", 00:32:43.429 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:32:43.429 "strip_size_kb": 64, 00:32:43.429 "state": "online", 00:32:43.429 "raid_level": "raid5f", 00:32:43.429 "superblock": true, 00:32:43.429 "num_base_bdevs": 3, 00:32:43.429 "num_base_bdevs_discovered": 3, 00:32:43.429 "num_base_bdevs_operational": 3, 00:32:43.429 "process": { 00:32:43.429 "type": "rebuild", 00:32:43.429 "target": "spare", 00:32:43.429 "progress": { 00:32:43.429 "blocks": 22528, 00:32:43.429 "percent": 17 00:32:43.429 } 00:32:43.429 }, 00:32:43.429 "base_bdevs_list": [ 00:32:43.429 { 00:32:43.429 "name": "spare", 00:32:43.429 "uuid": "3d00c3ea-c477-5b59-a7f6-84926e480117", 00:32:43.429 "is_configured": true, 00:32:43.429 "data_offset": 2048, 00:32:43.429 "data_size": 63488 00:32:43.429 }, 00:32:43.429 { 00:32:43.429 "name": "BaseBdev2", 00:32:43.429 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:32:43.429 "is_configured": true, 00:32:43.429 "data_offset": 2048, 00:32:43.429 "data_size": 63488 00:32:43.429 }, 00:32:43.429 { 00:32:43.429 "name": "BaseBdev3", 00:32:43.429 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:32:43.429 "is_configured": true, 00:32:43.429 "data_offset": 2048, 00:32:43.429 "data_size": 63488 00:32:43.429 } 00:32:43.429 ] 00:32:43.429 }' 00:32:43.429 18:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:43.429 18:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:43.429 18:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:43.429 18:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:43.429 18:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:32:43.429 18:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:32:43.429 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:32:43.429 18:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=3 00:32:43.429 18:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid5f = raid1 ']' 00:32:43.429 18:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=1121 00:32:43.429 18:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:32:43.429 18:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:43.429 18:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:43.429 18:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:43.429 18:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 
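Note: the "[: =: unary operator expected" message from bdev_raid.sh line 681 above is the usual bash symptom of an unquoted variable expanding to nothing inside a single-bracket test, so '[' $var = false ']' collapses to '[' = false ']'. A minimal sketch of the pattern and the conventional quoting guard (the variable name is illustrative only, not taken from bdev_raid.sh):

    var=""
    [ $var = false ]     # expands to [ = false ]  -> "[: =: unary operator expected"
    [ "$var" = false ]   # quoted: a well-formed test that simply evaluates to false

The trace continues at line 706 immediately afterwards, so only that condition is affected, not the rest of the run.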
00:32:43.429 18:59:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:43.429 18:59:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:43.429 18:59:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:43.688 18:59:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:43.688 "name": "raid_bdev1", 00:32:43.688 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:32:43.689 "strip_size_kb": 64, 00:32:43.689 "state": "online", 00:32:43.689 "raid_level": "raid5f", 00:32:43.689 "superblock": true, 00:32:43.689 "num_base_bdevs": 3, 00:32:43.689 "num_base_bdevs_discovered": 3, 00:32:43.689 "num_base_bdevs_operational": 3, 00:32:43.689 "process": { 00:32:43.689 "type": "rebuild", 00:32:43.689 "target": "spare", 00:32:43.689 "progress": { 00:32:43.689 "blocks": 28672, 00:32:43.689 "percent": 22 00:32:43.689 } 00:32:43.689 }, 00:32:43.689 "base_bdevs_list": [ 00:32:43.689 { 00:32:43.689 "name": "spare", 00:32:43.689 "uuid": "3d00c3ea-c477-5b59-a7f6-84926e480117", 00:32:43.689 "is_configured": true, 00:32:43.689 "data_offset": 2048, 00:32:43.689 "data_size": 63488 00:32:43.689 }, 00:32:43.689 { 00:32:43.689 "name": "BaseBdev2", 00:32:43.689 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:32:43.689 "is_configured": true, 00:32:43.689 "data_offset": 2048, 00:32:43.689 "data_size": 63488 00:32:43.689 }, 00:32:43.689 { 00:32:43.689 "name": "BaseBdev3", 00:32:43.689 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:32:43.689 "is_configured": true, 00:32:43.689 "data_offset": 2048, 00:32:43.689 "data_size": 63488 00:32:43.689 } 00:32:43.689 ] 00:32:43.689 }' 00:32:43.689 18:59:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:43.689 18:59:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:43.689 18:59:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:43.948 18:59:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:43.948 18:59:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:32:44.887 18:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:32:44.887 18:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:44.887 18:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:44.887 18:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:44.887 18:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:44.887 18:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:44.887 18:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:44.887 18:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:45.146 18:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:45.146 "name": "raid_bdev1", 00:32:45.146 "uuid": 
"8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:32:45.146 "strip_size_kb": 64, 00:32:45.147 "state": "online", 00:32:45.147 "raid_level": "raid5f", 00:32:45.147 "superblock": true, 00:32:45.147 "num_base_bdevs": 3, 00:32:45.147 "num_base_bdevs_discovered": 3, 00:32:45.147 "num_base_bdevs_operational": 3, 00:32:45.147 "process": { 00:32:45.147 "type": "rebuild", 00:32:45.147 "target": "spare", 00:32:45.147 "progress": { 00:32:45.147 "blocks": 57344, 00:32:45.147 "percent": 45 00:32:45.147 } 00:32:45.147 }, 00:32:45.147 "base_bdevs_list": [ 00:32:45.147 { 00:32:45.147 "name": "spare", 00:32:45.147 "uuid": "3d00c3ea-c477-5b59-a7f6-84926e480117", 00:32:45.147 "is_configured": true, 00:32:45.147 "data_offset": 2048, 00:32:45.147 "data_size": 63488 00:32:45.147 }, 00:32:45.147 { 00:32:45.147 "name": "BaseBdev2", 00:32:45.147 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:32:45.147 "is_configured": true, 00:32:45.147 "data_offset": 2048, 00:32:45.147 "data_size": 63488 00:32:45.147 }, 00:32:45.147 { 00:32:45.147 "name": "BaseBdev3", 00:32:45.147 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:32:45.147 "is_configured": true, 00:32:45.147 "data_offset": 2048, 00:32:45.147 "data_size": 63488 00:32:45.147 } 00:32:45.147 ] 00:32:45.147 }' 00:32:45.147 18:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:45.147 18:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:45.147 18:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:45.147 18:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:45.147 18:59:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:32:46.084 18:59:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:32:46.084 18:59:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:46.084 18:59:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:46.084 18:59:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:46.084 18:59:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:46.084 18:59:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:46.084 18:59:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:46.084 18:59:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:46.342 18:59:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:46.342 "name": "raid_bdev1", 00:32:46.342 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:32:46.342 "strip_size_kb": 64, 00:32:46.342 "state": "online", 00:32:46.342 "raid_level": "raid5f", 00:32:46.342 "superblock": true, 00:32:46.342 "num_base_bdevs": 3, 00:32:46.342 "num_base_bdevs_discovered": 3, 00:32:46.342 "num_base_bdevs_operational": 3, 00:32:46.342 "process": { 00:32:46.342 "type": "rebuild", 00:32:46.342 "target": "spare", 00:32:46.342 "progress": { 00:32:46.342 "blocks": 81920, 00:32:46.342 "percent": 64 00:32:46.342 } 00:32:46.342 }, 00:32:46.342 "base_bdevs_list": [ 00:32:46.342 { 00:32:46.342 "name": "spare", 
00:32:46.342 "uuid": "3d00c3ea-c477-5b59-a7f6-84926e480117", 00:32:46.342 "is_configured": true, 00:32:46.342 "data_offset": 2048, 00:32:46.342 "data_size": 63488 00:32:46.342 }, 00:32:46.342 { 00:32:46.342 "name": "BaseBdev2", 00:32:46.342 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:32:46.342 "is_configured": true, 00:32:46.342 "data_offset": 2048, 00:32:46.342 "data_size": 63488 00:32:46.342 }, 00:32:46.342 { 00:32:46.342 "name": "BaseBdev3", 00:32:46.342 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:32:46.342 "is_configured": true, 00:32:46.342 "data_offset": 2048, 00:32:46.342 "data_size": 63488 00:32:46.342 } 00:32:46.342 ] 00:32:46.342 }' 00:32:46.342 18:59:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:46.342 18:59:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:46.342 18:59:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:46.342 18:59:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:46.342 18:59:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:32:47.719 18:59:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:32:47.719 18:59:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:47.719 18:59:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:47.719 18:59:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:47.719 18:59:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:47.719 18:59:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:47.719 18:59:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:47.719 18:59:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:47.719 18:59:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:47.719 "name": "raid_bdev1", 00:32:47.719 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:32:47.719 "strip_size_kb": 64, 00:32:47.719 "state": "online", 00:32:47.719 "raid_level": "raid5f", 00:32:47.719 "superblock": true, 00:32:47.719 "num_base_bdevs": 3, 00:32:47.719 "num_base_bdevs_discovered": 3, 00:32:47.719 "num_base_bdevs_operational": 3, 00:32:47.719 "process": { 00:32:47.719 "type": "rebuild", 00:32:47.719 "target": "spare", 00:32:47.719 "progress": { 00:32:47.719 "blocks": 110592, 00:32:47.719 "percent": 87 00:32:47.719 } 00:32:47.719 }, 00:32:47.719 "base_bdevs_list": [ 00:32:47.719 { 00:32:47.719 "name": "spare", 00:32:47.719 "uuid": "3d00c3ea-c477-5b59-a7f6-84926e480117", 00:32:47.719 "is_configured": true, 00:32:47.719 "data_offset": 2048, 00:32:47.719 "data_size": 63488 00:32:47.719 }, 00:32:47.719 { 00:32:47.719 "name": "BaseBdev2", 00:32:47.720 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:32:47.720 "is_configured": true, 00:32:47.720 "data_offset": 2048, 00:32:47.720 "data_size": 63488 00:32:47.720 }, 00:32:47.720 { 00:32:47.720 "name": "BaseBdev3", 00:32:47.720 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:32:47.720 "is_configured": true, 00:32:47.720 "data_offset": 2048, 
00:32:47.720 "data_size": 63488 00:32:47.720 } 00:32:47.720 ] 00:32:47.720 }' 00:32:47.720 18:59:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:47.720 18:59:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:47.720 18:59:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:47.720 18:59:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:47.720 18:59:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:32:48.657 [2024-07-25 18:59:48.940038] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:48.657 [2024-07-25 18:59:48.940267] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:48.657 [2024-07-25 18:59:48.940503] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:48.657 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:32:48.657 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:48.657 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:48.657 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:48.657 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:48.657 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:48.657 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:48.916 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:49.175 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:49.175 "name": "raid_bdev1", 00:32:49.175 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:32:49.175 "strip_size_kb": 64, 00:32:49.175 "state": "online", 00:32:49.175 "raid_level": "raid5f", 00:32:49.175 "superblock": true, 00:32:49.175 "num_base_bdevs": 3, 00:32:49.175 "num_base_bdevs_discovered": 3, 00:32:49.175 "num_base_bdevs_operational": 3, 00:32:49.176 "base_bdevs_list": [ 00:32:49.176 { 00:32:49.176 "name": "spare", 00:32:49.176 "uuid": "3d00c3ea-c477-5b59-a7f6-84926e480117", 00:32:49.176 "is_configured": true, 00:32:49.176 "data_offset": 2048, 00:32:49.176 "data_size": 63488 00:32:49.176 }, 00:32:49.176 { 00:32:49.176 "name": "BaseBdev2", 00:32:49.176 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:32:49.176 "is_configured": true, 00:32:49.176 "data_offset": 2048, 00:32:49.176 "data_size": 63488 00:32:49.176 }, 00:32:49.176 { 00:32:49.176 "name": "BaseBdev3", 00:32:49.176 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:32:49.176 "is_configured": true, 00:32:49.176 "data_offset": 2048, 00:32:49.176 "data_size": 63488 00:32:49.176 } 00:32:49.176 ] 00:32:49.176 }' 00:32:49.176 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:49.176 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:49.176 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:32:49.176 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:32:49.176 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:32:49.176 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:49.176 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:49.176 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:49.176 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:49.176 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:49.176 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:49.176 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:49.435 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:49.435 "name": "raid_bdev1", 00:32:49.435 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:32:49.435 "strip_size_kb": 64, 00:32:49.435 "state": "online", 00:32:49.435 "raid_level": "raid5f", 00:32:49.435 "superblock": true, 00:32:49.435 "num_base_bdevs": 3, 00:32:49.435 "num_base_bdevs_discovered": 3, 00:32:49.435 "num_base_bdevs_operational": 3, 00:32:49.435 "base_bdevs_list": [ 00:32:49.435 { 00:32:49.435 "name": "spare", 00:32:49.435 "uuid": "3d00c3ea-c477-5b59-a7f6-84926e480117", 00:32:49.435 "is_configured": true, 00:32:49.435 "data_offset": 2048, 00:32:49.435 "data_size": 63488 00:32:49.435 }, 00:32:49.435 { 00:32:49.435 "name": "BaseBdev2", 00:32:49.435 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:32:49.435 "is_configured": true, 00:32:49.435 "data_offset": 2048, 00:32:49.435 "data_size": 63488 00:32:49.435 }, 00:32:49.435 { 00:32:49.435 "name": "BaseBdev3", 00:32:49.435 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:32:49.435 "is_configured": true, 00:32:49.435 "data_offset": 2048, 00:32:49.435 "data_size": 63488 00:32:49.435 } 00:32:49.435 ] 00:32:49.435 }' 00:32:49.435 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:49.435 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:49.435 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:49.435 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:49.435 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:49.435 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:49.435 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:49.435 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:49.435 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:49.435 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:49.435 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:32:49.435 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:49.435 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:49.435 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:49.435 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:49.435 18:59:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:49.694 18:59:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:49.694 "name": "raid_bdev1", 00:32:49.694 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:32:49.694 "strip_size_kb": 64, 00:32:49.694 "state": "online", 00:32:49.694 "raid_level": "raid5f", 00:32:49.694 "superblock": true, 00:32:49.694 "num_base_bdevs": 3, 00:32:49.694 "num_base_bdevs_discovered": 3, 00:32:49.694 "num_base_bdevs_operational": 3, 00:32:49.694 "base_bdevs_list": [ 00:32:49.694 { 00:32:49.694 "name": "spare", 00:32:49.694 "uuid": "3d00c3ea-c477-5b59-a7f6-84926e480117", 00:32:49.694 "is_configured": true, 00:32:49.694 "data_offset": 2048, 00:32:49.694 "data_size": 63488 00:32:49.694 }, 00:32:49.694 { 00:32:49.694 "name": "BaseBdev2", 00:32:49.694 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:32:49.694 "is_configured": true, 00:32:49.694 "data_offset": 2048, 00:32:49.694 "data_size": 63488 00:32:49.694 }, 00:32:49.694 { 00:32:49.694 "name": "BaseBdev3", 00:32:49.694 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:32:49.694 "is_configured": true, 00:32:49.694 "data_offset": 2048, 00:32:49.694 "data_size": 63488 00:32:49.694 } 00:32:49.694 ] 00:32:49.694 }' 00:32:49.694 18:59:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:49.694 18:59:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:50.262 18:59:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:50.521 [2024-07-25 18:59:50.920099] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:50.521 [2024-07-25 18:59:50.920281] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:50.521 [2024-07-25 18:59:50.920497] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:50.521 [2024-07-25 18:59:50.920687] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:50.521 [2024-07-25 18:59:50.920774] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:32:50.521 18:59:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:50.521 18:59:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:32:50.780 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:32:50.780 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:32:50.780 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:32:50.780 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- 
# nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:50.780 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:50.780 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:32:50.780 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:50.780 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:50.780 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:50.780 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:32:50.780 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:50.780 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:50.780 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:51.040 /dev/nbd0 00:32:51.040 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:51.040 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:51.040 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:51.040 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:32:51.040 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:51.040 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:51.040 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:32:51.040 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:32:51.040 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:51.040 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:51.040 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:51.040 1+0 records in 00:32:51.040 1+0 records out 00:32:51.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577882 s, 7.1 MB/s 00:32:51.040 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:51.040 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:32:51.040 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:51.040 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:51.040 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:32:51.040 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:51.040 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:51.040 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:32:51.299 /dev/nbd1 
00:32:51.299 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:51.299 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:51.299 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:32:51.299 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:32:51.299 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:51.299 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:51.299 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:32:51.299 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:32:51.299 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:51.299 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:51.299 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:51.299 1+0 records in 00:32:51.299 1+0 records out 00:32:51.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000666836 s, 6.1 MB/s 00:32:51.299 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:51.299 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:32:51.299 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:51.299 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:51.299 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:32:51.299 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:51.299 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:51.299 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:32:51.559 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:32:51.559 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:51.559 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:51.559 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:51.559 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:32:51.559 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:51.559 18:59:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:51.820 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:51.820 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:51.820 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:51.820 18:59:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:51.820 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:51.820 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:51.820 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:32:51.820 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:32:51.820 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:51.820 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:32:52.097 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:52.097 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:52.097 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:52.097 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:52.097 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:52.097 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:52.097 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:32:52.097 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:32:52.097 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:32:52.097 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:52.384 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:52.384 [2024-07-25 18:59:52.900073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:52.384 [2024-07-25 18:59:52.900367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:52.384 [2024-07-25 18:59:52.900463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:32:52.384 [2024-07-25 18:59:52.900583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:52.384 [2024-07-25 18:59:52.903330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:52.384 [2024-07-25 18:59:52.903513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:52.384 [2024-07-25 18:59:52.903748] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:52.384 [2024-07-25 18:59:52.903933] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:52.384 [2024-07-25 18:59:52.904116] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:52.384 [2024-07-25 18:59:52.904375] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:52.384 spare 00:32:52.384 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:52.384 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:32:52.384 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:52.384 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:52.384 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:52.384 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:52.384 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:52.384 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:52.384 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:52.384 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:52.384 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:52.384 18:59:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:52.643 [2024-07-25 18:59:53.004573] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012d80 00:32:52.643 [2024-07-25 18:59:53.004695] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:52.643 [2024-07-25 18:59:53.004841] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:32:52.643 [2024-07-25 18:59:53.011371] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012d80 00:32:52.643 [2024-07-25 18:59:53.011481] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012d80 00:32:52.643 [2024-07-25 18:59:53.011745] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:52.643 18:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:52.643 "name": "raid_bdev1", 00:32:52.643 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:32:52.643 "strip_size_kb": 64, 00:32:52.643 "state": "online", 00:32:52.643 "raid_level": "raid5f", 00:32:52.643 "superblock": true, 00:32:52.643 "num_base_bdevs": 3, 00:32:52.643 "num_base_bdevs_discovered": 3, 00:32:52.643 "num_base_bdevs_operational": 3, 00:32:52.643 "base_bdevs_list": [ 00:32:52.643 { 00:32:52.643 "name": "spare", 00:32:52.643 "uuid": "3d00c3ea-c477-5b59-a7f6-84926e480117", 00:32:52.643 "is_configured": true, 00:32:52.643 "data_offset": 2048, 00:32:52.643 "data_size": 63488 00:32:52.643 }, 00:32:52.643 { 00:32:52.643 "name": "BaseBdev2", 00:32:52.643 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:32:52.643 "is_configured": true, 00:32:52.643 "data_offset": 2048, 00:32:52.643 "data_size": 63488 00:32:52.643 }, 00:32:52.643 { 00:32:52.643 "name": "BaseBdev3", 00:32:52.643 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:32:52.643 "is_configured": true, 00:32:52.643 "data_offset": 2048, 00:32:52.643 "data_size": 63488 00:32:52.643 } 00:32:52.643 ] 00:32:52.643 }' 00:32:52.643 18:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:52.643 18:59:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:53.210 18:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:53.210 18:59:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:53.210 18:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:53.210 18:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:53.210 18:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:53.210 18:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:53.210 18:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:53.468 18:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:53.468 "name": "raid_bdev1", 00:32:53.468 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:32:53.468 "strip_size_kb": 64, 00:32:53.468 "state": "online", 00:32:53.468 "raid_level": "raid5f", 00:32:53.468 "superblock": true, 00:32:53.468 "num_base_bdevs": 3, 00:32:53.468 "num_base_bdevs_discovered": 3, 00:32:53.468 "num_base_bdevs_operational": 3, 00:32:53.468 "base_bdevs_list": [ 00:32:53.468 { 00:32:53.468 "name": "spare", 00:32:53.468 "uuid": "3d00c3ea-c477-5b59-a7f6-84926e480117", 00:32:53.468 "is_configured": true, 00:32:53.468 "data_offset": 2048, 00:32:53.468 "data_size": 63488 00:32:53.468 }, 00:32:53.468 { 00:32:53.468 "name": "BaseBdev2", 00:32:53.468 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:32:53.468 "is_configured": true, 00:32:53.468 "data_offset": 2048, 00:32:53.468 "data_size": 63488 00:32:53.468 }, 00:32:53.468 { 00:32:53.468 "name": "BaseBdev3", 00:32:53.468 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:32:53.468 "is_configured": true, 00:32:53.468 "data_offset": 2048, 00:32:53.468 "data_size": 63488 00:32:53.468 } 00:32:53.468 ] 00:32:53.468 }' 00:32:53.468 18:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:53.468 18:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:53.468 18:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:53.468 18:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:53.468 18:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:53.468 18:59:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:32:53.726 18:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:32:53.726 18:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:53.985 [2024-07-25 18:59:54.366974] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:53.985 18:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:53.985 18:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:53.985 18:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:53.985 18:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid5f 00:32:53.985 18:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:53.985 18:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:53.985 18:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:53.985 18:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:53.985 18:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:53.985 18:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:53.985 18:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:53.985 18:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:54.243 18:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:54.243 "name": "raid_bdev1", 00:32:54.243 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:32:54.243 "strip_size_kb": 64, 00:32:54.243 "state": "online", 00:32:54.243 "raid_level": "raid5f", 00:32:54.243 "superblock": true, 00:32:54.243 "num_base_bdevs": 3, 00:32:54.244 "num_base_bdevs_discovered": 2, 00:32:54.244 "num_base_bdevs_operational": 2, 00:32:54.244 "base_bdevs_list": [ 00:32:54.244 { 00:32:54.244 "name": null, 00:32:54.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:54.244 "is_configured": false, 00:32:54.244 "data_offset": 2048, 00:32:54.244 "data_size": 63488 00:32:54.244 }, 00:32:54.244 { 00:32:54.244 "name": "BaseBdev2", 00:32:54.244 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:32:54.244 "is_configured": true, 00:32:54.244 "data_offset": 2048, 00:32:54.244 "data_size": 63488 00:32:54.244 }, 00:32:54.244 { 00:32:54.244 "name": "BaseBdev3", 00:32:54.244 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:32:54.244 "is_configured": true, 00:32:54.244 "data_offset": 2048, 00:32:54.244 "data_size": 63488 00:32:54.244 } 00:32:54.244 ] 00:32:54.244 }' 00:32:54.244 18:59:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:54.244 18:59:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:54.812 18:59:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:54.812 [2024-07-25 18:59:55.247163] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:54.812 [2024-07-25 18:59:55.247542] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:54.812 [2024-07-25 18:59:55.247653] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:32:54.812 [2024-07-25 18:59:55.247748] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:54.812 [2024-07-25 18:59:55.265886] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047a40 00:32:54.812 [2024-07-25 18:59:55.274026] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:54.812 18:59:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:32:55.748 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:55.748 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:55.748 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:55.748 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:55.748 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:55.748 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:55.748 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:56.007 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:56.007 "name": "raid_bdev1", 00:32:56.007 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:32:56.007 "strip_size_kb": 64, 00:32:56.007 "state": "online", 00:32:56.007 "raid_level": "raid5f", 00:32:56.007 "superblock": true, 00:32:56.007 "num_base_bdevs": 3, 00:32:56.007 "num_base_bdevs_discovered": 3, 00:32:56.007 "num_base_bdevs_operational": 3, 00:32:56.007 "process": { 00:32:56.007 "type": "rebuild", 00:32:56.007 "target": "spare", 00:32:56.007 "progress": { 00:32:56.007 "blocks": 24576, 00:32:56.007 "percent": 19 00:32:56.007 } 00:32:56.007 }, 00:32:56.007 "base_bdevs_list": [ 00:32:56.007 { 00:32:56.007 "name": "spare", 00:32:56.007 "uuid": "3d00c3ea-c477-5b59-a7f6-84926e480117", 00:32:56.007 "is_configured": true, 00:32:56.007 "data_offset": 2048, 00:32:56.007 "data_size": 63488 00:32:56.007 }, 00:32:56.007 { 00:32:56.007 "name": "BaseBdev2", 00:32:56.007 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:32:56.007 "is_configured": true, 00:32:56.007 "data_offset": 2048, 00:32:56.007 "data_size": 63488 00:32:56.007 }, 00:32:56.007 { 00:32:56.007 "name": "BaseBdev3", 00:32:56.007 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:32:56.007 "is_configured": true, 00:32:56.007 "data_offset": 2048, 00:32:56.007 "data_size": 63488 00:32:56.007 } 00:32:56.007 ] 00:32:56.007 }' 00:32:56.007 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:56.266 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:56.266 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:56.266 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:56.266 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:56.266 [2024-07-25 18:59:56.787428] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:56.266 [2024-07-25 
18:59:56.787776] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:56.266 [2024-07-25 18:59:56.787971] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:56.266 [2024-07-25 18:59:56.788023] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:56.266 [2024-07-25 18:59:56.788099] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:56.266 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:56.266 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:56.266 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:56.266 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:56.266 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:56.266 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:56.266 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:56.266 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:56.266 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:56.266 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:56.526 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:56.526 18:59:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:56.526 18:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:56.526 "name": "raid_bdev1", 00:32:56.526 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:32:56.526 "strip_size_kb": 64, 00:32:56.526 "state": "online", 00:32:56.526 "raid_level": "raid5f", 00:32:56.526 "superblock": true, 00:32:56.526 "num_base_bdevs": 3, 00:32:56.526 "num_base_bdevs_discovered": 2, 00:32:56.526 "num_base_bdevs_operational": 2, 00:32:56.526 "base_bdevs_list": [ 00:32:56.526 { 00:32:56.526 "name": null, 00:32:56.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:56.526 "is_configured": false, 00:32:56.526 "data_offset": 2048, 00:32:56.526 "data_size": 63488 00:32:56.526 }, 00:32:56.526 { 00:32:56.526 "name": "BaseBdev2", 00:32:56.526 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:32:56.526 "is_configured": true, 00:32:56.526 "data_offset": 2048, 00:32:56.526 "data_size": 63488 00:32:56.526 }, 00:32:56.526 { 00:32:56.526 "name": "BaseBdev3", 00:32:56.526 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:32:56.526 "is_configured": true, 00:32:56.526 "data_offset": 2048, 00:32:56.526 "data_size": 63488 00:32:56.526 } 00:32:56.526 ] 00:32:56.526 }' 00:32:56.526 18:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:56.526 18:59:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:57.464 18:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:57.464 
[2024-07-25 18:59:57.959766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:57.464 [2024-07-25 18:59:57.960077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:57.464 [2024-07-25 18:59:57.960159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:32:57.464 [2024-07-25 18:59:57.960264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:57.464 [2024-07-25 18:59:57.960914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:57.464 [2024-07-25 18:59:57.961057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:57.464 [2024-07-25 18:59:57.961267] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:57.464 [2024-07-25 18:59:57.961386] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:57.464 [2024-07-25 18:59:57.961459] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:32:57.464 [2024-07-25 18:59:57.961599] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:57.464 [2024-07-25 18:59:57.979167] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047d80 00:32:57.464 spare 00:32:57.464 [2024-07-25 18:59:57.987760] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:57.464 18:59:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:32:58.844 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:58.844 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:58.844 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:58.844 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:58.844 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:58.844 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:58.844 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:58.844 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:58.844 "name": "raid_bdev1", 00:32:58.844 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:32:58.844 "strip_size_kb": 64, 00:32:58.844 "state": "online", 00:32:58.844 "raid_level": "raid5f", 00:32:58.844 "superblock": true, 00:32:58.844 "num_base_bdevs": 3, 00:32:58.844 "num_base_bdevs_discovered": 3, 00:32:58.844 "num_base_bdevs_operational": 3, 00:32:58.844 "process": { 00:32:58.844 "type": "rebuild", 00:32:58.844 "target": "spare", 00:32:58.844 "progress": { 00:32:58.844 "blocks": 22528, 00:32:58.844 "percent": 17 00:32:58.844 } 00:32:58.844 }, 00:32:58.844 "base_bdevs_list": [ 00:32:58.844 { 00:32:58.844 "name": "spare", 00:32:58.844 "uuid": "3d00c3ea-c477-5b59-a7f6-84926e480117", 00:32:58.844 "is_configured": true, 00:32:58.844 "data_offset": 2048, 00:32:58.844 "data_size": 63488 00:32:58.844 }, 00:32:58.844 { 00:32:58.844 "name": "BaseBdev2", 00:32:58.844 "uuid": 
"77617fee-1d40-562b-8c2a-1a093fbfde58", 00:32:58.844 "is_configured": true, 00:32:58.844 "data_offset": 2048, 00:32:58.844 "data_size": 63488 00:32:58.844 }, 00:32:58.844 { 00:32:58.844 "name": "BaseBdev3", 00:32:58.844 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:32:58.844 "is_configured": true, 00:32:58.844 "data_offset": 2048, 00:32:58.844 "data_size": 63488 00:32:58.844 } 00:32:58.844 ] 00:32:58.844 }' 00:32:58.844 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:58.844 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:58.844 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:58.844 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:58.844 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:59.103 [2024-07-25 18:59:59.533416] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:59.103 [2024-07-25 18:59:59.601572] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:59.103 [2024-07-25 18:59:59.601758] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:59.103 [2024-07-25 18:59:59.601849] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:59.103 [2024-07-25 18:59:59.601934] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:59.103 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:59.103 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:59.103 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:59.103 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:59.103 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:59.103 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:59.103 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:59.103 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:59.103 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:59.103 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:59.103 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:59.103 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:59.361 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:59.361 "name": "raid_bdev1", 00:32:59.361 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:32:59.362 "strip_size_kb": 64, 00:32:59.362 "state": "online", 00:32:59.362 "raid_level": "raid5f", 00:32:59.362 "superblock": true, 00:32:59.362 "num_base_bdevs": 3, 00:32:59.362 "num_base_bdevs_discovered": 2, 00:32:59.362 
"num_base_bdevs_operational": 2, 00:32:59.362 "base_bdevs_list": [ 00:32:59.362 { 00:32:59.362 "name": null, 00:32:59.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.362 "is_configured": false, 00:32:59.362 "data_offset": 2048, 00:32:59.362 "data_size": 63488 00:32:59.362 }, 00:32:59.362 { 00:32:59.362 "name": "BaseBdev2", 00:32:59.362 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:32:59.362 "is_configured": true, 00:32:59.362 "data_offset": 2048, 00:32:59.362 "data_size": 63488 00:32:59.362 }, 00:32:59.362 { 00:32:59.362 "name": "BaseBdev3", 00:32:59.362 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:32:59.362 "is_configured": true, 00:32:59.362 "data_offset": 2048, 00:32:59.362 "data_size": 63488 00:32:59.362 } 00:32:59.362 ] 00:32:59.362 }' 00:32:59.362 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:59.362 18:59:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:59.930 19:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:59.930 19:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:59.930 19:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:59.930 19:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:59.930 19:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:59.930 19:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:59.930 19:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:00.189 19:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:00.189 "name": "raid_bdev1", 00:33:00.189 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:33:00.189 "strip_size_kb": 64, 00:33:00.189 "state": "online", 00:33:00.189 "raid_level": "raid5f", 00:33:00.189 "superblock": true, 00:33:00.189 "num_base_bdevs": 3, 00:33:00.189 "num_base_bdevs_discovered": 2, 00:33:00.189 "num_base_bdevs_operational": 2, 00:33:00.189 "base_bdevs_list": [ 00:33:00.189 { 00:33:00.189 "name": null, 00:33:00.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:00.189 "is_configured": false, 00:33:00.189 "data_offset": 2048, 00:33:00.189 "data_size": 63488 00:33:00.189 }, 00:33:00.189 { 00:33:00.189 "name": "BaseBdev2", 00:33:00.189 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:33:00.189 "is_configured": true, 00:33:00.189 "data_offset": 2048, 00:33:00.189 "data_size": 63488 00:33:00.189 }, 00:33:00.189 { 00:33:00.189 "name": "BaseBdev3", 00:33:00.189 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:33:00.189 "is_configured": true, 00:33:00.189 "data_offset": 2048, 00:33:00.189 "data_size": 63488 00:33:00.189 } 00:33:00.189 ] 00:33:00.189 }' 00:33:00.189 19:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:00.189 19:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:00.189 19:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:00.189 19:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:00.189 19:00:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:33:00.449 19:00:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:00.708 [2024-07-25 19:00:01.204626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:00.708 [2024-07-25 19:00:01.204833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:00.708 [2024-07-25 19:00:01.204903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:33:00.708 [2024-07-25 19:00:01.205003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:00.708 [2024-07-25 19:00:01.205529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:00.708 [2024-07-25 19:00:01.205669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:00.708 [2024-07-25 19:00:01.205884] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:33:00.708 [2024-07-25 19:00:01.206002] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:00.708 [2024-07-25 19:00:01.206071] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:00.708 BaseBdev1 00:33:00.708 19:00:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:33:02.083 19:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:02.083 19:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:02.083 19:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:02.083 19:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:02.083 19:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:02.083 19:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:02.083 19:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:02.083 19:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:02.083 19:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:02.083 19:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:02.083 19:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:02.083 19:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:02.083 19:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:02.083 "name": "raid_bdev1", 00:33:02.083 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:33:02.083 "strip_size_kb": 64, 00:33:02.083 "state": "online", 00:33:02.083 "raid_level": "raid5f", 00:33:02.083 "superblock": true, 00:33:02.083 "num_base_bdevs": 3, 00:33:02.083 "num_base_bdevs_discovered": 2, 00:33:02.083 
"num_base_bdevs_operational": 2, 00:33:02.083 "base_bdevs_list": [ 00:33:02.083 { 00:33:02.083 "name": null, 00:33:02.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:02.083 "is_configured": false, 00:33:02.083 "data_offset": 2048, 00:33:02.083 "data_size": 63488 00:33:02.083 }, 00:33:02.083 { 00:33:02.083 "name": "BaseBdev2", 00:33:02.083 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:33:02.083 "is_configured": true, 00:33:02.083 "data_offset": 2048, 00:33:02.083 "data_size": 63488 00:33:02.083 }, 00:33:02.083 { 00:33:02.083 "name": "BaseBdev3", 00:33:02.083 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:33:02.083 "is_configured": true, 00:33:02.083 "data_offset": 2048, 00:33:02.083 "data_size": 63488 00:33:02.083 } 00:33:02.083 ] 00:33:02.083 }' 00:33:02.083 19:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:02.083 19:00:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:02.650 19:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:02.650 19:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:02.650 19:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:02.650 19:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:02.650 19:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:02.650 19:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:02.650 19:00:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:02.650 19:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:02.650 "name": "raid_bdev1", 00:33:02.650 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:33:02.650 "strip_size_kb": 64, 00:33:02.650 "state": "online", 00:33:02.650 "raid_level": "raid5f", 00:33:02.650 "superblock": true, 00:33:02.651 "num_base_bdevs": 3, 00:33:02.651 "num_base_bdevs_discovered": 2, 00:33:02.651 "num_base_bdevs_operational": 2, 00:33:02.651 "base_bdevs_list": [ 00:33:02.651 { 00:33:02.651 "name": null, 00:33:02.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:02.651 "is_configured": false, 00:33:02.651 "data_offset": 2048, 00:33:02.651 "data_size": 63488 00:33:02.651 }, 00:33:02.651 { 00:33:02.651 "name": "BaseBdev2", 00:33:02.651 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:33:02.651 "is_configured": true, 00:33:02.651 "data_offset": 2048, 00:33:02.651 "data_size": 63488 00:33:02.651 }, 00:33:02.651 { 00:33:02.651 "name": "BaseBdev3", 00:33:02.651 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:33:02.651 "is_configured": true, 00:33:02.651 "data_offset": 2048, 00:33:02.651 "data_size": 63488 00:33:02.651 } 00:33:02.651 ] 00:33:02.651 }' 00:33:02.651 19:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:02.909 19:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:02.909 19:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:02.909 19:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:02.909 19:00:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:02.909 19:00:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:33:02.909 19:00:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:02.909 19:00:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:02.909 19:00:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:02.909 19:00:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:02.909 19:00:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:02.909 19:00:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:02.909 19:00:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:02.909 19:00:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:02.909 19:00:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:33:02.909 19:00:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:02.909 [2024-07-25 19:00:03.473123] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:02.909 [2024-07-25 19:00:03.473430] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:02.909 [2024-07-25 19:00:03.473533] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:02.909 request: 00:33:02.909 { 00:33:02.909 "base_bdev": "BaseBdev1", 00:33:02.909 "raid_bdev": "raid_bdev1", 00:33:02.909 "method": "bdev_raid_add_base_bdev", 00:33:02.909 "req_id": 1 00:33:02.909 } 00:33:02.909 Got JSON-RPC error response 00:33:02.909 response: 00:33:02.909 { 00:33:02.909 "code": -22, 00:33:02.909 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:33:02.909 } 00:33:02.909 19:00:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:33:02.909 19:00:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:03.167 19:00:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:03.167 19:00:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:03.167 19:00:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:33:04.104 19:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:04.104 19:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:04.104 19:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:33:04.104 19:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:04.104 19:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:04.104 19:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:04.104 19:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:04.104 19:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:04.104 19:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:04.104 19:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:04.104 19:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:04.104 19:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:04.363 19:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:04.363 "name": "raid_bdev1", 00:33:04.363 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:33:04.363 "strip_size_kb": 64, 00:33:04.363 "state": "online", 00:33:04.363 "raid_level": "raid5f", 00:33:04.363 "superblock": true, 00:33:04.363 "num_base_bdevs": 3, 00:33:04.363 "num_base_bdevs_discovered": 2, 00:33:04.363 "num_base_bdevs_operational": 2, 00:33:04.363 "base_bdevs_list": [ 00:33:04.363 { 00:33:04.363 "name": null, 00:33:04.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:04.363 "is_configured": false, 00:33:04.363 "data_offset": 2048, 00:33:04.363 "data_size": 63488 00:33:04.363 }, 00:33:04.363 { 00:33:04.363 "name": "BaseBdev2", 00:33:04.363 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:33:04.363 "is_configured": true, 00:33:04.363 "data_offset": 2048, 00:33:04.363 "data_size": 63488 00:33:04.363 }, 00:33:04.363 { 00:33:04.363 "name": "BaseBdev3", 00:33:04.363 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:33:04.363 "is_configured": true, 00:33:04.363 "data_offset": 2048, 00:33:04.363 "data_size": 63488 00:33:04.363 } 00:33:04.363 ] 00:33:04.363 }' 00:33:04.363 19:00:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:04.363 19:00:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:04.929 19:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:04.929 19:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:04.929 19:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:04.929 19:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:04.929 19:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:04.929 19:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:04.929 19:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:05.195 19:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:05.195 "name": "raid_bdev1", 00:33:05.196 "uuid": "8a112ec0-e08b-4591-9ea8-041d41c6b8b4", 00:33:05.196 
"strip_size_kb": 64, 00:33:05.196 "state": "online", 00:33:05.196 "raid_level": "raid5f", 00:33:05.196 "superblock": true, 00:33:05.196 "num_base_bdevs": 3, 00:33:05.196 "num_base_bdevs_discovered": 2, 00:33:05.196 "num_base_bdevs_operational": 2, 00:33:05.196 "base_bdevs_list": [ 00:33:05.196 { 00:33:05.196 "name": null, 00:33:05.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:05.196 "is_configured": false, 00:33:05.196 "data_offset": 2048, 00:33:05.196 "data_size": 63488 00:33:05.196 }, 00:33:05.196 { 00:33:05.196 "name": "BaseBdev2", 00:33:05.196 "uuid": "77617fee-1d40-562b-8c2a-1a093fbfde58", 00:33:05.196 "is_configured": true, 00:33:05.196 "data_offset": 2048, 00:33:05.196 "data_size": 63488 00:33:05.196 }, 00:33:05.196 { 00:33:05.196 "name": "BaseBdev3", 00:33:05.196 "uuid": "2625688a-6664-5b2a-b753-18645fb64a06", 00:33:05.196 "is_configured": true, 00:33:05.196 "data_offset": 2048, 00:33:05.196 "data_size": 63488 00:33:05.196 } 00:33:05.196 ] 00:33:05.196 }' 00:33:05.196 19:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:05.196 19:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:05.196 19:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:05.196 19:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:05.196 19:00:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 152318 00:33:05.196 19:00:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 152318 ']' 00:33:05.196 19:00:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 152318 00:33:05.196 19:00:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:33:05.196 19:00:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:05.196 19:00:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 152318 00:33:05.196 19:00:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:05.196 19:00:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:05.196 19:00:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 152318' 00:33:05.196 killing process with pid 152318 00:33:05.196 19:00:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 152318 00:33:05.197 Received shutdown signal, test time was about 60.000000 seconds 00:33:05.197 00:33:05.197 Latency(us) 00:33:05.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:05.197 =================================================================================================================== 00:33:05.197 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:05.197 [2024-07-25 19:00:05.645358] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:05.197 19:00:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 152318 00:33:05.197 [2024-07-25 19:00:05.645616] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:05.197 [2024-07-25 19:00:05.645856] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:05.197 [2024-07-25 19:00:05.645900] 
bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state offline 00:33:05.768 [2024-07-25 19:00:06.078950] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:07.143 ************************************ 00:33:07.143 END TEST raid5f_rebuild_test_sb 00:33:07.143 ************************************ 00:33:07.143 19:00:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:33:07.143 00:33:07.143 real 0m34.460s 00:33:07.143 user 0m51.661s 00:33:07.143 sys 0m4.959s 00:33:07.143 19:00:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:07.143 19:00:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:07.143 19:00:07 bdev_raid -- bdev/bdev_raid.sh@965 -- # for n in {3..4} 00:33:07.143 19:00:07 bdev_raid -- bdev/bdev_raid.sh@966 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:33:07.143 19:00:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:33:07.143 19:00:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:07.143 19:00:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:07.143 ************************************ 00:33:07.143 START TEST raid5f_state_function_test 00:33:07.143 ************************************ 00:33:07.143 19:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:33:07.143 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:33:07.143 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:33:07.143 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:33:07.143 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:33:07.143 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:33:07.143 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:07.143 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:33:07.143 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:33:07.143 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:07.143 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:33:07.143 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:33:07.143 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 
'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=153251 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 153251' 00:33:07.144 Process raid pid: 153251 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 153251 /var/tmp/spdk-raid.sock 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 153251 ']' 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:07.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:07.144 19:00:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.144 [2024-07-25 19:00:07.666643] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:33:07.144 [2024-07-25 19:00:07.666992] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:07.402 [2024-07-25 19:00:07.827329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.660 [2024-07-25 19:00:08.039701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.660 [2024-07-25 19:00:08.235258] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:08.227 19:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:08.227 19:00:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:33:08.227 19:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:33:08.227 [2024-07-25 19:00:08.770957] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:08.227 [2024-07-25 19:00:08.771052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:08.227 [2024-07-25 19:00:08.771064] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:08.227 [2024-07-25 19:00:08.771105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:08.227 [2024-07-25 19:00:08.771113] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:08.227 [2024-07-25 19:00:08.771131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:08.227 [2024-07-25 19:00:08.771150] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:33:08.227 [2024-07-25 19:00:08.771173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:33:08.227 19:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:08.227 19:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:08.227 19:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:08.227 19:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:08.227 19:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:08.227 19:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:08.227 19:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:08.227 19:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:08.227 19:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:08.227 19:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:08.227 19:00:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:08.227 19:00:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:08.794 19:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:08.794 "name": "Existed_Raid", 00:33:08.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:08.794 "strip_size_kb": 64, 00:33:08.794 "state": "configuring", 00:33:08.794 "raid_level": "raid5f", 00:33:08.794 "superblock": false, 00:33:08.794 "num_base_bdevs": 4, 00:33:08.794 "num_base_bdevs_discovered": 0, 00:33:08.794 "num_base_bdevs_operational": 4, 00:33:08.794 "base_bdevs_list": [ 00:33:08.794 { 00:33:08.794 "name": "BaseBdev1", 00:33:08.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:08.794 "is_configured": false, 00:33:08.794 "data_offset": 0, 00:33:08.794 "data_size": 0 00:33:08.794 }, 00:33:08.794 { 00:33:08.794 "name": "BaseBdev2", 00:33:08.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:08.794 "is_configured": false, 00:33:08.794 "data_offset": 0, 00:33:08.794 "data_size": 0 00:33:08.794 }, 00:33:08.794 { 00:33:08.794 "name": "BaseBdev3", 00:33:08.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:08.794 "is_configured": false, 00:33:08.794 "data_offset": 0, 00:33:08.794 "data_size": 0 00:33:08.794 }, 00:33:08.794 { 00:33:08.794 "name": "BaseBdev4", 00:33:08.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:08.794 "is_configured": false, 00:33:08.794 "data_offset": 0, 00:33:08.794 "data_size": 0 00:33:08.794 } 00:33:08.794 ] 00:33:08.794 }' 00:33:08.794 19:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:08.794 19:00:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.360 19:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:09.360 [2024-07-25 19:00:09.911073] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:09.360 [2024-07-25 19:00:09.911112] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:33:09.360 19:00:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:33:09.619 [2024-07-25 19:00:10.127124] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:09.619 [2024-07-25 19:00:10.127176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:09.619 [2024-07-25 19:00:10.127184] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:09.619 [2024-07-25 19:00:10.127228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:09.619 [2024-07-25 19:00:10.127236] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:09.619 [2024-07-25 19:00:10.127271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:09.619 [2024-07-25 19:00:10.127278] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:33:09.619 [2024-07-25 19:00:10.127301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:33:09.619 19:00:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:33:09.877 [2024-07-25 19:00:10.429920] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:09.877 BaseBdev1 00:33:09.877 19:00:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:33:09.877 19:00:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:33:09.877 19:00:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:09.877 19:00:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:33:09.877 19:00:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:09.877 19:00:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:09.877 19:00:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:10.136 19:00:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:10.395 [ 00:33:10.395 { 00:33:10.395 "name": "BaseBdev1", 00:33:10.395 "aliases": [ 00:33:10.395 "ead0cddd-545a-4d4c-8fad-74287f9476a1" 00:33:10.395 ], 00:33:10.395 "product_name": "Malloc disk", 00:33:10.395 "block_size": 512, 00:33:10.395 "num_blocks": 65536, 00:33:10.395 "uuid": "ead0cddd-545a-4d4c-8fad-74287f9476a1", 00:33:10.395 "assigned_rate_limits": { 00:33:10.395 "rw_ios_per_sec": 0, 00:33:10.395 "rw_mbytes_per_sec": 0, 00:33:10.395 "r_mbytes_per_sec": 0, 00:33:10.395 "w_mbytes_per_sec": 0 00:33:10.395 }, 00:33:10.395 "claimed": true, 00:33:10.395 "claim_type": "exclusive_write", 00:33:10.395 "zoned": false, 00:33:10.395 "supported_io_types": { 00:33:10.395 "read": true, 00:33:10.395 "write": true, 00:33:10.395 "unmap": true, 00:33:10.395 "flush": true, 00:33:10.395 "reset": true, 00:33:10.395 "nvme_admin": false, 00:33:10.395 "nvme_io": false, 00:33:10.395 "nvme_io_md": false, 00:33:10.395 "write_zeroes": true, 00:33:10.395 "zcopy": true, 00:33:10.395 "get_zone_info": false, 00:33:10.395 "zone_management": false, 00:33:10.395 "zone_append": false, 00:33:10.395 "compare": false, 00:33:10.395 "compare_and_write": false, 00:33:10.395 "abort": true, 00:33:10.395 "seek_hole": false, 00:33:10.395 "seek_data": false, 00:33:10.395 "copy": true, 00:33:10.395 "nvme_iov_md": false 00:33:10.395 }, 00:33:10.395 "memory_domains": [ 00:33:10.395 { 00:33:10.395 "dma_device_id": "system", 00:33:10.395 "dma_device_type": 1 00:33:10.395 }, 00:33:10.395 { 00:33:10.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:10.395 "dma_device_type": 2 00:33:10.395 } 00:33:10.395 ], 00:33:10.395 "driver_specific": {} 00:33:10.395 } 00:33:10.395 ] 00:33:10.395 19:00:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:33:10.395 19:00:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:10.395 19:00:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:10.395 19:00:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:33:10.395 19:00:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:10.395 19:00:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:10.395 19:00:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:10.395 19:00:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:10.395 19:00:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:10.395 19:00:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:10.395 19:00:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:10.395 19:00:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:10.395 19:00:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:10.654 19:00:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:10.654 "name": "Existed_Raid", 00:33:10.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:10.654 "strip_size_kb": 64, 00:33:10.654 "state": "configuring", 00:33:10.654 "raid_level": "raid5f", 00:33:10.654 "superblock": false, 00:33:10.654 "num_base_bdevs": 4, 00:33:10.654 "num_base_bdevs_discovered": 1, 00:33:10.654 "num_base_bdevs_operational": 4, 00:33:10.654 "base_bdevs_list": [ 00:33:10.654 { 00:33:10.654 "name": "BaseBdev1", 00:33:10.654 "uuid": "ead0cddd-545a-4d4c-8fad-74287f9476a1", 00:33:10.654 "is_configured": true, 00:33:10.654 "data_offset": 0, 00:33:10.654 "data_size": 65536 00:33:10.654 }, 00:33:10.654 { 00:33:10.654 "name": "BaseBdev2", 00:33:10.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:10.654 "is_configured": false, 00:33:10.654 "data_offset": 0, 00:33:10.654 "data_size": 0 00:33:10.654 }, 00:33:10.654 { 00:33:10.654 "name": "BaseBdev3", 00:33:10.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:10.654 "is_configured": false, 00:33:10.654 "data_offset": 0, 00:33:10.654 "data_size": 0 00:33:10.654 }, 00:33:10.654 { 00:33:10.654 "name": "BaseBdev4", 00:33:10.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:10.654 "is_configured": false, 00:33:10.654 "data_offset": 0, 00:33:10.654 "data_size": 0 00:33:10.654 } 00:33:10.654 ] 00:33:10.654 }' 00:33:10.654 19:00:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:10.654 19:00:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:11.220 19:00:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:11.220 [2024-07-25 19:00:11.794604] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:11.220 [2024-07-25 19:00:11.794643] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:33:11.479 19:00:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:33:11.479 [2024-07-25 19:00:11.954662] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:11.479 [2024-07-25 19:00:11.956824] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:11.479 [2024-07-25 19:00:11.956880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:11.479 [2024-07-25 19:00:11.956889] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:11.479 [2024-07-25 19:00:11.956914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:11.479 [2024-07-25 19:00:11.956921] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:33:11.479 [2024-07-25 19:00:11.956939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:33:11.479 19:00:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:33:11.479 19:00:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:11.479 19:00:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:11.479 19:00:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:11.479 19:00:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:11.479 19:00:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:11.479 19:00:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:11.479 19:00:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:11.479 19:00:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:11.479 19:00:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:11.479 19:00:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:11.479 19:00:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:11.479 19:00:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:11.479 19:00:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:11.738 19:00:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:11.738 "name": "Existed_Raid", 00:33:11.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:11.738 "strip_size_kb": 64, 00:33:11.738 "state": "configuring", 00:33:11.738 "raid_level": "raid5f", 00:33:11.738 "superblock": false, 00:33:11.738 "num_base_bdevs": 4, 00:33:11.738 "num_base_bdevs_discovered": 1, 00:33:11.738 "num_base_bdevs_operational": 4, 00:33:11.738 "base_bdevs_list": [ 00:33:11.738 { 00:33:11.738 "name": "BaseBdev1", 00:33:11.738 "uuid": "ead0cddd-545a-4d4c-8fad-74287f9476a1", 00:33:11.738 "is_configured": true, 00:33:11.738 "data_offset": 0, 00:33:11.738 "data_size": 65536 00:33:11.738 }, 00:33:11.738 { 00:33:11.738 "name": "BaseBdev2", 00:33:11.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:11.738 "is_configured": false, 00:33:11.738 "data_offset": 0, 00:33:11.738 "data_size": 0 00:33:11.738 }, 00:33:11.738 { 
00:33:11.738 "name": "BaseBdev3", 00:33:11.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:11.738 "is_configured": false, 00:33:11.738 "data_offset": 0, 00:33:11.738 "data_size": 0 00:33:11.738 }, 00:33:11.738 { 00:33:11.738 "name": "BaseBdev4", 00:33:11.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:11.738 "is_configured": false, 00:33:11.738 "data_offset": 0, 00:33:11.738 "data_size": 0 00:33:11.738 } 00:33:11.738 ] 00:33:11.738 }' 00:33:11.738 19:00:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:11.738 19:00:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:12.324 19:00:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:33:12.583 [2024-07-25 19:00:13.022317] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:12.583 BaseBdev2 00:33:12.583 19:00:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:33:12.583 19:00:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:33:12.583 19:00:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:12.583 19:00:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:33:12.583 19:00:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:12.583 19:00:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:12.583 19:00:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:12.840 19:00:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:13.097 [ 00:33:13.097 { 00:33:13.097 "name": "BaseBdev2", 00:33:13.097 "aliases": [ 00:33:13.097 "792d672f-936a-4304-b8e6-0f6474d7f0ab" 00:33:13.097 ], 00:33:13.097 "product_name": "Malloc disk", 00:33:13.097 "block_size": 512, 00:33:13.097 "num_blocks": 65536, 00:33:13.097 "uuid": "792d672f-936a-4304-b8e6-0f6474d7f0ab", 00:33:13.097 "assigned_rate_limits": { 00:33:13.097 "rw_ios_per_sec": 0, 00:33:13.097 "rw_mbytes_per_sec": 0, 00:33:13.097 "r_mbytes_per_sec": 0, 00:33:13.097 "w_mbytes_per_sec": 0 00:33:13.097 }, 00:33:13.097 "claimed": true, 00:33:13.097 "claim_type": "exclusive_write", 00:33:13.097 "zoned": false, 00:33:13.097 "supported_io_types": { 00:33:13.097 "read": true, 00:33:13.097 "write": true, 00:33:13.097 "unmap": true, 00:33:13.097 "flush": true, 00:33:13.097 "reset": true, 00:33:13.097 "nvme_admin": false, 00:33:13.097 "nvme_io": false, 00:33:13.097 "nvme_io_md": false, 00:33:13.097 "write_zeroes": true, 00:33:13.097 "zcopy": true, 00:33:13.097 "get_zone_info": false, 00:33:13.097 "zone_management": false, 00:33:13.097 "zone_append": false, 00:33:13.097 "compare": false, 00:33:13.097 "compare_and_write": false, 00:33:13.097 "abort": true, 00:33:13.097 "seek_hole": false, 00:33:13.097 "seek_data": false, 00:33:13.097 "copy": true, 00:33:13.097 "nvme_iov_md": false 00:33:13.097 }, 00:33:13.097 "memory_domains": [ 00:33:13.097 { 00:33:13.097 "dma_device_id": "system", 00:33:13.097 "dma_device_type": 1 00:33:13.097 }, 
00:33:13.097 { 00:33:13.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:13.097 "dma_device_type": 2 00:33:13.097 } 00:33:13.097 ], 00:33:13.097 "driver_specific": {} 00:33:13.097 } 00:33:13.097 ] 00:33:13.097 19:00:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:33:13.097 19:00:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:33:13.097 19:00:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:13.097 19:00:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:13.097 19:00:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:13.097 19:00:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:13.097 19:00:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:13.097 19:00:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:13.097 19:00:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:13.097 19:00:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:13.097 19:00:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:13.097 19:00:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:13.097 19:00:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:13.097 19:00:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:13.097 19:00:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:13.355 19:00:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:13.355 "name": "Existed_Raid", 00:33:13.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:13.355 "strip_size_kb": 64, 00:33:13.355 "state": "configuring", 00:33:13.355 "raid_level": "raid5f", 00:33:13.355 "superblock": false, 00:33:13.355 "num_base_bdevs": 4, 00:33:13.355 "num_base_bdevs_discovered": 2, 00:33:13.355 "num_base_bdevs_operational": 4, 00:33:13.355 "base_bdevs_list": [ 00:33:13.355 { 00:33:13.355 "name": "BaseBdev1", 00:33:13.355 "uuid": "ead0cddd-545a-4d4c-8fad-74287f9476a1", 00:33:13.355 "is_configured": true, 00:33:13.355 "data_offset": 0, 00:33:13.355 "data_size": 65536 00:33:13.355 }, 00:33:13.355 { 00:33:13.355 "name": "BaseBdev2", 00:33:13.355 "uuid": "792d672f-936a-4304-b8e6-0f6474d7f0ab", 00:33:13.355 "is_configured": true, 00:33:13.355 "data_offset": 0, 00:33:13.355 "data_size": 65536 00:33:13.355 }, 00:33:13.355 { 00:33:13.355 "name": "BaseBdev3", 00:33:13.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:13.355 "is_configured": false, 00:33:13.355 "data_offset": 0, 00:33:13.355 "data_size": 0 00:33:13.355 }, 00:33:13.355 { 00:33:13.355 "name": "BaseBdev4", 00:33:13.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:13.355 "is_configured": false, 00:33:13.355 "data_offset": 0, 00:33:13.355 "data_size": 0 00:33:13.355 } 00:33:13.355 ] 00:33:13.355 }' 00:33:13.355 19:00:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
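Each base bdev the test adds is a plain malloc bdev: bdev_malloc_create 32 512 creates 32 MiB of RAM-backed storage with a 512-byte block size, which matches the 65536 blocks x 512 B reported by bdev_get_bdevs above, and waitforbdev then polls for the new bdev with a 2000 ms timeout. A hedged sketch of that create-and-wait step as it could be replayed by hand, using only RPCs that appear verbatim in this trace:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Create a 32 MiB malloc bdev with 512-byte blocks to serve as a raid member.
  $RPC bdev_malloc_create 32 512 -b BaseBdev2
  # Let any pending examine callbacks finish before the bdev is used.
  $RPC bdev_wait_for_examine
  # Confirm registration; -t 2000 waits up to 2000 ms for the bdev to appear.
  $RPC bdev_get_bdevs -b BaseBdev2 -t 2000
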
00:33:13.355 19:00:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:13.920 19:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:33:14.178 [2024-07-25 19:00:14.570843] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:14.178 BaseBdev3 00:33:14.178 19:00:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:33:14.178 19:00:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:33:14.178 19:00:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:14.178 19:00:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:33:14.178 19:00:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:14.178 19:00:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:14.178 19:00:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:14.435 19:00:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:14.435 [ 00:33:14.435 { 00:33:14.435 "name": "BaseBdev3", 00:33:14.435 "aliases": [ 00:33:14.435 "b36e2b69-5cc5-455b-bf99-dc10c719cdc5" 00:33:14.435 ], 00:33:14.435 "product_name": "Malloc disk", 00:33:14.435 "block_size": 512, 00:33:14.435 "num_blocks": 65536, 00:33:14.435 "uuid": "b36e2b69-5cc5-455b-bf99-dc10c719cdc5", 00:33:14.435 "assigned_rate_limits": { 00:33:14.435 "rw_ios_per_sec": 0, 00:33:14.435 "rw_mbytes_per_sec": 0, 00:33:14.435 "r_mbytes_per_sec": 0, 00:33:14.435 "w_mbytes_per_sec": 0 00:33:14.435 }, 00:33:14.435 "claimed": true, 00:33:14.435 "claim_type": "exclusive_write", 00:33:14.435 "zoned": false, 00:33:14.435 "supported_io_types": { 00:33:14.435 "read": true, 00:33:14.435 "write": true, 00:33:14.435 "unmap": true, 00:33:14.435 "flush": true, 00:33:14.435 "reset": true, 00:33:14.435 "nvme_admin": false, 00:33:14.435 "nvme_io": false, 00:33:14.435 "nvme_io_md": false, 00:33:14.435 "write_zeroes": true, 00:33:14.435 "zcopy": true, 00:33:14.435 "get_zone_info": false, 00:33:14.435 "zone_management": false, 00:33:14.435 "zone_append": false, 00:33:14.435 "compare": false, 00:33:14.435 "compare_and_write": false, 00:33:14.435 "abort": true, 00:33:14.435 "seek_hole": false, 00:33:14.435 "seek_data": false, 00:33:14.435 "copy": true, 00:33:14.435 "nvme_iov_md": false 00:33:14.435 }, 00:33:14.435 "memory_domains": [ 00:33:14.435 { 00:33:14.435 "dma_device_id": "system", 00:33:14.435 "dma_device_type": 1 00:33:14.435 }, 00:33:14.435 { 00:33:14.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:14.435 "dma_device_type": 2 00:33:14.435 } 00:33:14.435 ], 00:33:14.435 "driver_specific": {} 00:33:14.435 } 00:33:14.435 ] 00:33:14.435 19:00:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:33:14.435 19:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:33:14.435 19:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:14.435 19:00:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:14.435 19:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:14.435 19:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:14.435 19:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:14.435 19:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:14.435 19:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:14.435 19:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:14.435 19:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:14.435 19:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:14.435 19:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:14.693 19:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:14.693 19:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:14.693 19:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:14.693 "name": "Existed_Raid", 00:33:14.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:14.693 "strip_size_kb": 64, 00:33:14.693 "state": "configuring", 00:33:14.693 "raid_level": "raid5f", 00:33:14.693 "superblock": false, 00:33:14.693 "num_base_bdevs": 4, 00:33:14.693 "num_base_bdevs_discovered": 3, 00:33:14.693 "num_base_bdevs_operational": 4, 00:33:14.693 "base_bdevs_list": [ 00:33:14.693 { 00:33:14.693 "name": "BaseBdev1", 00:33:14.693 "uuid": "ead0cddd-545a-4d4c-8fad-74287f9476a1", 00:33:14.693 "is_configured": true, 00:33:14.693 "data_offset": 0, 00:33:14.693 "data_size": 65536 00:33:14.693 }, 00:33:14.693 { 00:33:14.693 "name": "BaseBdev2", 00:33:14.693 "uuid": "792d672f-936a-4304-b8e6-0f6474d7f0ab", 00:33:14.693 "is_configured": true, 00:33:14.693 "data_offset": 0, 00:33:14.693 "data_size": 65536 00:33:14.693 }, 00:33:14.693 { 00:33:14.693 "name": "BaseBdev3", 00:33:14.693 "uuid": "b36e2b69-5cc5-455b-bf99-dc10c719cdc5", 00:33:14.693 "is_configured": true, 00:33:14.693 "data_offset": 0, 00:33:14.693 "data_size": 65536 00:33:14.693 }, 00:33:14.693 { 00:33:14.693 "name": "BaseBdev4", 00:33:14.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:14.693 "is_configured": false, 00:33:14.693 "data_offset": 0, 00:33:14.693 "data_size": 0 00:33:14.693 } 00:33:14.693 ] 00:33:14.693 }' 00:33:14.693 19:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:14.693 19:00:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:15.259 19:00:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:33:15.518 [2024-07-25 19:00:16.007855] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:15.518 [2024-07-25 19:00:16.007932] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000013100 00:33:15.518 [2024-07-25 19:00:16.007942] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:33:15.518 [2024-07-25 19:00:16.008047] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:33:15.518 [2024-07-25 19:00:16.013495] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:33:15.518 [2024-07-25 19:00:16.013518] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:33:15.518 [2024-07-25 19:00:16.013790] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:15.518 BaseBdev4 00:33:15.518 19:00:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:33:15.518 19:00:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:33:15.518 19:00:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:15.518 19:00:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:33:15.518 19:00:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:15.518 19:00:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:15.518 19:00:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:15.777 19:00:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:33:16.036 [ 00:33:16.036 { 00:33:16.036 "name": "BaseBdev4", 00:33:16.036 "aliases": [ 00:33:16.036 "9b244200-34e7-4e9b-9148-426dc13adb75" 00:33:16.036 ], 00:33:16.036 "product_name": "Malloc disk", 00:33:16.036 "block_size": 512, 00:33:16.036 "num_blocks": 65536, 00:33:16.036 "uuid": "9b244200-34e7-4e9b-9148-426dc13adb75", 00:33:16.036 "assigned_rate_limits": { 00:33:16.036 "rw_ios_per_sec": 0, 00:33:16.036 "rw_mbytes_per_sec": 0, 00:33:16.036 "r_mbytes_per_sec": 0, 00:33:16.036 "w_mbytes_per_sec": 0 00:33:16.036 }, 00:33:16.036 "claimed": true, 00:33:16.036 "claim_type": "exclusive_write", 00:33:16.036 "zoned": false, 00:33:16.036 "supported_io_types": { 00:33:16.036 "read": true, 00:33:16.036 "write": true, 00:33:16.036 "unmap": true, 00:33:16.036 "flush": true, 00:33:16.036 "reset": true, 00:33:16.036 "nvme_admin": false, 00:33:16.036 "nvme_io": false, 00:33:16.036 "nvme_io_md": false, 00:33:16.036 "write_zeroes": true, 00:33:16.036 "zcopy": true, 00:33:16.036 "get_zone_info": false, 00:33:16.036 "zone_management": false, 00:33:16.036 "zone_append": false, 00:33:16.036 "compare": false, 00:33:16.036 "compare_and_write": false, 00:33:16.036 "abort": true, 00:33:16.036 "seek_hole": false, 00:33:16.036 "seek_data": false, 00:33:16.036 "copy": true, 00:33:16.036 "nvme_iov_md": false 00:33:16.036 }, 00:33:16.036 "memory_domains": [ 00:33:16.036 { 00:33:16.036 "dma_device_id": "system", 00:33:16.036 "dma_device_type": 1 00:33:16.036 }, 00:33:16.036 { 00:33:16.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:16.036 "dma_device_type": 2 00:33:16.036 } 00:33:16.036 ], 00:33:16.036 "driver_specific": {} 00:33:16.036 } 00:33:16.036 ] 00:33:16.036 19:00:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:33:16.036 19:00:16 
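Adding the fourth member is what flips the array out of "configuring": once BaseBdev4 is claimed, raid_bdev_configure_cont registers the io device and reports blockcnt 196608, i.e. (4 - 1) x 65536 blocks, one member's worth of capacity being reserved for raid5f parity. A short illustrative follow-up check; the field extraction is an assumption, though the field names come from the JSON in this trace:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # With all four 65536-block members claimed, the array should now report
  # state "online" with num_base_bdevs_discovered == 4.
  $RPC bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
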
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:33:16.036 19:00:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:16.036 19:00:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:33:16.036 19:00:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:16.036 19:00:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:16.036 19:00:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:16.036 19:00:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:16.036 19:00:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:16.036 19:00:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:16.036 19:00:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:16.036 19:00:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:16.036 19:00:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:16.036 19:00:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:16.036 19:00:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:16.293 19:00:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:16.293 "name": "Existed_Raid", 00:33:16.293 "uuid": "2e452faf-0189-4a8f-be11-69e0fbd44be3", 00:33:16.293 "strip_size_kb": 64, 00:33:16.293 "state": "online", 00:33:16.293 "raid_level": "raid5f", 00:33:16.293 "superblock": false, 00:33:16.293 "num_base_bdevs": 4, 00:33:16.293 "num_base_bdevs_discovered": 4, 00:33:16.294 "num_base_bdevs_operational": 4, 00:33:16.294 "base_bdevs_list": [ 00:33:16.294 { 00:33:16.294 "name": "BaseBdev1", 00:33:16.294 "uuid": "ead0cddd-545a-4d4c-8fad-74287f9476a1", 00:33:16.294 "is_configured": true, 00:33:16.294 "data_offset": 0, 00:33:16.294 "data_size": 65536 00:33:16.294 }, 00:33:16.294 { 00:33:16.294 "name": "BaseBdev2", 00:33:16.294 "uuid": "792d672f-936a-4304-b8e6-0f6474d7f0ab", 00:33:16.294 "is_configured": true, 00:33:16.294 "data_offset": 0, 00:33:16.294 "data_size": 65536 00:33:16.294 }, 00:33:16.294 { 00:33:16.294 "name": "BaseBdev3", 00:33:16.294 "uuid": "b36e2b69-5cc5-455b-bf99-dc10c719cdc5", 00:33:16.294 "is_configured": true, 00:33:16.294 "data_offset": 0, 00:33:16.294 "data_size": 65536 00:33:16.294 }, 00:33:16.294 { 00:33:16.294 "name": "BaseBdev4", 00:33:16.294 "uuid": "9b244200-34e7-4e9b-9148-426dc13adb75", 00:33:16.294 "is_configured": true, 00:33:16.294 "data_offset": 0, 00:33:16.294 "data_size": 65536 00:33:16.294 } 00:33:16.294 ] 00:33:16.294 }' 00:33:16.294 19:00:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:16.294 19:00:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:16.859 19:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:33:16.859 19:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=Existed_Raid 00:33:16.859 19:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:16.859 19:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:16.859 19:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:16.859 19:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:33:16.859 19:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:33:16.859 19:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:17.117 [2024-07-25 19:00:17.558349] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:17.117 19:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:17.117 "name": "Existed_Raid", 00:33:17.117 "aliases": [ 00:33:17.117 "2e452faf-0189-4a8f-be11-69e0fbd44be3" 00:33:17.117 ], 00:33:17.117 "product_name": "Raid Volume", 00:33:17.117 "block_size": 512, 00:33:17.117 "num_blocks": 196608, 00:33:17.117 "uuid": "2e452faf-0189-4a8f-be11-69e0fbd44be3", 00:33:17.117 "assigned_rate_limits": { 00:33:17.117 "rw_ios_per_sec": 0, 00:33:17.117 "rw_mbytes_per_sec": 0, 00:33:17.117 "r_mbytes_per_sec": 0, 00:33:17.117 "w_mbytes_per_sec": 0 00:33:17.117 }, 00:33:17.117 "claimed": false, 00:33:17.117 "zoned": false, 00:33:17.117 "supported_io_types": { 00:33:17.117 "read": true, 00:33:17.117 "write": true, 00:33:17.117 "unmap": false, 00:33:17.117 "flush": false, 00:33:17.117 "reset": true, 00:33:17.117 "nvme_admin": false, 00:33:17.117 "nvme_io": false, 00:33:17.117 "nvme_io_md": false, 00:33:17.117 "write_zeroes": true, 00:33:17.117 "zcopy": false, 00:33:17.117 "get_zone_info": false, 00:33:17.117 "zone_management": false, 00:33:17.117 "zone_append": false, 00:33:17.117 "compare": false, 00:33:17.117 "compare_and_write": false, 00:33:17.117 "abort": false, 00:33:17.117 "seek_hole": false, 00:33:17.117 "seek_data": false, 00:33:17.117 "copy": false, 00:33:17.117 "nvme_iov_md": false 00:33:17.117 }, 00:33:17.117 "driver_specific": { 00:33:17.117 "raid": { 00:33:17.117 "uuid": "2e452faf-0189-4a8f-be11-69e0fbd44be3", 00:33:17.117 "strip_size_kb": 64, 00:33:17.117 "state": "online", 00:33:17.117 "raid_level": "raid5f", 00:33:17.117 "superblock": false, 00:33:17.117 "num_base_bdevs": 4, 00:33:17.117 "num_base_bdevs_discovered": 4, 00:33:17.118 "num_base_bdevs_operational": 4, 00:33:17.118 "base_bdevs_list": [ 00:33:17.118 { 00:33:17.118 "name": "BaseBdev1", 00:33:17.118 "uuid": "ead0cddd-545a-4d4c-8fad-74287f9476a1", 00:33:17.118 "is_configured": true, 00:33:17.118 "data_offset": 0, 00:33:17.118 "data_size": 65536 00:33:17.118 }, 00:33:17.118 { 00:33:17.118 "name": "BaseBdev2", 00:33:17.118 "uuid": "792d672f-936a-4304-b8e6-0f6474d7f0ab", 00:33:17.118 "is_configured": true, 00:33:17.118 "data_offset": 0, 00:33:17.118 "data_size": 65536 00:33:17.118 }, 00:33:17.118 { 00:33:17.118 "name": "BaseBdev3", 00:33:17.118 "uuid": "b36e2b69-5cc5-455b-bf99-dc10c719cdc5", 00:33:17.118 "is_configured": true, 00:33:17.118 "data_offset": 0, 00:33:17.118 "data_size": 65536 00:33:17.118 }, 00:33:17.118 { 00:33:17.118 "name": "BaseBdev4", 00:33:17.118 "uuid": "9b244200-34e7-4e9b-9148-426dc13adb75", 00:33:17.118 "is_configured": true, 00:33:17.118 "data_offset": 0, 00:33:17.118 "data_size": 65536 00:33:17.118 } 
00:33:17.118 ] 00:33:17.118 } 00:33:17.118 } 00:33:17.118 }' 00:33:17.118 19:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:17.118 19:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:33:17.118 BaseBdev2 00:33:17.118 BaseBdev3 00:33:17.118 BaseBdev4' 00:33:17.118 19:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:17.118 19:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:33:17.118 19:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:17.377 19:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:17.377 "name": "BaseBdev1", 00:33:17.377 "aliases": [ 00:33:17.377 "ead0cddd-545a-4d4c-8fad-74287f9476a1" 00:33:17.377 ], 00:33:17.377 "product_name": "Malloc disk", 00:33:17.377 "block_size": 512, 00:33:17.377 "num_blocks": 65536, 00:33:17.377 "uuid": "ead0cddd-545a-4d4c-8fad-74287f9476a1", 00:33:17.377 "assigned_rate_limits": { 00:33:17.377 "rw_ios_per_sec": 0, 00:33:17.377 "rw_mbytes_per_sec": 0, 00:33:17.377 "r_mbytes_per_sec": 0, 00:33:17.377 "w_mbytes_per_sec": 0 00:33:17.377 }, 00:33:17.377 "claimed": true, 00:33:17.377 "claim_type": "exclusive_write", 00:33:17.377 "zoned": false, 00:33:17.377 "supported_io_types": { 00:33:17.377 "read": true, 00:33:17.377 "write": true, 00:33:17.377 "unmap": true, 00:33:17.377 "flush": true, 00:33:17.377 "reset": true, 00:33:17.377 "nvme_admin": false, 00:33:17.377 "nvme_io": false, 00:33:17.377 "nvme_io_md": false, 00:33:17.377 "write_zeroes": true, 00:33:17.377 "zcopy": true, 00:33:17.377 "get_zone_info": false, 00:33:17.377 "zone_management": false, 00:33:17.377 "zone_append": false, 00:33:17.377 "compare": false, 00:33:17.377 "compare_and_write": false, 00:33:17.377 "abort": true, 00:33:17.377 "seek_hole": false, 00:33:17.377 "seek_data": false, 00:33:17.377 "copy": true, 00:33:17.377 "nvme_iov_md": false 00:33:17.377 }, 00:33:17.377 "memory_domains": [ 00:33:17.377 { 00:33:17.377 "dma_device_id": "system", 00:33:17.377 "dma_device_type": 1 00:33:17.377 }, 00:33:17.377 { 00:33:17.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:17.377 "dma_device_type": 2 00:33:17.377 } 00:33:17.377 ], 00:33:17.377 "driver_specific": {} 00:33:17.377 }' 00:33:17.377 19:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:17.377 19:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:17.377 19:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:17.377 19:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:17.377 19:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:17.636 19:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:17.636 19:00:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:17.636 19:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:17.636 19:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:17.636 19:00:18 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:17.636 19:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:17.636 19:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:17.636 19:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:17.636 19:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:33:17.636 19:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:17.895 19:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:17.895 "name": "BaseBdev2", 00:33:17.895 "aliases": [ 00:33:17.895 "792d672f-936a-4304-b8e6-0f6474d7f0ab" 00:33:17.895 ], 00:33:17.895 "product_name": "Malloc disk", 00:33:17.895 "block_size": 512, 00:33:17.895 "num_blocks": 65536, 00:33:17.895 "uuid": "792d672f-936a-4304-b8e6-0f6474d7f0ab", 00:33:17.895 "assigned_rate_limits": { 00:33:17.895 "rw_ios_per_sec": 0, 00:33:17.895 "rw_mbytes_per_sec": 0, 00:33:17.895 "r_mbytes_per_sec": 0, 00:33:17.895 "w_mbytes_per_sec": 0 00:33:17.895 }, 00:33:17.895 "claimed": true, 00:33:17.895 "claim_type": "exclusive_write", 00:33:17.895 "zoned": false, 00:33:17.895 "supported_io_types": { 00:33:17.895 "read": true, 00:33:17.895 "write": true, 00:33:17.895 "unmap": true, 00:33:17.895 "flush": true, 00:33:17.895 "reset": true, 00:33:17.895 "nvme_admin": false, 00:33:17.895 "nvme_io": false, 00:33:17.895 "nvme_io_md": false, 00:33:17.895 "write_zeroes": true, 00:33:17.895 "zcopy": true, 00:33:17.895 "get_zone_info": false, 00:33:17.895 "zone_management": false, 00:33:17.895 "zone_append": false, 00:33:17.895 "compare": false, 00:33:17.895 "compare_and_write": false, 00:33:17.895 "abort": true, 00:33:17.895 "seek_hole": false, 00:33:17.895 "seek_data": false, 00:33:17.895 "copy": true, 00:33:17.895 "nvme_iov_md": false 00:33:17.895 }, 00:33:17.895 "memory_domains": [ 00:33:17.895 { 00:33:17.895 "dma_device_id": "system", 00:33:17.895 "dma_device_type": 1 00:33:17.895 }, 00:33:17.895 { 00:33:17.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:17.895 "dma_device_type": 2 00:33:17.895 } 00:33:17.895 ], 00:33:17.895 "driver_specific": {} 00:33:17.895 }' 00:33:17.895 19:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:18.155 19:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:18.155 19:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:18.155 19:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:18.155 19:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:18.155 19:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:18.155 19:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:18.155 19:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:18.155 19:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:18.155 19:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:18.414 19:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:18.414 19:00:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:18.414 19:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:18.414 19:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:33:18.414 19:00:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:18.677 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:18.677 "name": "BaseBdev3", 00:33:18.677 "aliases": [ 00:33:18.677 "b36e2b69-5cc5-455b-bf99-dc10c719cdc5" 00:33:18.677 ], 00:33:18.678 "product_name": "Malloc disk", 00:33:18.678 "block_size": 512, 00:33:18.678 "num_blocks": 65536, 00:33:18.678 "uuid": "b36e2b69-5cc5-455b-bf99-dc10c719cdc5", 00:33:18.678 "assigned_rate_limits": { 00:33:18.678 "rw_ios_per_sec": 0, 00:33:18.678 "rw_mbytes_per_sec": 0, 00:33:18.678 "r_mbytes_per_sec": 0, 00:33:18.678 "w_mbytes_per_sec": 0 00:33:18.678 }, 00:33:18.678 "claimed": true, 00:33:18.678 "claim_type": "exclusive_write", 00:33:18.678 "zoned": false, 00:33:18.678 "supported_io_types": { 00:33:18.678 "read": true, 00:33:18.678 "write": true, 00:33:18.678 "unmap": true, 00:33:18.678 "flush": true, 00:33:18.678 "reset": true, 00:33:18.678 "nvme_admin": false, 00:33:18.678 "nvme_io": false, 00:33:18.678 "nvme_io_md": false, 00:33:18.678 "write_zeroes": true, 00:33:18.678 "zcopy": true, 00:33:18.678 "get_zone_info": false, 00:33:18.678 "zone_management": false, 00:33:18.678 "zone_append": false, 00:33:18.678 "compare": false, 00:33:18.678 "compare_and_write": false, 00:33:18.678 "abort": true, 00:33:18.678 "seek_hole": false, 00:33:18.678 "seek_data": false, 00:33:18.678 "copy": true, 00:33:18.678 "nvme_iov_md": false 00:33:18.678 }, 00:33:18.678 "memory_domains": [ 00:33:18.678 { 00:33:18.678 "dma_device_id": "system", 00:33:18.678 "dma_device_type": 1 00:33:18.678 }, 00:33:18.678 { 00:33:18.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:18.678 "dma_device_type": 2 00:33:18.678 } 00:33:18.678 ], 00:33:18.678 "driver_specific": {} 00:33:18.678 }' 00:33:18.678 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:18.678 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:18.678 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:18.678 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:18.678 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:18.678 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:18.678 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:18.678 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:18.997 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:18.997 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:18.997 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:18.997 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:18.997 19:00:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:18.998 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:33:18.998 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:19.289 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:19.289 "name": "BaseBdev4", 00:33:19.289 "aliases": [ 00:33:19.289 "9b244200-34e7-4e9b-9148-426dc13adb75" 00:33:19.289 ], 00:33:19.289 "product_name": "Malloc disk", 00:33:19.289 "block_size": 512, 00:33:19.289 "num_blocks": 65536, 00:33:19.289 "uuid": "9b244200-34e7-4e9b-9148-426dc13adb75", 00:33:19.289 "assigned_rate_limits": { 00:33:19.289 "rw_ios_per_sec": 0, 00:33:19.290 "rw_mbytes_per_sec": 0, 00:33:19.290 "r_mbytes_per_sec": 0, 00:33:19.290 "w_mbytes_per_sec": 0 00:33:19.290 }, 00:33:19.290 "claimed": true, 00:33:19.290 "claim_type": "exclusive_write", 00:33:19.290 "zoned": false, 00:33:19.290 "supported_io_types": { 00:33:19.290 "read": true, 00:33:19.290 "write": true, 00:33:19.290 "unmap": true, 00:33:19.290 "flush": true, 00:33:19.290 "reset": true, 00:33:19.290 "nvme_admin": false, 00:33:19.290 "nvme_io": false, 00:33:19.290 "nvme_io_md": false, 00:33:19.290 "write_zeroes": true, 00:33:19.290 "zcopy": true, 00:33:19.290 "get_zone_info": false, 00:33:19.290 "zone_management": false, 00:33:19.290 "zone_append": false, 00:33:19.290 "compare": false, 00:33:19.290 "compare_and_write": false, 00:33:19.290 "abort": true, 00:33:19.290 "seek_hole": false, 00:33:19.290 "seek_data": false, 00:33:19.290 "copy": true, 00:33:19.290 "nvme_iov_md": false 00:33:19.290 }, 00:33:19.290 "memory_domains": [ 00:33:19.290 { 00:33:19.290 "dma_device_id": "system", 00:33:19.290 "dma_device_type": 1 00:33:19.290 }, 00:33:19.290 { 00:33:19.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:19.290 "dma_device_type": 2 00:33:19.290 } 00:33:19.290 ], 00:33:19.290 "driver_specific": {} 00:33:19.290 }' 00:33:19.290 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:19.290 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:19.290 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:19.290 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:19.290 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:19.290 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:19.290 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:19.549 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:19.549 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:19.549 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:19.549 19:00:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:19.549 19:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:19.549 19:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:33:19.809 [2024-07-25 19:00:20.268571] 
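The @203-@208 loop traced above is verify_raid_bdev_properties: it pulls the configured member names out of the raid volume's driver_specific data and checks that block_size, md_size, md_interleave and dif_type agree between the array and every member (512 / null / null / null throughout this run). A hedged reconstruction of that loop, assuming the same socket and names; the helper's exact variable handling in bdev_raid.sh may differ:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  raid_bdev_info=$($RPC bdev_get_bdevs -b Existed_Raid | jq '.[]')
  # Member names, extracted exactly as at bdev_raid.sh@201.
  base_bdev_names=$(echo "$raid_bdev_info" \
    | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
  for name in $base_bdev_names; do
    base_bdev_info=$($RPC bdev_get_bdevs -b "$name" | jq '.[]')
    # Compare the four layout fields between the raid volume and this member.
    for field in .block_size .md_size .md_interleave .dif_type; do
      [[ "$(echo "$raid_bdev_info" | jq "$field")" == "$(echo "$base_bdev_info" | jq "$field")" ]] \
        || echo "property mismatch on $field for $name"
    done
  done
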
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:19.809 19:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:33:19.809 19:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:33:19.809 19:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:33:19.809 19:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:33:19.809 19:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:33:19.809 19:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:33:19.809 19:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:19.809 19:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:19.809 19:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:19.809 19:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:19.809 19:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:19.809 19:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:19.809 19:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:19.809 19:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:19.809 19:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:19.809 19:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:19.809 19:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:20.068 19:00:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:20.068 "name": "Existed_Raid", 00:33:20.068 "uuid": "2e452faf-0189-4a8f-be11-69e0fbd44be3", 00:33:20.068 "strip_size_kb": 64, 00:33:20.068 "state": "online", 00:33:20.068 "raid_level": "raid5f", 00:33:20.068 "superblock": false, 00:33:20.068 "num_base_bdevs": 4, 00:33:20.068 "num_base_bdevs_discovered": 3, 00:33:20.068 "num_base_bdevs_operational": 3, 00:33:20.068 "base_bdevs_list": [ 00:33:20.068 { 00:33:20.068 "name": null, 00:33:20.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:20.068 "is_configured": false, 00:33:20.068 "data_offset": 0, 00:33:20.068 "data_size": 65536 00:33:20.068 }, 00:33:20.068 { 00:33:20.068 "name": "BaseBdev2", 00:33:20.068 "uuid": "792d672f-936a-4304-b8e6-0f6474d7f0ab", 00:33:20.068 "is_configured": true, 00:33:20.068 "data_offset": 0, 00:33:20.068 "data_size": 65536 00:33:20.068 }, 00:33:20.068 { 00:33:20.068 "name": "BaseBdev3", 00:33:20.068 "uuid": "b36e2b69-5cc5-455b-bf99-dc10c719cdc5", 00:33:20.068 "is_configured": true, 00:33:20.068 "data_offset": 0, 00:33:20.068 "data_size": 65536 00:33:20.068 }, 00:33:20.068 { 00:33:20.068 "name": "BaseBdev4", 00:33:20.068 "uuid": "9b244200-34e7-4e9b-9148-426dc13adb75", 00:33:20.068 "is_configured": true, 00:33:20.068 "data_offset": 0, 00:33:20.068 "data_size": 65536 00:33:20.068 } 00:33:20.068 ] 00:33:20.068 }' 00:33:20.068 19:00:20 
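Deleting BaseBdev1 out from under the online array exercises the degraded path: because raid5f has redundancy (has_redundancy at @276 returns 0), the expected state stays "online", the removed slot shows name null, and num_base_bdevs_operational drops from 4 to 3, as the JSON above confirms. The same removal and check, sketched for illustration with commands taken from the trace:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Remove one member of the redundant raid5f array...
  $RPC bdev_malloc_delete BaseBdev1
  # ...and confirm the array stays online, now running on 3 of 4 members.
  $RPC bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_operational)/\(.num_base_bdevs)"'
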
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:20.068 19:00:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:20.634 19:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:33:20.634 19:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:20.634 19:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:20.634 19:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:33:20.892 19:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:33:20.892 19:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:20.892 19:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:33:21.151 [2024-07-25 19:00:21.674258] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:21.151 [2024-07-25 19:00:21.674377] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:21.409 [2024-07-25 19:00:21.751190] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:21.409 19:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:33:21.409 19:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:21.409 19:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:21.409 19:00:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:33:21.667 19:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:33:21.667 19:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:21.667 19:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:33:21.667 [2024-07-25 19:00:22.247351] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:21.926 19:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:33:21.926 19:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:21.926 19:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:21.926 19:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:33:22.185 19:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:33:22.185 19:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:22.185 19:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:33:22.444 [2024-07-25 19:00:22.790379] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev4 00:33:22.444 [2024-07-25 19:00:22.790478] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:33:22.444 19:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:33:22.444 19:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:22.444 19:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:22.444 19:00:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:33:22.703 19:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:33:22.703 19:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:33:22.703 19:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:33:22.703 19:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:33:22.703 19:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:33:22.703 19:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:33:22.968 BaseBdev2 00:33:22.968 19:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:33:22.968 19:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:33:22.968 19:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:22.968 19:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:33:22.968 19:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:22.968 19:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:22.968 19:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:22.968 19:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:23.227 [ 00:33:23.227 { 00:33:23.227 "name": "BaseBdev2", 00:33:23.227 "aliases": [ 00:33:23.227 "ae7d89bc-e7a5-48cc-afd4-5c8375cadba9" 00:33:23.227 ], 00:33:23.227 "product_name": "Malloc disk", 00:33:23.227 "block_size": 512, 00:33:23.227 "num_blocks": 65536, 00:33:23.227 "uuid": "ae7d89bc-e7a5-48cc-afd4-5c8375cadba9", 00:33:23.227 "assigned_rate_limits": { 00:33:23.227 "rw_ios_per_sec": 0, 00:33:23.227 "rw_mbytes_per_sec": 0, 00:33:23.227 "r_mbytes_per_sec": 0, 00:33:23.227 "w_mbytes_per_sec": 0 00:33:23.228 }, 00:33:23.228 "claimed": false, 00:33:23.228 "zoned": false, 00:33:23.228 "supported_io_types": { 00:33:23.228 "read": true, 00:33:23.228 "write": true, 00:33:23.228 "unmap": true, 00:33:23.228 "flush": true, 00:33:23.228 "reset": true, 00:33:23.228 "nvme_admin": false, 00:33:23.228 "nvme_io": false, 00:33:23.228 "nvme_io_md": false, 00:33:23.228 "write_zeroes": true, 00:33:23.228 "zcopy": true, 00:33:23.228 "get_zone_info": false, 00:33:23.228 "zone_management": false, 00:33:23.228 "zone_append": false, 00:33:23.228 
"compare": false, 00:33:23.228 "compare_and_write": false, 00:33:23.228 "abort": true, 00:33:23.228 "seek_hole": false, 00:33:23.228 "seek_data": false, 00:33:23.228 "copy": true, 00:33:23.228 "nvme_iov_md": false 00:33:23.228 }, 00:33:23.228 "memory_domains": [ 00:33:23.228 { 00:33:23.228 "dma_device_id": "system", 00:33:23.228 "dma_device_type": 1 00:33:23.228 }, 00:33:23.228 { 00:33:23.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:23.228 "dma_device_type": 2 00:33:23.228 } 00:33:23.228 ], 00:33:23.228 "driver_specific": {} 00:33:23.228 } 00:33:23.228 ] 00:33:23.228 19:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:33:23.228 19:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:33:23.228 19:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:33:23.228 19:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:33:23.486 BaseBdev3 00:33:23.486 19:00:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:33:23.486 19:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:33:23.486 19:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:23.486 19:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:33:23.486 19:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:23.486 19:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:23.486 19:00:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:23.745 19:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:23.745 [ 00:33:23.745 { 00:33:23.745 "name": "BaseBdev3", 00:33:23.745 "aliases": [ 00:33:23.745 "4aa086c0-f6b0-466b-b0e7-9e3746746bc7" 00:33:23.745 ], 00:33:23.745 "product_name": "Malloc disk", 00:33:23.745 "block_size": 512, 00:33:23.745 "num_blocks": 65536, 00:33:23.745 "uuid": "4aa086c0-f6b0-466b-b0e7-9e3746746bc7", 00:33:23.745 "assigned_rate_limits": { 00:33:23.745 "rw_ios_per_sec": 0, 00:33:23.745 "rw_mbytes_per_sec": 0, 00:33:23.745 "r_mbytes_per_sec": 0, 00:33:23.745 "w_mbytes_per_sec": 0 00:33:23.745 }, 00:33:23.745 "claimed": false, 00:33:23.745 "zoned": false, 00:33:23.745 "supported_io_types": { 00:33:23.745 "read": true, 00:33:23.745 "write": true, 00:33:23.745 "unmap": true, 00:33:23.745 "flush": true, 00:33:23.745 "reset": true, 00:33:23.745 "nvme_admin": false, 00:33:23.745 "nvme_io": false, 00:33:23.745 "nvme_io_md": false, 00:33:23.745 "write_zeroes": true, 00:33:23.745 "zcopy": true, 00:33:23.745 "get_zone_info": false, 00:33:23.745 "zone_management": false, 00:33:23.745 "zone_append": false, 00:33:23.745 "compare": false, 00:33:23.745 "compare_and_write": false, 00:33:23.745 "abort": true, 00:33:23.745 "seek_hole": false, 00:33:23.745 "seek_data": false, 00:33:23.745 "copy": true, 00:33:23.745 "nvme_iov_md": false 00:33:23.745 }, 00:33:23.745 "memory_domains": [ 00:33:23.745 { 00:33:23.745 "dma_device_id": "system", 
00:33:23.745 "dma_device_type": 1 00:33:23.745 }, 00:33:23.745 { 00:33:23.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:23.745 "dma_device_type": 2 00:33:23.745 } 00:33:23.745 ], 00:33:23.745 "driver_specific": {} 00:33:23.745 } 00:33:23.745 ] 00:33:23.745 19:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:33:23.745 19:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:33:23.745 19:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:33:23.745 19:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:33:24.003 BaseBdev4 00:33:24.003 19:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:33:24.003 19:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:33:24.003 19:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:24.003 19:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:33:24.003 19:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:24.003 19:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:24.003 19:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:24.261 19:00:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:33:24.520 [ 00:33:24.520 { 00:33:24.520 "name": "BaseBdev4", 00:33:24.520 "aliases": [ 00:33:24.520 "3f775c34-2753-40a3-b44a-433970a4fd5c" 00:33:24.520 ], 00:33:24.520 "product_name": "Malloc disk", 00:33:24.520 "block_size": 512, 00:33:24.520 "num_blocks": 65536, 00:33:24.520 "uuid": "3f775c34-2753-40a3-b44a-433970a4fd5c", 00:33:24.520 "assigned_rate_limits": { 00:33:24.520 "rw_ios_per_sec": 0, 00:33:24.520 "rw_mbytes_per_sec": 0, 00:33:24.520 "r_mbytes_per_sec": 0, 00:33:24.520 "w_mbytes_per_sec": 0 00:33:24.520 }, 00:33:24.520 "claimed": false, 00:33:24.520 "zoned": false, 00:33:24.520 "supported_io_types": { 00:33:24.520 "read": true, 00:33:24.520 "write": true, 00:33:24.520 "unmap": true, 00:33:24.520 "flush": true, 00:33:24.520 "reset": true, 00:33:24.520 "nvme_admin": false, 00:33:24.520 "nvme_io": false, 00:33:24.520 "nvme_io_md": false, 00:33:24.520 "write_zeroes": true, 00:33:24.520 "zcopy": true, 00:33:24.520 "get_zone_info": false, 00:33:24.520 "zone_management": false, 00:33:24.520 "zone_append": false, 00:33:24.520 "compare": false, 00:33:24.520 "compare_and_write": false, 00:33:24.520 "abort": true, 00:33:24.520 "seek_hole": false, 00:33:24.520 "seek_data": false, 00:33:24.520 "copy": true, 00:33:24.520 "nvme_iov_md": false 00:33:24.520 }, 00:33:24.520 "memory_domains": [ 00:33:24.520 { 00:33:24.520 "dma_device_id": "system", 00:33:24.520 "dma_device_type": 1 00:33:24.520 }, 00:33:24.520 { 00:33:24.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:24.520 "dma_device_type": 2 00:33:24.520 } 00:33:24.520 ], 00:33:24.520 "driver_specific": {} 00:33:24.520 } 00:33:24.520 ] 00:33:24.520 19:00:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:33:24.520 19:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:33:24.520 19:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:33:24.520 19:00:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:33:24.520 [2024-07-25 19:00:25.021789] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:24.520 [2024-07-25 19:00:25.021852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:24.520 [2024-07-25 19:00:25.021873] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:24.520 [2024-07-25 19:00:25.023661] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:24.520 [2024-07-25 19:00:25.023714] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:24.520 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:24.520 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:24.520 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:24.520 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:24.520 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:24.520 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:24.520 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:24.520 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:24.520 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:24.520 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:24.520 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:24.520 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:24.779 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:24.779 "name": "Existed_Raid", 00:33:24.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:24.779 "strip_size_kb": 64, 00:33:24.779 "state": "configuring", 00:33:24.779 "raid_level": "raid5f", 00:33:24.779 "superblock": false, 00:33:24.779 "num_base_bdevs": 4, 00:33:24.779 "num_base_bdevs_discovered": 3, 00:33:24.779 "num_base_bdevs_operational": 4, 00:33:24.779 "base_bdevs_list": [ 00:33:24.779 { 00:33:24.779 "name": "BaseBdev1", 00:33:24.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:24.779 "is_configured": false, 00:33:24.779 "data_offset": 0, 00:33:24.779 "data_size": 0 00:33:24.779 }, 00:33:24.779 { 00:33:24.779 "name": "BaseBdev2", 00:33:24.779 "uuid": "ae7d89bc-e7a5-48cc-afd4-5c8375cadba9", 00:33:24.779 "is_configured": true, 00:33:24.779 "data_offset": 0, 
00:33:24.779 "data_size": 65536 00:33:24.779 }, 00:33:24.779 { 00:33:24.779 "name": "BaseBdev3", 00:33:24.779 "uuid": "4aa086c0-f6b0-466b-b0e7-9e3746746bc7", 00:33:24.779 "is_configured": true, 00:33:24.779 "data_offset": 0, 00:33:24.779 "data_size": 65536 00:33:24.779 }, 00:33:24.779 { 00:33:24.779 "name": "BaseBdev4", 00:33:24.779 "uuid": "3f775c34-2753-40a3-b44a-433970a4fd5c", 00:33:24.779 "is_configured": true, 00:33:24.779 "data_offset": 0, 00:33:24.779 "data_size": 65536 00:33:24.779 } 00:33:24.779 ] 00:33:24.779 }' 00:33:24.779 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:24.779 19:00:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.346 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:33:25.346 [2024-07-25 19:00:25.905956] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:25.346 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:25.346 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:25.346 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:25.346 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:25.346 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:25.346 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:25.346 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:25.346 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:25.346 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:25.346 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:25.604 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:25.604 19:00:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:25.604 19:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:25.604 "name": "Existed_Raid", 00:33:25.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:25.604 "strip_size_kb": 64, 00:33:25.604 "state": "configuring", 00:33:25.604 "raid_level": "raid5f", 00:33:25.604 "superblock": false, 00:33:25.604 "num_base_bdevs": 4, 00:33:25.604 "num_base_bdevs_discovered": 2, 00:33:25.604 "num_base_bdevs_operational": 4, 00:33:25.604 "base_bdevs_list": [ 00:33:25.604 { 00:33:25.604 "name": "BaseBdev1", 00:33:25.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:25.604 "is_configured": false, 00:33:25.604 "data_offset": 0, 00:33:25.604 "data_size": 0 00:33:25.604 }, 00:33:25.604 { 00:33:25.604 "name": null, 00:33:25.604 "uuid": "ae7d89bc-e7a5-48cc-afd4-5c8375cadba9", 00:33:25.604 "is_configured": false, 00:33:25.604 "data_offset": 0, 00:33:25.604 "data_size": 65536 00:33:25.604 }, 00:33:25.604 { 00:33:25.604 "name": "BaseBdev3", 00:33:25.604 "uuid": 
"4aa086c0-f6b0-466b-b0e7-9e3746746bc7", 00:33:25.604 "is_configured": true, 00:33:25.604 "data_offset": 0, 00:33:25.604 "data_size": 65536 00:33:25.604 }, 00:33:25.604 { 00:33:25.604 "name": "BaseBdev4", 00:33:25.604 "uuid": "3f775c34-2753-40a3-b44a-433970a4fd5c", 00:33:25.604 "is_configured": true, 00:33:25.604 "data_offset": 0, 00:33:25.604 "data_size": 65536 00:33:25.604 } 00:33:25.604 ] 00:33:25.604 }' 00:33:25.604 19:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:25.604 19:00:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.170 19:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:26.170 19:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:26.429 19:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:33:26.429 19:00:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:33:26.687 [2024-07-25 19:00:27.223863] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:26.687 BaseBdev1 00:33:26.687 19:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:33:26.687 19:00:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:33:26.687 19:00:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:26.687 19:00:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:33:26.687 19:00:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:26.687 19:00:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:26.687 19:00:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:26.946 19:00:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:27.205 [ 00:33:27.205 { 00:33:27.205 "name": "BaseBdev1", 00:33:27.205 "aliases": [ 00:33:27.205 "955d38b0-6c14-4cfe-80d4-9b08b8a8d4b6" 00:33:27.205 ], 00:33:27.205 "product_name": "Malloc disk", 00:33:27.205 "block_size": 512, 00:33:27.205 "num_blocks": 65536, 00:33:27.205 "uuid": "955d38b0-6c14-4cfe-80d4-9b08b8a8d4b6", 00:33:27.205 "assigned_rate_limits": { 00:33:27.205 "rw_ios_per_sec": 0, 00:33:27.205 "rw_mbytes_per_sec": 0, 00:33:27.205 "r_mbytes_per_sec": 0, 00:33:27.205 "w_mbytes_per_sec": 0 00:33:27.205 }, 00:33:27.205 "claimed": true, 00:33:27.205 "claim_type": "exclusive_write", 00:33:27.205 "zoned": false, 00:33:27.205 "supported_io_types": { 00:33:27.205 "read": true, 00:33:27.205 "write": true, 00:33:27.205 "unmap": true, 00:33:27.205 "flush": true, 00:33:27.205 "reset": true, 00:33:27.205 "nvme_admin": false, 00:33:27.205 "nvme_io": false, 00:33:27.205 "nvme_io_md": false, 00:33:27.205 "write_zeroes": true, 00:33:27.205 "zcopy": true, 00:33:27.205 "get_zone_info": false, 00:33:27.205 "zone_management": false, 00:33:27.205 "zone_append": false, 
00:33:27.205 "compare": false, 00:33:27.205 "compare_and_write": false, 00:33:27.205 "abort": true, 00:33:27.205 "seek_hole": false, 00:33:27.205 "seek_data": false, 00:33:27.205 "copy": true, 00:33:27.205 "nvme_iov_md": false 00:33:27.205 }, 00:33:27.205 "memory_domains": [ 00:33:27.205 { 00:33:27.205 "dma_device_id": "system", 00:33:27.205 "dma_device_type": 1 00:33:27.205 }, 00:33:27.205 { 00:33:27.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:27.205 "dma_device_type": 2 00:33:27.205 } 00:33:27.205 ], 00:33:27.206 "driver_specific": {} 00:33:27.206 } 00:33:27.206 ] 00:33:27.206 19:00:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:33:27.206 19:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:27.206 19:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:27.206 19:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:27.206 19:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:27.206 19:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:27.206 19:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:27.206 19:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:27.206 19:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:27.206 19:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:27.206 19:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:27.206 19:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:27.206 19:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:27.206 19:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:27.206 "name": "Existed_Raid", 00:33:27.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:27.206 "strip_size_kb": 64, 00:33:27.206 "state": "configuring", 00:33:27.206 "raid_level": "raid5f", 00:33:27.206 "superblock": false, 00:33:27.206 "num_base_bdevs": 4, 00:33:27.206 "num_base_bdevs_discovered": 3, 00:33:27.206 "num_base_bdevs_operational": 4, 00:33:27.206 "base_bdevs_list": [ 00:33:27.206 { 00:33:27.206 "name": "BaseBdev1", 00:33:27.206 "uuid": "955d38b0-6c14-4cfe-80d4-9b08b8a8d4b6", 00:33:27.206 "is_configured": true, 00:33:27.206 "data_offset": 0, 00:33:27.206 "data_size": 65536 00:33:27.206 }, 00:33:27.206 { 00:33:27.206 "name": null, 00:33:27.206 "uuid": "ae7d89bc-e7a5-48cc-afd4-5c8375cadba9", 00:33:27.206 "is_configured": false, 00:33:27.206 "data_offset": 0, 00:33:27.206 "data_size": 65536 00:33:27.206 }, 00:33:27.206 { 00:33:27.206 "name": "BaseBdev3", 00:33:27.206 "uuid": "4aa086c0-f6b0-466b-b0e7-9e3746746bc7", 00:33:27.206 "is_configured": true, 00:33:27.206 "data_offset": 0, 00:33:27.206 "data_size": 65536 00:33:27.206 }, 00:33:27.206 { 00:33:27.206 "name": "BaseBdev4", 00:33:27.206 "uuid": "3f775c34-2753-40a3-b44a-433970a4fd5c", 00:33:27.206 "is_configured": true, 00:33:27.206 "data_offset": 0, 00:33:27.206 
"data_size": 65536 00:33:27.206 } 00:33:27.206 ] 00:33:27.206 }' 00:33:27.206 19:00:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:27.206 19:00:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.774 19:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:27.774 19:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:28.034 19:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:33:28.034 19:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:33:28.292 [2024-07-25 19:00:28.772357] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:28.292 19:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:28.293 19:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:28.293 19:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:28.293 19:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:28.293 19:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:28.293 19:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:28.293 19:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:28.293 19:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:28.293 19:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:28.293 19:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:28.293 19:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:28.293 19:00:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:28.551 19:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:28.551 "name": "Existed_Raid", 00:33:28.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:28.551 "strip_size_kb": 64, 00:33:28.551 "state": "configuring", 00:33:28.551 "raid_level": "raid5f", 00:33:28.551 "superblock": false, 00:33:28.551 "num_base_bdevs": 4, 00:33:28.551 "num_base_bdevs_discovered": 2, 00:33:28.551 "num_base_bdevs_operational": 4, 00:33:28.551 "base_bdevs_list": [ 00:33:28.551 { 00:33:28.551 "name": "BaseBdev1", 00:33:28.551 "uuid": "955d38b0-6c14-4cfe-80d4-9b08b8a8d4b6", 00:33:28.551 "is_configured": true, 00:33:28.551 "data_offset": 0, 00:33:28.551 "data_size": 65536 00:33:28.551 }, 00:33:28.551 { 00:33:28.551 "name": null, 00:33:28.551 "uuid": "ae7d89bc-e7a5-48cc-afd4-5c8375cadba9", 00:33:28.551 "is_configured": false, 00:33:28.551 "data_offset": 0, 00:33:28.551 "data_size": 65536 00:33:28.551 }, 00:33:28.551 { 00:33:28.551 "name": null, 00:33:28.551 "uuid": 
"4aa086c0-f6b0-466b-b0e7-9e3746746bc7", 00:33:28.551 "is_configured": false, 00:33:28.551 "data_offset": 0, 00:33:28.551 "data_size": 65536 00:33:28.551 }, 00:33:28.551 { 00:33:28.551 "name": "BaseBdev4", 00:33:28.551 "uuid": "3f775c34-2753-40a3-b44a-433970a4fd5c", 00:33:28.551 "is_configured": true, 00:33:28.551 "data_offset": 0, 00:33:28.551 "data_size": 65536 00:33:28.551 } 00:33:28.551 ] 00:33:28.551 }' 00:33:28.551 19:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:28.551 19:00:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.118 19:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:29.118 19:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:29.376 19:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:33:29.376 19:00:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:33:29.634 [2024-07-25 19:00:29.988540] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:29.634 19:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:29.634 19:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:29.634 19:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:29.634 19:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:29.634 19:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:29.634 19:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:29.634 19:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:29.634 19:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:29.634 19:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:29.634 19:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:29.634 19:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:29.634 19:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:29.892 19:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:29.892 "name": "Existed_Raid", 00:33:29.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:29.892 "strip_size_kb": 64, 00:33:29.892 "state": "configuring", 00:33:29.892 "raid_level": "raid5f", 00:33:29.892 "superblock": false, 00:33:29.892 "num_base_bdevs": 4, 00:33:29.892 "num_base_bdevs_discovered": 3, 00:33:29.892 "num_base_bdevs_operational": 4, 00:33:29.892 "base_bdevs_list": [ 00:33:29.892 { 00:33:29.892 "name": "BaseBdev1", 00:33:29.892 "uuid": "955d38b0-6c14-4cfe-80d4-9b08b8a8d4b6", 00:33:29.893 "is_configured": true, 00:33:29.893 
"data_offset": 0, 00:33:29.893 "data_size": 65536 00:33:29.893 }, 00:33:29.893 { 00:33:29.893 "name": null, 00:33:29.893 "uuid": "ae7d89bc-e7a5-48cc-afd4-5c8375cadba9", 00:33:29.893 "is_configured": false, 00:33:29.893 "data_offset": 0, 00:33:29.893 "data_size": 65536 00:33:29.893 }, 00:33:29.893 { 00:33:29.893 "name": "BaseBdev3", 00:33:29.893 "uuid": "4aa086c0-f6b0-466b-b0e7-9e3746746bc7", 00:33:29.893 "is_configured": true, 00:33:29.893 "data_offset": 0, 00:33:29.893 "data_size": 65536 00:33:29.893 }, 00:33:29.893 { 00:33:29.893 "name": "BaseBdev4", 00:33:29.893 "uuid": "3f775c34-2753-40a3-b44a-433970a4fd5c", 00:33:29.893 "is_configured": true, 00:33:29.893 "data_offset": 0, 00:33:29.893 "data_size": 65536 00:33:29.893 } 00:33:29.893 ] 00:33:29.893 }' 00:33:29.893 19:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:29.893 19:00:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:30.459 19:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:30.459 19:00:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:30.459 19:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:33:30.459 19:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:33:30.718 [2024-07-25 19:00:31.271243] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:30.976 19:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:30.976 19:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:30.976 19:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:30.976 19:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:30.976 19:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:30.976 19:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:30.976 19:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:30.976 19:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:30.976 19:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:30.976 19:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:30.976 19:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:30.976 19:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:31.234 19:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:31.234 "name": "Existed_Raid", 00:33:31.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:31.234 "strip_size_kb": 64, 00:33:31.234 "state": "configuring", 00:33:31.234 "raid_level": "raid5f", 00:33:31.234 "superblock": false, 00:33:31.234 
"num_base_bdevs": 4, 00:33:31.234 "num_base_bdevs_discovered": 2, 00:33:31.234 "num_base_bdevs_operational": 4, 00:33:31.234 "base_bdevs_list": [ 00:33:31.234 { 00:33:31.234 "name": null, 00:33:31.234 "uuid": "955d38b0-6c14-4cfe-80d4-9b08b8a8d4b6", 00:33:31.234 "is_configured": false, 00:33:31.234 "data_offset": 0, 00:33:31.234 "data_size": 65536 00:33:31.234 }, 00:33:31.234 { 00:33:31.234 "name": null, 00:33:31.234 "uuid": "ae7d89bc-e7a5-48cc-afd4-5c8375cadba9", 00:33:31.234 "is_configured": false, 00:33:31.234 "data_offset": 0, 00:33:31.234 "data_size": 65536 00:33:31.234 }, 00:33:31.234 { 00:33:31.234 "name": "BaseBdev3", 00:33:31.235 "uuid": "4aa086c0-f6b0-466b-b0e7-9e3746746bc7", 00:33:31.235 "is_configured": true, 00:33:31.235 "data_offset": 0, 00:33:31.235 "data_size": 65536 00:33:31.235 }, 00:33:31.235 { 00:33:31.235 "name": "BaseBdev4", 00:33:31.235 "uuid": "3f775c34-2753-40a3-b44a-433970a4fd5c", 00:33:31.235 "is_configured": true, 00:33:31.235 "data_offset": 0, 00:33:31.235 "data_size": 65536 00:33:31.235 } 00:33:31.235 ] 00:33:31.235 }' 00:33:31.235 19:00:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:31.235 19:00:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:31.800 19:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:31.800 19:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:31.800 19:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:33:31.800 19:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:33:32.056 [2024-07-25 19:00:32.499056] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:32.056 19:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:32.056 19:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:32.056 19:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:32.056 19:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:32.056 19:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:32.056 19:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:32.056 19:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:32.056 19:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:32.056 19:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:32.056 19:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:32.056 19:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:32.056 19:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:32.313 19:00:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:32.313 "name": "Existed_Raid", 00:33:32.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:32.313 "strip_size_kb": 64, 00:33:32.313 "state": "configuring", 00:33:32.313 "raid_level": "raid5f", 00:33:32.313 "superblock": false, 00:33:32.313 "num_base_bdevs": 4, 00:33:32.313 "num_base_bdevs_discovered": 3, 00:33:32.313 "num_base_bdevs_operational": 4, 00:33:32.313 "base_bdevs_list": [ 00:33:32.313 { 00:33:32.313 "name": null, 00:33:32.313 "uuid": "955d38b0-6c14-4cfe-80d4-9b08b8a8d4b6", 00:33:32.313 "is_configured": false, 00:33:32.313 "data_offset": 0, 00:33:32.313 "data_size": 65536 00:33:32.313 }, 00:33:32.313 { 00:33:32.313 "name": "BaseBdev2", 00:33:32.313 "uuid": "ae7d89bc-e7a5-48cc-afd4-5c8375cadba9", 00:33:32.313 "is_configured": true, 00:33:32.313 "data_offset": 0, 00:33:32.313 "data_size": 65536 00:33:32.313 }, 00:33:32.313 { 00:33:32.313 "name": "BaseBdev3", 00:33:32.313 "uuid": "4aa086c0-f6b0-466b-b0e7-9e3746746bc7", 00:33:32.313 "is_configured": true, 00:33:32.313 "data_offset": 0, 00:33:32.313 "data_size": 65536 00:33:32.313 }, 00:33:32.313 { 00:33:32.313 "name": "BaseBdev4", 00:33:32.313 "uuid": "3f775c34-2753-40a3-b44a-433970a4fd5c", 00:33:32.313 "is_configured": true, 00:33:32.313 "data_offset": 0, 00:33:32.313 "data_size": 65536 00:33:32.313 } 00:33:32.313 ] 00:33:32.313 }' 00:33:32.313 19:00:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:32.313 19:00:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:32.879 19:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:32.879 19:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:33.145 19:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:33:33.145 19:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:33:33.145 19:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:33.405 19:00:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 955d38b0-6c14-4cfe-80d4-9b08b8a8d4b6 00:33:33.405 [2024-07-25 19:00:33.924068] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:33:33.405 [2024-07-25 19:00:33.924381] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:33:33.405 [2024-07-25 19:00:33.924457] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:33:33.405 [2024-07-25 19:00:33.924658] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:33.405 [2024-07-25 19:00:33.929463] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:33:33.405 [2024-07-25 19:00:33.929614] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:33:33.405 [2024-07-25 19:00:33.930001] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:33.405 NewBaseBdev 00:33:33.405 19:00:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:33:33.405 19:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:33:33.405 19:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:33.405 19:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:33:33.405 19:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:33.405 19:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:33.405 19:00:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:33.663 19:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:33:33.921 [ 00:33:33.921 { 00:33:33.921 "name": "NewBaseBdev", 00:33:33.921 "aliases": [ 00:33:33.921 "955d38b0-6c14-4cfe-80d4-9b08b8a8d4b6" 00:33:33.921 ], 00:33:33.921 "product_name": "Malloc disk", 00:33:33.921 "block_size": 512, 00:33:33.921 "num_blocks": 65536, 00:33:33.921 "uuid": "955d38b0-6c14-4cfe-80d4-9b08b8a8d4b6", 00:33:33.921 "assigned_rate_limits": { 00:33:33.921 "rw_ios_per_sec": 0, 00:33:33.921 "rw_mbytes_per_sec": 0, 00:33:33.921 "r_mbytes_per_sec": 0, 00:33:33.921 "w_mbytes_per_sec": 0 00:33:33.921 }, 00:33:33.921 "claimed": true, 00:33:33.921 "claim_type": "exclusive_write", 00:33:33.921 "zoned": false, 00:33:33.921 "supported_io_types": { 00:33:33.921 "read": true, 00:33:33.921 "write": true, 00:33:33.921 "unmap": true, 00:33:33.921 "flush": true, 00:33:33.921 "reset": true, 00:33:33.921 "nvme_admin": false, 00:33:33.921 "nvme_io": false, 00:33:33.921 "nvme_io_md": false, 00:33:33.921 "write_zeroes": true, 00:33:33.921 "zcopy": true, 00:33:33.921 "get_zone_info": false, 00:33:33.922 "zone_management": false, 00:33:33.922 "zone_append": false, 00:33:33.922 "compare": false, 00:33:33.922 "compare_and_write": false, 00:33:33.922 "abort": true, 00:33:33.922 "seek_hole": false, 00:33:33.922 "seek_data": false, 00:33:33.922 "copy": true, 00:33:33.922 "nvme_iov_md": false 00:33:33.922 }, 00:33:33.922 "memory_domains": [ 00:33:33.922 { 00:33:33.922 "dma_device_id": "system", 00:33:33.922 "dma_device_type": 1 00:33:33.922 }, 00:33:33.922 { 00:33:33.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:33.922 "dma_device_type": 2 00:33:33.922 } 00:33:33.922 ], 00:33:33.922 "driver_specific": {} 00:33:33.922 } 00:33:33.922 ] 00:33:33.922 19:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:33:33.922 19:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:33:33.922 19:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:33.922 19:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:33.922 19:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:33.922 19:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:33.922 19:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:33:33.922 19:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:33.922 19:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:33.922 19:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:33.922 19:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:33.922 19:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:33.922 19:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:34.180 19:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:34.180 "name": "Existed_Raid", 00:33:34.180 "uuid": "e7673662-5cee-4e50-9a02-a3444e714f46", 00:33:34.180 "strip_size_kb": 64, 00:33:34.180 "state": "online", 00:33:34.180 "raid_level": "raid5f", 00:33:34.180 "superblock": false, 00:33:34.180 "num_base_bdevs": 4, 00:33:34.180 "num_base_bdevs_discovered": 4, 00:33:34.180 "num_base_bdevs_operational": 4, 00:33:34.180 "base_bdevs_list": [ 00:33:34.180 { 00:33:34.180 "name": "NewBaseBdev", 00:33:34.180 "uuid": "955d38b0-6c14-4cfe-80d4-9b08b8a8d4b6", 00:33:34.180 "is_configured": true, 00:33:34.180 "data_offset": 0, 00:33:34.180 "data_size": 65536 00:33:34.180 }, 00:33:34.180 { 00:33:34.180 "name": "BaseBdev2", 00:33:34.180 "uuid": "ae7d89bc-e7a5-48cc-afd4-5c8375cadba9", 00:33:34.180 "is_configured": true, 00:33:34.180 "data_offset": 0, 00:33:34.180 "data_size": 65536 00:33:34.180 }, 00:33:34.180 { 00:33:34.180 "name": "BaseBdev3", 00:33:34.180 "uuid": "4aa086c0-f6b0-466b-b0e7-9e3746746bc7", 00:33:34.180 "is_configured": true, 00:33:34.180 "data_offset": 0, 00:33:34.180 "data_size": 65536 00:33:34.180 }, 00:33:34.180 { 00:33:34.180 "name": "BaseBdev4", 00:33:34.180 "uuid": "3f775c34-2753-40a3-b44a-433970a4fd5c", 00:33:34.180 "is_configured": true, 00:33:34.180 "data_offset": 0, 00:33:34.180 "data_size": 65536 00:33:34.180 } 00:33:34.180 ] 00:33:34.180 }' 00:33:34.180 19:00:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:34.180 19:00:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:34.748 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:33:34.748 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:33:34.748 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:34.748 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:34.748 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:34.748 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:33:34.748 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:34.748 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:33:34.748 [2024-07-25 19:00:35.274621] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:34.748 19:00:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:34.748 "name": "Existed_Raid", 00:33:34.748 "aliases": [ 00:33:34.748 "e7673662-5cee-4e50-9a02-a3444e714f46" 00:33:34.748 ], 00:33:34.748 "product_name": "Raid Volume", 00:33:34.748 "block_size": 512, 00:33:34.748 "num_blocks": 196608, 00:33:34.748 "uuid": "e7673662-5cee-4e50-9a02-a3444e714f46", 00:33:34.748 "assigned_rate_limits": { 00:33:34.748 "rw_ios_per_sec": 0, 00:33:34.748 "rw_mbytes_per_sec": 0, 00:33:34.748 "r_mbytes_per_sec": 0, 00:33:34.748 "w_mbytes_per_sec": 0 00:33:34.748 }, 00:33:34.748 "claimed": false, 00:33:34.748 "zoned": false, 00:33:34.748 "supported_io_types": { 00:33:34.748 "read": true, 00:33:34.748 "write": true, 00:33:34.748 "unmap": false, 00:33:34.748 "flush": false, 00:33:34.748 "reset": true, 00:33:34.748 "nvme_admin": false, 00:33:34.748 "nvme_io": false, 00:33:34.748 "nvme_io_md": false, 00:33:34.748 "write_zeroes": true, 00:33:34.748 "zcopy": false, 00:33:34.748 "get_zone_info": false, 00:33:34.748 "zone_management": false, 00:33:34.748 "zone_append": false, 00:33:34.748 "compare": false, 00:33:34.748 "compare_and_write": false, 00:33:34.748 "abort": false, 00:33:34.748 "seek_hole": false, 00:33:34.748 "seek_data": false, 00:33:34.748 "copy": false, 00:33:34.748 "nvme_iov_md": false 00:33:34.748 }, 00:33:34.748 "driver_specific": { 00:33:34.748 "raid": { 00:33:34.748 "uuid": "e7673662-5cee-4e50-9a02-a3444e714f46", 00:33:34.748 "strip_size_kb": 64, 00:33:34.748 "state": "online", 00:33:34.748 "raid_level": "raid5f", 00:33:34.748 "superblock": false, 00:33:34.748 "num_base_bdevs": 4, 00:33:34.748 "num_base_bdevs_discovered": 4, 00:33:34.748 "num_base_bdevs_operational": 4, 00:33:34.748 "base_bdevs_list": [ 00:33:34.748 { 00:33:34.748 "name": "NewBaseBdev", 00:33:34.748 "uuid": "955d38b0-6c14-4cfe-80d4-9b08b8a8d4b6", 00:33:34.748 "is_configured": true, 00:33:34.748 "data_offset": 0, 00:33:34.748 "data_size": 65536 00:33:34.748 }, 00:33:34.748 { 00:33:34.748 "name": "BaseBdev2", 00:33:34.748 "uuid": "ae7d89bc-e7a5-48cc-afd4-5c8375cadba9", 00:33:34.748 "is_configured": true, 00:33:34.748 "data_offset": 0, 00:33:34.748 "data_size": 65536 00:33:34.748 }, 00:33:34.748 { 00:33:34.748 "name": "BaseBdev3", 00:33:34.748 "uuid": "4aa086c0-f6b0-466b-b0e7-9e3746746bc7", 00:33:34.748 "is_configured": true, 00:33:34.748 "data_offset": 0, 00:33:34.748 "data_size": 65536 00:33:34.748 }, 00:33:34.748 { 00:33:34.748 "name": "BaseBdev4", 00:33:34.748 "uuid": "3f775c34-2753-40a3-b44a-433970a4fd5c", 00:33:34.748 "is_configured": true, 00:33:34.748 "data_offset": 0, 00:33:34.748 "data_size": 65536 00:33:34.748 } 00:33:34.748 ] 00:33:34.748 } 00:33:34.748 } 00:33:34.748 }' 00:33:34.748 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:34.748 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:33:34.748 BaseBdev2 00:33:34.748 BaseBdev3 00:33:34.748 BaseBdev4' 00:33:34.748 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:35.006 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:33:35.007 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:35.007 19:00:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:35.007 "name": "NewBaseBdev", 00:33:35.007 "aliases": [ 00:33:35.007 "955d38b0-6c14-4cfe-80d4-9b08b8a8d4b6" 00:33:35.007 ], 00:33:35.007 "product_name": "Malloc disk", 00:33:35.007 "block_size": 512, 00:33:35.007 "num_blocks": 65536, 00:33:35.007 "uuid": "955d38b0-6c14-4cfe-80d4-9b08b8a8d4b6", 00:33:35.007 "assigned_rate_limits": { 00:33:35.007 "rw_ios_per_sec": 0, 00:33:35.007 "rw_mbytes_per_sec": 0, 00:33:35.007 "r_mbytes_per_sec": 0, 00:33:35.007 "w_mbytes_per_sec": 0 00:33:35.007 }, 00:33:35.007 "claimed": true, 00:33:35.007 "claim_type": "exclusive_write", 00:33:35.007 "zoned": false, 00:33:35.007 "supported_io_types": { 00:33:35.007 "read": true, 00:33:35.007 "write": true, 00:33:35.007 "unmap": true, 00:33:35.007 "flush": true, 00:33:35.007 "reset": true, 00:33:35.007 "nvme_admin": false, 00:33:35.007 "nvme_io": false, 00:33:35.007 "nvme_io_md": false, 00:33:35.007 "write_zeroes": true, 00:33:35.007 "zcopy": true, 00:33:35.007 "get_zone_info": false, 00:33:35.007 "zone_management": false, 00:33:35.007 "zone_append": false, 00:33:35.007 "compare": false, 00:33:35.007 "compare_and_write": false, 00:33:35.007 "abort": true, 00:33:35.007 "seek_hole": false, 00:33:35.007 "seek_data": false, 00:33:35.007 "copy": true, 00:33:35.007 "nvme_iov_md": false 00:33:35.007 }, 00:33:35.007 "memory_domains": [ 00:33:35.007 { 00:33:35.007 "dma_device_id": "system", 00:33:35.007 "dma_device_type": 1 00:33:35.007 }, 00:33:35.007 { 00:33:35.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:35.007 "dma_device_type": 2 00:33:35.007 } 00:33:35.007 ], 00:33:35.007 "driver_specific": {} 00:33:35.007 }' 00:33:35.007 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:35.007 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:35.265 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:35.265 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:35.265 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:35.265 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:35.265 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:35.265 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:35.265 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:35.265 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:35.265 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:35.523 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:35.523 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:35.523 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:33:35.523 19:00:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:35.781 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:35.781 "name": "BaseBdev2", 00:33:35.781 "aliases": [ 00:33:35.781 "ae7d89bc-e7a5-48cc-afd4-5c8375cadba9" 
00:33:35.781 ], 00:33:35.781 "product_name": "Malloc disk", 00:33:35.781 "block_size": 512, 00:33:35.781 "num_blocks": 65536, 00:33:35.781 "uuid": "ae7d89bc-e7a5-48cc-afd4-5c8375cadba9", 00:33:35.781 "assigned_rate_limits": { 00:33:35.781 "rw_ios_per_sec": 0, 00:33:35.781 "rw_mbytes_per_sec": 0, 00:33:35.781 "r_mbytes_per_sec": 0, 00:33:35.781 "w_mbytes_per_sec": 0 00:33:35.781 }, 00:33:35.781 "claimed": true, 00:33:35.781 "claim_type": "exclusive_write", 00:33:35.781 "zoned": false, 00:33:35.781 "supported_io_types": { 00:33:35.781 "read": true, 00:33:35.781 "write": true, 00:33:35.781 "unmap": true, 00:33:35.781 "flush": true, 00:33:35.781 "reset": true, 00:33:35.781 "nvme_admin": false, 00:33:35.781 "nvme_io": false, 00:33:35.781 "nvme_io_md": false, 00:33:35.781 "write_zeroes": true, 00:33:35.781 "zcopy": true, 00:33:35.781 "get_zone_info": false, 00:33:35.781 "zone_management": false, 00:33:35.781 "zone_append": false, 00:33:35.781 "compare": false, 00:33:35.781 "compare_and_write": false, 00:33:35.781 "abort": true, 00:33:35.781 "seek_hole": false, 00:33:35.781 "seek_data": false, 00:33:35.781 "copy": true, 00:33:35.781 "nvme_iov_md": false 00:33:35.781 }, 00:33:35.781 "memory_domains": [ 00:33:35.781 { 00:33:35.781 "dma_device_id": "system", 00:33:35.781 "dma_device_type": 1 00:33:35.781 }, 00:33:35.781 { 00:33:35.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:35.781 "dma_device_type": 2 00:33:35.781 } 00:33:35.781 ], 00:33:35.781 "driver_specific": {} 00:33:35.781 }' 00:33:35.781 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:35.781 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:35.781 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:35.781 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:35.781 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:35.781 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:35.781 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:36.040 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:36.040 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:36.040 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:36.040 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:36.040 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:36.040 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:36.040 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:33:36.040 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:36.299 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:36.299 "name": "BaseBdev3", 00:33:36.299 "aliases": [ 00:33:36.299 "4aa086c0-f6b0-466b-b0e7-9e3746746bc7" 00:33:36.299 ], 00:33:36.299 "product_name": "Malloc disk", 00:33:36.299 "block_size": 512, 00:33:36.299 "num_blocks": 65536, 00:33:36.299 "uuid": 
"4aa086c0-f6b0-466b-b0e7-9e3746746bc7", 00:33:36.299 "assigned_rate_limits": { 00:33:36.299 "rw_ios_per_sec": 0, 00:33:36.299 "rw_mbytes_per_sec": 0, 00:33:36.299 "r_mbytes_per_sec": 0, 00:33:36.299 "w_mbytes_per_sec": 0 00:33:36.299 }, 00:33:36.299 "claimed": true, 00:33:36.299 "claim_type": "exclusive_write", 00:33:36.299 "zoned": false, 00:33:36.299 "supported_io_types": { 00:33:36.299 "read": true, 00:33:36.299 "write": true, 00:33:36.299 "unmap": true, 00:33:36.299 "flush": true, 00:33:36.299 "reset": true, 00:33:36.299 "nvme_admin": false, 00:33:36.299 "nvme_io": false, 00:33:36.299 "nvme_io_md": false, 00:33:36.299 "write_zeroes": true, 00:33:36.299 "zcopy": true, 00:33:36.299 "get_zone_info": false, 00:33:36.299 "zone_management": false, 00:33:36.299 "zone_append": false, 00:33:36.299 "compare": false, 00:33:36.299 "compare_and_write": false, 00:33:36.299 "abort": true, 00:33:36.299 "seek_hole": false, 00:33:36.299 "seek_data": false, 00:33:36.299 "copy": true, 00:33:36.299 "nvme_iov_md": false 00:33:36.299 }, 00:33:36.299 "memory_domains": [ 00:33:36.299 { 00:33:36.299 "dma_device_id": "system", 00:33:36.299 "dma_device_type": 1 00:33:36.299 }, 00:33:36.299 { 00:33:36.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:36.299 "dma_device_type": 2 00:33:36.299 } 00:33:36.299 ], 00:33:36.299 "driver_specific": {} 00:33:36.299 }' 00:33:36.299 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:36.299 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:36.299 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:36.299 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:36.558 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:36.559 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:36.559 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:36.559 19:00:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:36.559 19:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:36.559 19:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:36.559 19:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:36.559 19:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:36.559 19:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:36.559 19:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:33:36.559 19:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:36.818 19:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:36.818 "name": "BaseBdev4", 00:33:36.818 "aliases": [ 00:33:36.818 "3f775c34-2753-40a3-b44a-433970a4fd5c" 00:33:36.818 ], 00:33:36.818 "product_name": "Malloc disk", 00:33:36.818 "block_size": 512, 00:33:36.818 "num_blocks": 65536, 00:33:36.818 "uuid": "3f775c34-2753-40a3-b44a-433970a4fd5c", 00:33:36.818 "assigned_rate_limits": { 00:33:36.818 "rw_ios_per_sec": 0, 00:33:36.818 "rw_mbytes_per_sec": 0, 00:33:36.818 
"r_mbytes_per_sec": 0, 00:33:36.818 "w_mbytes_per_sec": 0 00:33:36.818 }, 00:33:36.818 "claimed": true, 00:33:36.818 "claim_type": "exclusive_write", 00:33:36.818 "zoned": false, 00:33:36.818 "supported_io_types": { 00:33:36.818 "read": true, 00:33:36.818 "write": true, 00:33:36.818 "unmap": true, 00:33:36.818 "flush": true, 00:33:36.818 "reset": true, 00:33:36.818 "nvme_admin": false, 00:33:36.818 "nvme_io": false, 00:33:36.818 "nvme_io_md": false, 00:33:36.818 "write_zeroes": true, 00:33:36.818 "zcopy": true, 00:33:36.818 "get_zone_info": false, 00:33:36.818 "zone_management": false, 00:33:36.818 "zone_append": false, 00:33:36.818 "compare": false, 00:33:36.818 "compare_and_write": false, 00:33:36.818 "abort": true, 00:33:36.818 "seek_hole": false, 00:33:36.818 "seek_data": false, 00:33:36.818 "copy": true, 00:33:36.818 "nvme_iov_md": false 00:33:36.818 }, 00:33:36.818 "memory_domains": [ 00:33:36.818 { 00:33:36.818 "dma_device_id": "system", 00:33:36.818 "dma_device_type": 1 00:33:36.818 }, 00:33:36.818 { 00:33:36.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:36.818 "dma_device_type": 2 00:33:36.818 } 00:33:36.818 ], 00:33:36.818 "driver_specific": {} 00:33:36.818 }' 00:33:37.076 19:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:37.076 19:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:37.076 19:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:37.076 19:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:37.076 19:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:37.076 19:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:37.076 19:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:37.076 19:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:37.076 19:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:37.076 19:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:37.335 19:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:37.335 19:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:37.335 19:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:37.594 [2024-07-25 19:00:37.966264] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:37.594 [2024-07-25 19:00:37.966453] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:37.594 [2024-07-25 19:00:37.966665] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:37.594 [2024-07-25 19:00:37.966955] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:37.594 [2024-07-25 19:00:37.967043] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:33:37.594 19:00:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 153251 00:33:37.594 19:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 153251 ']' 
00:33:37.594 19:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 153251 00:33:37.594 19:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:33:37.594 19:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:37.594 19:00:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 153251 00:33:37.594 19:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:37.594 19:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:37.594 19:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 153251' 00:33:37.594 killing process with pid 153251 00:33:37.594 19:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 153251 00:33:37.594 [2024-07-25 19:00:38.015225] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:37.594 19:00:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 153251 00:33:37.853 [2024-07-25 19:00:38.321019] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:38.885 ************************************ 00:33:38.885 END TEST raid5f_state_function_test 00:33:38.885 ************************************ 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:33:38.885 00:33:38.885 real 0m31.752s 00:33:38.885 user 0m56.956s 00:33:38.885 sys 0m5.315s 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:38.885 19:00:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:33:38.885 19:00:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:33:38.885 19:00:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:38.885 19:00:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:38.885 ************************************ 00:33:38.885 START TEST raid5f_state_function_test_sb 00:33:38.885 ************************************ 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs 
)) 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=154317 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 154317' 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:33:38.885 Process raid pid: 154317 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 154317 /var/tmp/spdk-raid.sock 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 154317 ']' 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:33:38.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:38.885 19:00:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:39.145 [2024-07-25 19:00:39.538696] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:33:39.145 [2024-07-25 19:00:39.539252] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:39.145 [2024-07-25 19:00:39.722309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.404 [2024-07-25 19:00:39.936697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:39.663 [2024-07-25 19:00:40.131371] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:39.923 19:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:39.923 19:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:33:39.923 19:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:33:40.182 [2024-07-25 19:00:40.647693] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:40.182 [2024-07-25 19:00:40.647966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:40.182 [2024-07-25 19:00:40.648056] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:40.182 [2024-07-25 19:00:40.648163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:40.182 [2024-07-25 19:00:40.648251] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:40.182 [2024-07-25 19:00:40.648307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:40.182 [2024-07-25 19:00:40.648377] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:33:40.182 [2024-07-25 19:00:40.648428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:33:40.182 19:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:40.182 19:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:40.182 19:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:40.182 19:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:40.182 19:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:40.182 19:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:40.182 19:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:40.182 19:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 
-- # local num_base_bdevs 00:33:40.183 19:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:40.183 19:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:40.183 19:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:40.183 19:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:40.441 19:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:40.441 "name": "Existed_Raid", 00:33:40.441 "uuid": "67bb909d-3a86-43bd-a1fd-7bbf9f2fdc94", 00:33:40.441 "strip_size_kb": 64, 00:33:40.441 "state": "configuring", 00:33:40.441 "raid_level": "raid5f", 00:33:40.441 "superblock": true, 00:33:40.441 "num_base_bdevs": 4, 00:33:40.441 "num_base_bdevs_discovered": 0, 00:33:40.441 "num_base_bdevs_operational": 4, 00:33:40.441 "base_bdevs_list": [ 00:33:40.441 { 00:33:40.441 "name": "BaseBdev1", 00:33:40.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.441 "is_configured": false, 00:33:40.441 "data_offset": 0, 00:33:40.441 "data_size": 0 00:33:40.441 }, 00:33:40.441 { 00:33:40.441 "name": "BaseBdev2", 00:33:40.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.441 "is_configured": false, 00:33:40.441 "data_offset": 0, 00:33:40.441 "data_size": 0 00:33:40.441 }, 00:33:40.441 { 00:33:40.441 "name": "BaseBdev3", 00:33:40.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.441 "is_configured": false, 00:33:40.441 "data_offset": 0, 00:33:40.441 "data_size": 0 00:33:40.441 }, 00:33:40.441 { 00:33:40.441 "name": "BaseBdev4", 00:33:40.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.441 "is_configured": false, 00:33:40.441 "data_offset": 0, 00:33:40.441 "data_size": 0 00:33:40.441 } 00:33:40.441 ] 00:33:40.441 }' 00:33:40.441 19:00:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:40.441 19:00:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:41.010 19:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:41.269 [2024-07-25 19:00:41.619731] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:41.269 [2024-07-25 19:00:41.619935] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:33:41.269 19:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:33:41.528 [2024-07-25 19:00:41.899832] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:41.528 [2024-07-25 19:00:41.900040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:41.528 [2024-07-25 19:00:41.900143] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:41.528 [2024-07-25 19:00:41.900267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:41.528 [2024-07-25 19:00:41.900340] bdev.c:8190:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev3 00:33:41.528 [2024-07-25 19:00:41.900412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:41.528 [2024-07-25 19:00:41.900492] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:33:41.528 [2024-07-25 19:00:41.900555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:33:41.528 19:00:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:33:41.787 [2024-07-25 19:00:42.111397] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:41.788 BaseBdev1 00:33:41.788 19:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:33:41.788 19:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:33:41.788 19:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:41.788 19:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:33:41.788 19:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:41.788 19:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:41.788 19:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:42.047 19:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:42.047 [ 00:33:42.047 { 00:33:42.047 "name": "BaseBdev1", 00:33:42.047 "aliases": [ 00:33:42.047 "2fdca6ee-8e4c-4a4a-8d92-a20be20aaa9a" 00:33:42.047 ], 00:33:42.047 "product_name": "Malloc disk", 00:33:42.047 "block_size": 512, 00:33:42.047 "num_blocks": 65536, 00:33:42.047 "uuid": "2fdca6ee-8e4c-4a4a-8d92-a20be20aaa9a", 00:33:42.047 "assigned_rate_limits": { 00:33:42.047 "rw_ios_per_sec": 0, 00:33:42.047 "rw_mbytes_per_sec": 0, 00:33:42.047 "r_mbytes_per_sec": 0, 00:33:42.047 "w_mbytes_per_sec": 0 00:33:42.047 }, 00:33:42.047 "claimed": true, 00:33:42.047 "claim_type": "exclusive_write", 00:33:42.047 "zoned": false, 00:33:42.047 "supported_io_types": { 00:33:42.047 "read": true, 00:33:42.047 "write": true, 00:33:42.047 "unmap": true, 00:33:42.047 "flush": true, 00:33:42.047 "reset": true, 00:33:42.047 "nvme_admin": false, 00:33:42.047 "nvme_io": false, 00:33:42.047 "nvme_io_md": false, 00:33:42.047 "write_zeroes": true, 00:33:42.047 "zcopy": true, 00:33:42.047 "get_zone_info": false, 00:33:42.047 "zone_management": false, 00:33:42.047 "zone_append": false, 00:33:42.047 "compare": false, 00:33:42.047 "compare_and_write": false, 00:33:42.047 "abort": true, 00:33:42.047 "seek_hole": false, 00:33:42.047 "seek_data": false, 00:33:42.047 "copy": true, 00:33:42.047 "nvme_iov_md": false 00:33:42.047 }, 00:33:42.047 "memory_domains": [ 00:33:42.047 { 00:33:42.047 "dma_device_id": "system", 00:33:42.047 "dma_device_type": 1 00:33:42.047 }, 00:33:42.047 { 00:33:42.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:42.047 "dma_device_type": 2 00:33:42.047 } 00:33:42.047 ], 00:33:42.047 "driver_specific": {} 00:33:42.047 } 00:33:42.047 ] 
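
[annotation] The JSON dump above is printed while waitforbdev blocks until the freshly created BaseBdev1 malloc disk is visible over the raid app's RPC socket. A rough stand-alone sketch of that wait follows, reusing the socket path and 2000 ms timeout from the log; the helper body approximates the autotest_common.sh routine rather than reproducing it exactly.

    #!/usr/bin/env bash
    # Approximation of the waitforbdev step traced above: let examine finish,
    # then ask for the bdev with a timeout so the RPC itself blocks until the
    # bdev appears or the wait expires.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=${2:-2000}   # milliseconds, as in the trace
        $rpc bdev_wait_for_examine
        $rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null
    }

    waitforbdev BaseBdev1 2000
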
00:33:42.047 19:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:33:42.047 19:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:42.047 19:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:42.047 19:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:42.047 19:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:42.047 19:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:42.047 19:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:42.047 19:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:42.047 19:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:42.047 19:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:42.047 19:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:42.047 19:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:42.047 19:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:42.306 19:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:42.306 "name": "Existed_Raid", 00:33:42.306 "uuid": "84525e15-066f-43ad-bf18-392a2a883e41", 00:33:42.306 "strip_size_kb": 64, 00:33:42.306 "state": "configuring", 00:33:42.306 "raid_level": "raid5f", 00:33:42.306 "superblock": true, 00:33:42.306 "num_base_bdevs": 4, 00:33:42.306 "num_base_bdevs_discovered": 1, 00:33:42.306 "num_base_bdevs_operational": 4, 00:33:42.306 "base_bdevs_list": [ 00:33:42.306 { 00:33:42.306 "name": "BaseBdev1", 00:33:42.306 "uuid": "2fdca6ee-8e4c-4a4a-8d92-a20be20aaa9a", 00:33:42.306 "is_configured": true, 00:33:42.306 "data_offset": 2048, 00:33:42.306 "data_size": 63488 00:33:42.306 }, 00:33:42.306 { 00:33:42.306 "name": "BaseBdev2", 00:33:42.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:42.306 "is_configured": false, 00:33:42.306 "data_offset": 0, 00:33:42.306 "data_size": 0 00:33:42.306 }, 00:33:42.306 { 00:33:42.306 "name": "BaseBdev3", 00:33:42.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:42.306 "is_configured": false, 00:33:42.306 "data_offset": 0, 00:33:42.306 "data_size": 0 00:33:42.306 }, 00:33:42.306 { 00:33:42.306 "name": "BaseBdev4", 00:33:42.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:42.306 "is_configured": false, 00:33:42.306 "data_offset": 0, 00:33:42.306 "data_size": 0 00:33:42.306 } 00:33:42.306 ] 00:33:42.306 }' 00:33:42.306 19:00:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:42.306 19:00:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:42.872 19:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:43.130 [2024-07-25 19:00:43.519662] 
bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:43.130 [2024-07-25 19:00:43.519834] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:33:43.130 19:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:33:43.130 [2024-07-25 19:00:43.695760] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:43.130 [2024-07-25 19:00:43.698136] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:43.130 [2024-07-25 19:00:43.698311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:43.130 [2024-07-25 19:00:43.698416] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:43.130 [2024-07-25 19:00:43.698524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:43.130 [2024-07-25 19:00:43.698605] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:33:43.130 [2024-07-25 19:00:43.698658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:33:43.388 19:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:33:43.388 19:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:43.388 19:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:43.388 19:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:43.388 19:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:43.388 19:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:43.388 19:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:43.388 19:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:43.388 19:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:43.388 19:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:43.388 19:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:43.388 19:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:43.388 19:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:43.388 19:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:43.388 19:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:43.388 "name": "Existed_Raid", 00:33:43.388 "uuid": "10a87438-17a4-4398-aece-9229bf85642e", 00:33:43.388 "strip_size_kb": 64, 00:33:43.388 "state": "configuring", 00:33:43.388 "raid_level": "raid5f", 00:33:43.388 "superblock": true, 00:33:43.388 "num_base_bdevs": 4, 
00:33:43.388 "num_base_bdevs_discovered": 1, 00:33:43.388 "num_base_bdevs_operational": 4, 00:33:43.388 "base_bdevs_list": [ 00:33:43.388 { 00:33:43.388 "name": "BaseBdev1", 00:33:43.388 "uuid": "2fdca6ee-8e4c-4a4a-8d92-a20be20aaa9a", 00:33:43.388 "is_configured": true, 00:33:43.388 "data_offset": 2048, 00:33:43.388 "data_size": 63488 00:33:43.388 }, 00:33:43.388 { 00:33:43.388 "name": "BaseBdev2", 00:33:43.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:43.388 "is_configured": false, 00:33:43.388 "data_offset": 0, 00:33:43.388 "data_size": 0 00:33:43.388 }, 00:33:43.388 { 00:33:43.388 "name": "BaseBdev3", 00:33:43.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:43.388 "is_configured": false, 00:33:43.388 "data_offset": 0, 00:33:43.388 "data_size": 0 00:33:43.388 }, 00:33:43.388 { 00:33:43.388 "name": "BaseBdev4", 00:33:43.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:43.388 "is_configured": false, 00:33:43.388 "data_offset": 0, 00:33:43.388 "data_size": 0 00:33:43.388 } 00:33:43.388 ] 00:33:43.388 }' 00:33:43.388 19:00:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:43.388 19:00:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:43.955 19:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:33:44.213 [2024-07-25 19:00:44.637404] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:44.213 BaseBdev2 00:33:44.213 19:00:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:33:44.213 19:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:33:44.213 19:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:44.213 19:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:33:44.213 19:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:44.213 19:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:44.214 19:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:44.472 19:00:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:44.730 [ 00:33:44.730 { 00:33:44.730 "name": "BaseBdev2", 00:33:44.730 "aliases": [ 00:33:44.730 "22e417bd-589c-4c6c-af4d-e325b48b5226" 00:33:44.730 ], 00:33:44.730 "product_name": "Malloc disk", 00:33:44.730 "block_size": 512, 00:33:44.730 "num_blocks": 65536, 00:33:44.730 "uuid": "22e417bd-589c-4c6c-af4d-e325b48b5226", 00:33:44.730 "assigned_rate_limits": { 00:33:44.730 "rw_ios_per_sec": 0, 00:33:44.730 "rw_mbytes_per_sec": 0, 00:33:44.730 "r_mbytes_per_sec": 0, 00:33:44.730 "w_mbytes_per_sec": 0 00:33:44.730 }, 00:33:44.730 "claimed": true, 00:33:44.730 "claim_type": "exclusive_write", 00:33:44.730 "zoned": false, 00:33:44.730 "supported_io_types": { 00:33:44.730 "read": true, 00:33:44.730 "write": true, 00:33:44.730 "unmap": true, 00:33:44.730 "flush": true, 00:33:44.730 "reset": true, 00:33:44.730 "nvme_admin": false, 
00:33:44.730 "nvme_io": false, 00:33:44.730 "nvme_io_md": false, 00:33:44.730 "write_zeroes": true, 00:33:44.730 "zcopy": true, 00:33:44.730 "get_zone_info": false, 00:33:44.730 "zone_management": false, 00:33:44.730 "zone_append": false, 00:33:44.730 "compare": false, 00:33:44.730 "compare_and_write": false, 00:33:44.730 "abort": true, 00:33:44.730 "seek_hole": false, 00:33:44.730 "seek_data": false, 00:33:44.730 "copy": true, 00:33:44.730 "nvme_iov_md": false 00:33:44.730 }, 00:33:44.730 "memory_domains": [ 00:33:44.730 { 00:33:44.730 "dma_device_id": "system", 00:33:44.730 "dma_device_type": 1 00:33:44.730 }, 00:33:44.730 { 00:33:44.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:44.730 "dma_device_type": 2 00:33:44.730 } 00:33:44.730 ], 00:33:44.730 "driver_specific": {} 00:33:44.730 } 00:33:44.730 ] 00:33:44.730 19:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:33:44.730 19:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:33:44.730 19:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:44.730 19:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:44.730 19:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:44.730 19:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:44.730 19:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:44.730 19:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:44.730 19:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:44.730 19:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:44.730 19:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:44.730 19:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:44.730 19:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:44.730 19:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:44.730 19:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:44.988 19:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:44.988 "name": "Existed_Raid", 00:33:44.988 "uuid": "10a87438-17a4-4398-aece-9229bf85642e", 00:33:44.988 "strip_size_kb": 64, 00:33:44.988 "state": "configuring", 00:33:44.988 "raid_level": "raid5f", 00:33:44.988 "superblock": true, 00:33:44.988 "num_base_bdevs": 4, 00:33:44.988 "num_base_bdevs_discovered": 2, 00:33:44.988 "num_base_bdevs_operational": 4, 00:33:44.988 "base_bdevs_list": [ 00:33:44.988 { 00:33:44.988 "name": "BaseBdev1", 00:33:44.988 "uuid": "2fdca6ee-8e4c-4a4a-8d92-a20be20aaa9a", 00:33:44.988 "is_configured": true, 00:33:44.988 "data_offset": 2048, 00:33:44.988 "data_size": 63488 00:33:44.988 }, 00:33:44.988 { 00:33:44.988 "name": "BaseBdev2", 00:33:44.988 "uuid": "22e417bd-589c-4c6c-af4d-e325b48b5226", 00:33:44.988 
"is_configured": true, 00:33:44.988 "data_offset": 2048, 00:33:44.988 "data_size": 63488 00:33:44.988 }, 00:33:44.988 { 00:33:44.988 "name": "BaseBdev3", 00:33:44.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:44.988 "is_configured": false, 00:33:44.988 "data_offset": 0, 00:33:44.988 "data_size": 0 00:33:44.988 }, 00:33:44.988 { 00:33:44.988 "name": "BaseBdev4", 00:33:44.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:44.988 "is_configured": false, 00:33:44.988 "data_offset": 0, 00:33:44.988 "data_size": 0 00:33:44.988 } 00:33:44.988 ] 00:33:44.988 }' 00:33:44.988 19:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:44.988 19:00:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:45.554 19:00:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:33:45.554 [2024-07-25 19:00:46.034097] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:45.554 BaseBdev3 00:33:45.554 19:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:33:45.554 19:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:33:45.554 19:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:45.554 19:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:33:45.554 19:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:45.554 19:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:45.554 19:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:45.812 19:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:46.071 [ 00:33:46.071 { 00:33:46.071 "name": "BaseBdev3", 00:33:46.071 "aliases": [ 00:33:46.071 "36f9d6c7-7f23-467a-bd50-1e2a359b880d" 00:33:46.071 ], 00:33:46.071 "product_name": "Malloc disk", 00:33:46.071 "block_size": 512, 00:33:46.071 "num_blocks": 65536, 00:33:46.071 "uuid": "36f9d6c7-7f23-467a-bd50-1e2a359b880d", 00:33:46.071 "assigned_rate_limits": { 00:33:46.071 "rw_ios_per_sec": 0, 00:33:46.071 "rw_mbytes_per_sec": 0, 00:33:46.071 "r_mbytes_per_sec": 0, 00:33:46.071 "w_mbytes_per_sec": 0 00:33:46.071 }, 00:33:46.071 "claimed": true, 00:33:46.071 "claim_type": "exclusive_write", 00:33:46.071 "zoned": false, 00:33:46.071 "supported_io_types": { 00:33:46.071 "read": true, 00:33:46.071 "write": true, 00:33:46.071 "unmap": true, 00:33:46.071 "flush": true, 00:33:46.071 "reset": true, 00:33:46.071 "nvme_admin": false, 00:33:46.071 "nvme_io": false, 00:33:46.071 "nvme_io_md": false, 00:33:46.071 "write_zeroes": true, 00:33:46.071 "zcopy": true, 00:33:46.071 "get_zone_info": false, 00:33:46.071 "zone_management": false, 00:33:46.071 "zone_append": false, 00:33:46.071 "compare": false, 00:33:46.071 "compare_and_write": false, 00:33:46.071 "abort": true, 00:33:46.071 "seek_hole": false, 00:33:46.071 "seek_data": false, 00:33:46.071 "copy": true, 00:33:46.071 "nvme_iov_md": false 
00:33:46.071 }, 00:33:46.071 "memory_domains": [ 00:33:46.071 { 00:33:46.071 "dma_device_id": "system", 00:33:46.071 "dma_device_type": 1 00:33:46.071 }, 00:33:46.071 { 00:33:46.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:46.071 "dma_device_type": 2 00:33:46.071 } 00:33:46.071 ], 00:33:46.071 "driver_specific": {} 00:33:46.071 } 00:33:46.071 ] 00:33:46.071 19:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:33:46.071 19:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:33:46.071 19:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:46.071 19:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:46.071 19:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:46.071 19:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:46.071 19:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:46.071 19:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:46.071 19:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:46.071 19:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:46.071 19:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:46.071 19:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:46.071 19:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:46.071 19:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:46.071 19:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:46.330 19:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:46.330 "name": "Existed_Raid", 00:33:46.330 "uuid": "10a87438-17a4-4398-aece-9229bf85642e", 00:33:46.330 "strip_size_kb": 64, 00:33:46.330 "state": "configuring", 00:33:46.330 "raid_level": "raid5f", 00:33:46.330 "superblock": true, 00:33:46.330 "num_base_bdevs": 4, 00:33:46.330 "num_base_bdevs_discovered": 3, 00:33:46.330 "num_base_bdevs_operational": 4, 00:33:46.330 "base_bdevs_list": [ 00:33:46.330 { 00:33:46.330 "name": "BaseBdev1", 00:33:46.330 "uuid": "2fdca6ee-8e4c-4a4a-8d92-a20be20aaa9a", 00:33:46.330 "is_configured": true, 00:33:46.330 "data_offset": 2048, 00:33:46.330 "data_size": 63488 00:33:46.330 }, 00:33:46.330 { 00:33:46.330 "name": "BaseBdev2", 00:33:46.330 "uuid": "22e417bd-589c-4c6c-af4d-e325b48b5226", 00:33:46.330 "is_configured": true, 00:33:46.330 "data_offset": 2048, 00:33:46.330 "data_size": 63488 00:33:46.330 }, 00:33:46.330 { 00:33:46.330 "name": "BaseBdev3", 00:33:46.330 "uuid": "36f9d6c7-7f23-467a-bd50-1e2a359b880d", 00:33:46.330 "is_configured": true, 00:33:46.330 "data_offset": 2048, 00:33:46.330 "data_size": 63488 00:33:46.330 }, 00:33:46.330 { 00:33:46.330 "name": "BaseBdev4", 00:33:46.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:46.330 "is_configured": 
false, 00:33:46.330 "data_offset": 0, 00:33:46.330 "data_size": 0 00:33:46.330 } 00:33:46.330 ] 00:33:46.330 }' 00:33:46.330 19:00:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:46.330 19:00:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:46.589 19:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:33:46.847 [2024-07-25 19:00:47.423740] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:46.847 [2024-07-25 19:00:47.424239] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:33:46.847 [2024-07-25 19:00:47.424360] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:46.847 [2024-07-25 19:00:47.424539] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:33:46.847 BaseBdev4 00:33:47.106 [2024-07-25 19:00:47.430482] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:33:47.106 [2024-07-25 19:00:47.430615] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:33:47.106 [2024-07-25 19:00:47.430870] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:47.106 19:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:33:47.106 19:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:33:47.106 19:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:47.106 19:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:33:47.106 19:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:47.106 19:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:47.106 19:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:47.364 19:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:33:47.622 [ 00:33:47.622 { 00:33:47.622 "name": "BaseBdev4", 00:33:47.622 "aliases": [ 00:33:47.622 "2fd1d495-379d-49a1-8e2b-2984cd5a0b74" 00:33:47.622 ], 00:33:47.622 "product_name": "Malloc disk", 00:33:47.622 "block_size": 512, 00:33:47.622 "num_blocks": 65536, 00:33:47.622 "uuid": "2fd1d495-379d-49a1-8e2b-2984cd5a0b74", 00:33:47.622 "assigned_rate_limits": { 00:33:47.622 "rw_ios_per_sec": 0, 00:33:47.622 "rw_mbytes_per_sec": 0, 00:33:47.622 "r_mbytes_per_sec": 0, 00:33:47.622 "w_mbytes_per_sec": 0 00:33:47.622 }, 00:33:47.622 "claimed": true, 00:33:47.622 "claim_type": "exclusive_write", 00:33:47.622 "zoned": false, 00:33:47.622 "supported_io_types": { 00:33:47.622 "read": true, 00:33:47.622 "write": true, 00:33:47.622 "unmap": true, 00:33:47.622 "flush": true, 00:33:47.622 "reset": true, 00:33:47.622 "nvme_admin": false, 00:33:47.622 "nvme_io": false, 00:33:47.622 "nvme_io_md": false, 00:33:47.622 "write_zeroes": true, 00:33:47.622 "zcopy": true, 00:33:47.622 "get_zone_info": 
false, 00:33:47.622 "zone_management": false, 00:33:47.622 "zone_append": false, 00:33:47.622 "compare": false, 00:33:47.622 "compare_and_write": false, 00:33:47.622 "abort": true, 00:33:47.622 "seek_hole": false, 00:33:47.622 "seek_data": false, 00:33:47.622 "copy": true, 00:33:47.622 "nvme_iov_md": false 00:33:47.622 }, 00:33:47.622 "memory_domains": [ 00:33:47.622 { 00:33:47.623 "dma_device_id": "system", 00:33:47.623 "dma_device_type": 1 00:33:47.623 }, 00:33:47.623 { 00:33:47.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:47.623 "dma_device_type": 2 00:33:47.623 } 00:33:47.623 ], 00:33:47.623 "driver_specific": {} 00:33:47.623 } 00:33:47.623 ] 00:33:47.623 19:00:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:33:47.623 19:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:33:47.623 19:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:47.623 19:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:33:47.623 19:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:47.623 19:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:47.623 19:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:47.623 19:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:47.623 19:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:47.623 19:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:47.623 19:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:47.623 19:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:47.623 19:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:47.623 19:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:47.623 19:00:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:47.881 19:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:47.881 "name": "Existed_Raid", 00:33:47.881 "uuid": "10a87438-17a4-4398-aece-9229bf85642e", 00:33:47.881 "strip_size_kb": 64, 00:33:47.881 "state": "online", 00:33:47.881 "raid_level": "raid5f", 00:33:47.881 "superblock": true, 00:33:47.881 "num_base_bdevs": 4, 00:33:47.881 "num_base_bdevs_discovered": 4, 00:33:47.881 "num_base_bdevs_operational": 4, 00:33:47.881 "base_bdevs_list": [ 00:33:47.881 { 00:33:47.882 "name": "BaseBdev1", 00:33:47.882 "uuid": "2fdca6ee-8e4c-4a4a-8d92-a20be20aaa9a", 00:33:47.882 "is_configured": true, 00:33:47.882 "data_offset": 2048, 00:33:47.882 "data_size": 63488 00:33:47.882 }, 00:33:47.882 { 00:33:47.882 "name": "BaseBdev2", 00:33:47.882 "uuid": "22e417bd-589c-4c6c-af4d-e325b48b5226", 00:33:47.882 "is_configured": true, 00:33:47.882 "data_offset": 2048, 00:33:47.882 "data_size": 63488 00:33:47.882 }, 00:33:47.882 { 00:33:47.882 "name": "BaseBdev3", 00:33:47.882 "uuid": 
"36f9d6c7-7f23-467a-bd50-1e2a359b880d", 00:33:47.882 "is_configured": true, 00:33:47.882 "data_offset": 2048, 00:33:47.882 "data_size": 63488 00:33:47.882 }, 00:33:47.882 { 00:33:47.882 "name": "BaseBdev4", 00:33:47.882 "uuid": "2fd1d495-379d-49a1-8e2b-2984cd5a0b74", 00:33:47.882 "is_configured": true, 00:33:47.882 "data_offset": 2048, 00:33:47.882 "data_size": 63488 00:33:47.882 } 00:33:47.882 ] 00:33:47.882 }' 00:33:47.882 19:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:47.882 19:00:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:48.141 19:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:33:48.141 19:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:33:48.141 19:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:48.141 19:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:48.141 19:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:48.141 19:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:33:48.141 19:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:33:48.141 19:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:48.400 [2024-07-25 19:00:48.967642] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:48.660 19:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:48.660 "name": "Existed_Raid", 00:33:48.660 "aliases": [ 00:33:48.660 "10a87438-17a4-4398-aece-9229bf85642e" 00:33:48.660 ], 00:33:48.660 "product_name": "Raid Volume", 00:33:48.660 "block_size": 512, 00:33:48.660 "num_blocks": 190464, 00:33:48.660 "uuid": "10a87438-17a4-4398-aece-9229bf85642e", 00:33:48.660 "assigned_rate_limits": { 00:33:48.660 "rw_ios_per_sec": 0, 00:33:48.660 "rw_mbytes_per_sec": 0, 00:33:48.660 "r_mbytes_per_sec": 0, 00:33:48.660 "w_mbytes_per_sec": 0 00:33:48.660 }, 00:33:48.660 "claimed": false, 00:33:48.660 "zoned": false, 00:33:48.660 "supported_io_types": { 00:33:48.660 "read": true, 00:33:48.660 "write": true, 00:33:48.660 "unmap": false, 00:33:48.660 "flush": false, 00:33:48.660 "reset": true, 00:33:48.660 "nvme_admin": false, 00:33:48.660 "nvme_io": false, 00:33:48.660 "nvme_io_md": false, 00:33:48.660 "write_zeroes": true, 00:33:48.660 "zcopy": false, 00:33:48.660 "get_zone_info": false, 00:33:48.660 "zone_management": false, 00:33:48.660 "zone_append": false, 00:33:48.660 "compare": false, 00:33:48.660 "compare_and_write": false, 00:33:48.660 "abort": false, 00:33:48.660 "seek_hole": false, 00:33:48.660 "seek_data": false, 00:33:48.660 "copy": false, 00:33:48.660 "nvme_iov_md": false 00:33:48.660 }, 00:33:48.660 "driver_specific": { 00:33:48.660 "raid": { 00:33:48.660 "uuid": "10a87438-17a4-4398-aece-9229bf85642e", 00:33:48.660 "strip_size_kb": 64, 00:33:48.660 "state": "online", 00:33:48.660 "raid_level": "raid5f", 00:33:48.660 "superblock": true, 00:33:48.660 "num_base_bdevs": 4, 00:33:48.660 "num_base_bdevs_discovered": 4, 00:33:48.660 "num_base_bdevs_operational": 4, 00:33:48.660 "base_bdevs_list": [ 00:33:48.660 { 00:33:48.660 
"name": "BaseBdev1", 00:33:48.660 "uuid": "2fdca6ee-8e4c-4a4a-8d92-a20be20aaa9a", 00:33:48.660 "is_configured": true, 00:33:48.660 "data_offset": 2048, 00:33:48.660 "data_size": 63488 00:33:48.660 }, 00:33:48.660 { 00:33:48.660 "name": "BaseBdev2", 00:33:48.660 "uuid": "22e417bd-589c-4c6c-af4d-e325b48b5226", 00:33:48.660 "is_configured": true, 00:33:48.660 "data_offset": 2048, 00:33:48.660 "data_size": 63488 00:33:48.660 }, 00:33:48.660 { 00:33:48.660 "name": "BaseBdev3", 00:33:48.660 "uuid": "36f9d6c7-7f23-467a-bd50-1e2a359b880d", 00:33:48.660 "is_configured": true, 00:33:48.660 "data_offset": 2048, 00:33:48.660 "data_size": 63488 00:33:48.660 }, 00:33:48.660 { 00:33:48.660 "name": "BaseBdev4", 00:33:48.660 "uuid": "2fd1d495-379d-49a1-8e2b-2984cd5a0b74", 00:33:48.660 "is_configured": true, 00:33:48.660 "data_offset": 2048, 00:33:48.660 "data_size": 63488 00:33:48.660 } 00:33:48.660 ] 00:33:48.660 } 00:33:48.660 } 00:33:48.660 }' 00:33:48.660 19:00:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:48.660 19:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:33:48.660 BaseBdev2 00:33:48.660 BaseBdev3 00:33:48.660 BaseBdev4' 00:33:48.660 19:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:48.660 19:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:33:48.660 19:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:48.919 19:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:48.919 "name": "BaseBdev1", 00:33:48.919 "aliases": [ 00:33:48.919 "2fdca6ee-8e4c-4a4a-8d92-a20be20aaa9a" 00:33:48.919 ], 00:33:48.919 "product_name": "Malloc disk", 00:33:48.919 "block_size": 512, 00:33:48.919 "num_blocks": 65536, 00:33:48.919 "uuid": "2fdca6ee-8e4c-4a4a-8d92-a20be20aaa9a", 00:33:48.919 "assigned_rate_limits": { 00:33:48.919 "rw_ios_per_sec": 0, 00:33:48.919 "rw_mbytes_per_sec": 0, 00:33:48.919 "r_mbytes_per_sec": 0, 00:33:48.919 "w_mbytes_per_sec": 0 00:33:48.919 }, 00:33:48.919 "claimed": true, 00:33:48.919 "claim_type": "exclusive_write", 00:33:48.920 "zoned": false, 00:33:48.920 "supported_io_types": { 00:33:48.920 "read": true, 00:33:48.920 "write": true, 00:33:48.920 "unmap": true, 00:33:48.920 "flush": true, 00:33:48.920 "reset": true, 00:33:48.920 "nvme_admin": false, 00:33:48.920 "nvme_io": false, 00:33:48.920 "nvme_io_md": false, 00:33:48.920 "write_zeroes": true, 00:33:48.920 "zcopy": true, 00:33:48.920 "get_zone_info": false, 00:33:48.920 "zone_management": false, 00:33:48.920 "zone_append": false, 00:33:48.920 "compare": false, 00:33:48.920 "compare_and_write": false, 00:33:48.920 "abort": true, 00:33:48.920 "seek_hole": false, 00:33:48.920 "seek_data": false, 00:33:48.920 "copy": true, 00:33:48.920 "nvme_iov_md": false 00:33:48.920 }, 00:33:48.920 "memory_domains": [ 00:33:48.920 { 00:33:48.920 "dma_device_id": "system", 00:33:48.920 "dma_device_type": 1 00:33:48.920 }, 00:33:48.920 { 00:33:48.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:48.920 "dma_device_type": 2 00:33:48.920 } 00:33:48.920 ], 00:33:48.920 "driver_specific": {} 00:33:48.920 }' 00:33:48.920 19:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:33:48.920 19:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:48.920 19:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:48.920 19:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:48.920 19:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:48.920 19:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:48.920 19:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:49.179 19:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:49.179 19:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:49.179 19:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:49.179 19:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:49.179 19:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:49.179 19:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:49.179 19:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:33:49.179 19:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:49.438 19:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:49.438 "name": "BaseBdev2", 00:33:49.438 "aliases": [ 00:33:49.438 "22e417bd-589c-4c6c-af4d-e325b48b5226" 00:33:49.438 ], 00:33:49.438 "product_name": "Malloc disk", 00:33:49.438 "block_size": 512, 00:33:49.438 "num_blocks": 65536, 00:33:49.438 "uuid": "22e417bd-589c-4c6c-af4d-e325b48b5226", 00:33:49.438 "assigned_rate_limits": { 00:33:49.438 "rw_ios_per_sec": 0, 00:33:49.438 "rw_mbytes_per_sec": 0, 00:33:49.438 "r_mbytes_per_sec": 0, 00:33:49.438 "w_mbytes_per_sec": 0 00:33:49.438 }, 00:33:49.438 "claimed": true, 00:33:49.438 "claim_type": "exclusive_write", 00:33:49.438 "zoned": false, 00:33:49.438 "supported_io_types": { 00:33:49.438 "read": true, 00:33:49.438 "write": true, 00:33:49.438 "unmap": true, 00:33:49.438 "flush": true, 00:33:49.438 "reset": true, 00:33:49.438 "nvme_admin": false, 00:33:49.438 "nvme_io": false, 00:33:49.438 "nvme_io_md": false, 00:33:49.438 "write_zeroes": true, 00:33:49.438 "zcopy": true, 00:33:49.438 "get_zone_info": false, 00:33:49.438 "zone_management": false, 00:33:49.438 "zone_append": false, 00:33:49.438 "compare": false, 00:33:49.438 "compare_and_write": false, 00:33:49.438 "abort": true, 00:33:49.438 "seek_hole": false, 00:33:49.438 "seek_data": false, 00:33:49.438 "copy": true, 00:33:49.438 "nvme_iov_md": false 00:33:49.438 }, 00:33:49.438 "memory_domains": [ 00:33:49.438 { 00:33:49.438 "dma_device_id": "system", 00:33:49.438 "dma_device_type": 1 00:33:49.438 }, 00:33:49.438 { 00:33:49.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:49.438 "dma_device_type": 2 00:33:49.438 } 00:33:49.438 ], 00:33:49.438 "driver_specific": {} 00:33:49.438 }' 00:33:49.438 19:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:49.438 19:00:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:33:49.697 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:49.697 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:49.697 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:49.697 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:49.697 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:49.697 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:49.697 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:49.697 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:49.697 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:49.956 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:49.956 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:49.956 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:33:49.956 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:49.956 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:49.956 "name": "BaseBdev3", 00:33:49.956 "aliases": [ 00:33:49.956 "36f9d6c7-7f23-467a-bd50-1e2a359b880d" 00:33:49.956 ], 00:33:49.956 "product_name": "Malloc disk", 00:33:49.956 "block_size": 512, 00:33:49.956 "num_blocks": 65536, 00:33:49.956 "uuid": "36f9d6c7-7f23-467a-bd50-1e2a359b880d", 00:33:49.956 "assigned_rate_limits": { 00:33:49.956 "rw_ios_per_sec": 0, 00:33:49.956 "rw_mbytes_per_sec": 0, 00:33:49.956 "r_mbytes_per_sec": 0, 00:33:49.956 "w_mbytes_per_sec": 0 00:33:49.956 }, 00:33:49.956 "claimed": true, 00:33:49.956 "claim_type": "exclusive_write", 00:33:49.956 "zoned": false, 00:33:49.956 "supported_io_types": { 00:33:49.956 "read": true, 00:33:49.956 "write": true, 00:33:49.956 "unmap": true, 00:33:49.956 "flush": true, 00:33:49.956 "reset": true, 00:33:49.956 "nvme_admin": false, 00:33:49.956 "nvme_io": false, 00:33:49.956 "nvme_io_md": false, 00:33:49.956 "write_zeroes": true, 00:33:49.956 "zcopy": true, 00:33:49.956 "get_zone_info": false, 00:33:49.956 "zone_management": false, 00:33:49.956 "zone_append": false, 00:33:49.956 "compare": false, 00:33:49.956 "compare_and_write": false, 00:33:49.956 "abort": true, 00:33:49.956 "seek_hole": false, 00:33:49.956 "seek_data": false, 00:33:49.956 "copy": true, 00:33:49.956 "nvme_iov_md": false 00:33:49.956 }, 00:33:49.956 "memory_domains": [ 00:33:49.956 { 00:33:49.956 "dma_device_id": "system", 00:33:49.956 "dma_device_type": 1 00:33:49.956 }, 00:33:49.956 { 00:33:49.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:49.956 "dma_device_type": 2 00:33:49.956 } 00:33:49.956 ], 00:33:49.956 "driver_specific": {} 00:33:49.956 }' 00:33:49.956 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:49.956 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:49.956 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
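
[annotation] The trace above is looping over the raid volume's base bdevs and asserting their geometry with jq (block size 512, no metadata, no interleave, no DIF). A minimal stand-alone sketch of that check follows, assuming the same repo location and RPC socket as the log; the loop body approximates what bdev_raid.sh does around lines 203-208 and is not the verbatim helper.

    #!/usr/bin/env bash
    # Per-base-bdev property check, as traced above: fetch each base bdev's
    # info over the raid app's RPC socket and assert the geometry this test
    # expects for its malloc base bdevs.
    rootdir=/home/vagrant/spdk_repo/spdk
    rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    base_bdev_names="BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4"
    for name in $base_bdev_names; do
        info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size <<<"$info") == 512 ]]      # data block size
        [[ $(jq .md_size <<<"$info") == null ]]        # no separate metadata
        [[ $(jq .md_interleave <<<"$info") == null ]]  # no interleaved metadata
        [[ $(jq .dif_type <<<"$info") == null ]]       # no DIF protection
    done
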
00:33:50.215 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:50.215 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:50.215 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:50.215 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:50.215 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:50.215 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:50.215 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:50.215 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:50.473 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:50.473 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:50.473 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:33:50.473 19:00:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:50.731 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:50.731 "name": "BaseBdev4", 00:33:50.731 "aliases": [ 00:33:50.731 "2fd1d495-379d-49a1-8e2b-2984cd5a0b74" 00:33:50.731 ], 00:33:50.732 "product_name": "Malloc disk", 00:33:50.732 "block_size": 512, 00:33:50.732 "num_blocks": 65536, 00:33:50.732 "uuid": "2fd1d495-379d-49a1-8e2b-2984cd5a0b74", 00:33:50.732 "assigned_rate_limits": { 00:33:50.732 "rw_ios_per_sec": 0, 00:33:50.732 "rw_mbytes_per_sec": 0, 00:33:50.732 "r_mbytes_per_sec": 0, 00:33:50.732 "w_mbytes_per_sec": 0 00:33:50.732 }, 00:33:50.732 "claimed": true, 00:33:50.732 "claim_type": "exclusive_write", 00:33:50.732 "zoned": false, 00:33:50.732 "supported_io_types": { 00:33:50.732 "read": true, 00:33:50.732 "write": true, 00:33:50.732 "unmap": true, 00:33:50.732 "flush": true, 00:33:50.732 "reset": true, 00:33:50.732 "nvme_admin": false, 00:33:50.732 "nvme_io": false, 00:33:50.732 "nvme_io_md": false, 00:33:50.732 "write_zeroes": true, 00:33:50.732 "zcopy": true, 00:33:50.732 "get_zone_info": false, 00:33:50.732 "zone_management": false, 00:33:50.732 "zone_append": false, 00:33:50.732 "compare": false, 00:33:50.732 "compare_and_write": false, 00:33:50.732 "abort": true, 00:33:50.732 "seek_hole": false, 00:33:50.732 "seek_data": false, 00:33:50.732 "copy": true, 00:33:50.732 "nvme_iov_md": false 00:33:50.732 }, 00:33:50.732 "memory_domains": [ 00:33:50.732 { 00:33:50.732 "dma_device_id": "system", 00:33:50.732 "dma_device_type": 1 00:33:50.732 }, 00:33:50.732 { 00:33:50.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:50.732 "dma_device_type": 2 00:33:50.732 } 00:33:50.732 ], 00:33:50.732 "driver_specific": {} 00:33:50.732 }' 00:33:50.732 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:50.732 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:50.732 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:50.732 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:50.732 
19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:50.732 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:50.732 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:50.732 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:50.990 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:50.990 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:50.990 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:50.990 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:50.991 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:33:51.249 [2024-07-25 19:00:51.688060] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:51.249 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:33:51.249 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:33:51.249 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:33:51.249 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:33:51.249 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:33:51.249 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:33:51.249 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:51.249 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:51.249 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:51.249 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:51.249 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:51.249 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:51.249 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:51.249 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:51.249 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:51.249 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:51.249 19:00:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:51.508 19:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:51.508 "name": "Existed_Raid", 00:33:51.508 "uuid": "10a87438-17a4-4398-aece-9229bf85642e", 00:33:51.508 "strip_size_kb": 64, 00:33:51.508 "state": "online", 00:33:51.508 "raid_level": "raid5f", 00:33:51.508 
"superblock": true, 00:33:51.508 "num_base_bdevs": 4, 00:33:51.508 "num_base_bdevs_discovered": 3, 00:33:51.508 "num_base_bdevs_operational": 3, 00:33:51.508 "base_bdevs_list": [ 00:33:51.508 { 00:33:51.508 "name": null, 00:33:51.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:51.508 "is_configured": false, 00:33:51.508 "data_offset": 2048, 00:33:51.508 "data_size": 63488 00:33:51.508 }, 00:33:51.508 { 00:33:51.508 "name": "BaseBdev2", 00:33:51.508 "uuid": "22e417bd-589c-4c6c-af4d-e325b48b5226", 00:33:51.508 "is_configured": true, 00:33:51.508 "data_offset": 2048, 00:33:51.508 "data_size": 63488 00:33:51.508 }, 00:33:51.508 { 00:33:51.508 "name": "BaseBdev3", 00:33:51.508 "uuid": "36f9d6c7-7f23-467a-bd50-1e2a359b880d", 00:33:51.508 "is_configured": true, 00:33:51.508 "data_offset": 2048, 00:33:51.508 "data_size": 63488 00:33:51.508 }, 00:33:51.508 { 00:33:51.508 "name": "BaseBdev4", 00:33:51.508 "uuid": "2fd1d495-379d-49a1-8e2b-2984cd5a0b74", 00:33:51.508 "is_configured": true, 00:33:51.508 "data_offset": 2048, 00:33:51.508 "data_size": 63488 00:33:51.508 } 00:33:51.508 ] 00:33:51.508 }' 00:33:51.508 19:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:51.508 19:00:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:52.075 19:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:33:52.075 19:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:52.075 19:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:52.075 19:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:33:52.334 19:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:33:52.334 19:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:52.334 19:00:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:33:52.334 [2024-07-25 19:00:52.909072] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:52.334 [2024-07-25 19:00:52.909449] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:52.592 [2024-07-25 19:00:52.996801] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:52.592 19:00:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:33:52.592 19:00:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:52.592 19:00:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:52.592 19:00:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:33:52.851 19:00:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:33:52.851 19:00:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:52.851 19:00:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:33:52.851 [2024-07-25 19:00:53.364891] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:53.109 19:00:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:33:53.109 19:00:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:53.109 19:00:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:53.109 19:00:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:33:53.367 19:00:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:33:53.367 19:00:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:53.367 19:00:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:33:53.626 [2024-07-25 19:00:53.958101] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:33:53.626 [2024-07-25 19:00:53.958310] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:33:53.626 19:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:33:53.626 19:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:53.626 19:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:53.626 19:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:33:53.884 19:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:33:53.884 19:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:33:53.884 19:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:33:53.884 19:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:33:53.884 19:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:33:53.884 19:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:33:54.143 BaseBdev2 00:33:54.143 19:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:33:54.143 19:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:33:54.143 19:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:54.143 19:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:33:54.143 19:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:54.143 19:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:54.143 19:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:33:54.402 19:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:54.402 [ 00:33:54.402 { 00:33:54.402 "name": "BaseBdev2", 00:33:54.402 "aliases": [ 00:33:54.402 "37962110-6d02-4c16-bd66-beb820ee0201" 00:33:54.402 ], 00:33:54.402 "product_name": "Malloc disk", 00:33:54.402 "block_size": 512, 00:33:54.402 "num_blocks": 65536, 00:33:54.402 "uuid": "37962110-6d02-4c16-bd66-beb820ee0201", 00:33:54.402 "assigned_rate_limits": { 00:33:54.402 "rw_ios_per_sec": 0, 00:33:54.402 "rw_mbytes_per_sec": 0, 00:33:54.402 "r_mbytes_per_sec": 0, 00:33:54.402 "w_mbytes_per_sec": 0 00:33:54.402 }, 00:33:54.402 "claimed": false, 00:33:54.402 "zoned": false, 00:33:54.402 "supported_io_types": { 00:33:54.402 "read": true, 00:33:54.402 "write": true, 00:33:54.402 "unmap": true, 00:33:54.402 "flush": true, 00:33:54.402 "reset": true, 00:33:54.402 "nvme_admin": false, 00:33:54.402 "nvme_io": false, 00:33:54.402 "nvme_io_md": false, 00:33:54.402 "write_zeroes": true, 00:33:54.402 "zcopy": true, 00:33:54.402 "get_zone_info": false, 00:33:54.402 "zone_management": false, 00:33:54.402 "zone_append": false, 00:33:54.402 "compare": false, 00:33:54.402 "compare_and_write": false, 00:33:54.402 "abort": true, 00:33:54.402 "seek_hole": false, 00:33:54.402 "seek_data": false, 00:33:54.402 "copy": true, 00:33:54.402 "nvme_iov_md": false 00:33:54.402 }, 00:33:54.402 "memory_domains": [ 00:33:54.402 { 00:33:54.402 "dma_device_id": "system", 00:33:54.402 "dma_device_type": 1 00:33:54.402 }, 00:33:54.402 { 00:33:54.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:54.402 "dma_device_type": 2 00:33:54.402 } 00:33:54.402 ], 00:33:54.402 "driver_specific": {} 00:33:54.402 } 00:33:54.402 ] 00:33:54.402 19:00:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:33:54.402 19:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:33:54.402 19:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:33:54.402 19:00:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:33:54.661 BaseBdev3 00:33:54.661 19:00:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:33:54.661 19:00:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:33:54.661 19:00:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:54.661 19:00:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:33:54.661 19:00:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:54.661 19:00:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:54.661 19:00:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:54.920 19:00:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:54.920 [ 00:33:54.920 { 00:33:54.920 "name": 
"BaseBdev3", 00:33:54.920 "aliases": [ 00:33:54.920 "76651aa2-01e7-4138-89e3-5801f27ed46e" 00:33:54.920 ], 00:33:54.920 "product_name": "Malloc disk", 00:33:54.920 "block_size": 512, 00:33:54.920 "num_blocks": 65536, 00:33:54.920 "uuid": "76651aa2-01e7-4138-89e3-5801f27ed46e", 00:33:54.920 "assigned_rate_limits": { 00:33:54.920 "rw_ios_per_sec": 0, 00:33:54.920 "rw_mbytes_per_sec": 0, 00:33:54.920 "r_mbytes_per_sec": 0, 00:33:54.920 "w_mbytes_per_sec": 0 00:33:54.920 }, 00:33:54.920 "claimed": false, 00:33:54.920 "zoned": false, 00:33:54.920 "supported_io_types": { 00:33:54.920 "read": true, 00:33:54.920 "write": true, 00:33:54.920 "unmap": true, 00:33:54.920 "flush": true, 00:33:54.920 "reset": true, 00:33:54.920 "nvme_admin": false, 00:33:54.920 "nvme_io": false, 00:33:54.920 "nvme_io_md": false, 00:33:54.920 "write_zeroes": true, 00:33:54.920 "zcopy": true, 00:33:54.920 "get_zone_info": false, 00:33:54.920 "zone_management": false, 00:33:54.920 "zone_append": false, 00:33:54.920 "compare": false, 00:33:54.920 "compare_and_write": false, 00:33:54.920 "abort": true, 00:33:54.920 "seek_hole": false, 00:33:54.920 "seek_data": false, 00:33:54.920 "copy": true, 00:33:54.920 "nvme_iov_md": false 00:33:54.920 }, 00:33:54.920 "memory_domains": [ 00:33:54.920 { 00:33:54.920 "dma_device_id": "system", 00:33:54.920 "dma_device_type": 1 00:33:54.920 }, 00:33:54.920 { 00:33:54.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:54.920 "dma_device_type": 2 00:33:54.920 } 00:33:54.920 ], 00:33:54.920 "driver_specific": {} 00:33:54.920 } 00:33:54.920 ] 00:33:54.920 19:00:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:33:54.920 19:00:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:33:54.920 19:00:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:33:54.920 19:00:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:33:55.179 BaseBdev4 00:33:55.437 19:00:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:33:55.437 19:00:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:33:55.437 19:00:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:55.437 19:00:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:33:55.437 19:00:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:55.437 19:00:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:55.437 19:00:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:55.437 19:00:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:33:55.695 [ 00:33:55.695 { 00:33:55.695 "name": "BaseBdev4", 00:33:55.695 "aliases": [ 00:33:55.695 "ee1d6311-7a2e-4f2a-89e5-e27db5a4cf0a" 00:33:55.695 ], 00:33:55.695 "product_name": "Malloc disk", 00:33:55.695 "block_size": 512, 00:33:55.695 "num_blocks": 65536, 00:33:55.695 "uuid": "ee1d6311-7a2e-4f2a-89e5-e27db5a4cf0a", 
00:33:55.695 "assigned_rate_limits": { 00:33:55.695 "rw_ios_per_sec": 0, 00:33:55.695 "rw_mbytes_per_sec": 0, 00:33:55.695 "r_mbytes_per_sec": 0, 00:33:55.695 "w_mbytes_per_sec": 0 00:33:55.695 }, 00:33:55.695 "claimed": false, 00:33:55.695 "zoned": false, 00:33:55.695 "supported_io_types": { 00:33:55.695 "read": true, 00:33:55.695 "write": true, 00:33:55.695 "unmap": true, 00:33:55.695 "flush": true, 00:33:55.695 "reset": true, 00:33:55.695 "nvme_admin": false, 00:33:55.695 "nvme_io": false, 00:33:55.695 "nvme_io_md": false, 00:33:55.695 "write_zeroes": true, 00:33:55.695 "zcopy": true, 00:33:55.695 "get_zone_info": false, 00:33:55.695 "zone_management": false, 00:33:55.695 "zone_append": false, 00:33:55.695 "compare": false, 00:33:55.695 "compare_and_write": false, 00:33:55.695 "abort": true, 00:33:55.695 "seek_hole": false, 00:33:55.695 "seek_data": false, 00:33:55.695 "copy": true, 00:33:55.695 "nvme_iov_md": false 00:33:55.695 }, 00:33:55.695 "memory_domains": [ 00:33:55.695 { 00:33:55.695 "dma_device_id": "system", 00:33:55.695 "dma_device_type": 1 00:33:55.695 }, 00:33:55.695 { 00:33:55.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:55.695 "dma_device_type": 2 00:33:55.695 } 00:33:55.695 ], 00:33:55.695 "driver_specific": {} 00:33:55.695 } 00:33:55.695 ] 00:33:55.695 19:00:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:33:55.695 19:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:33:55.696 19:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:33:55.696 19:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:33:55.696 [2024-07-25 19:00:56.272248] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:55.696 [2024-07-25 19:00:56.272510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:55.696 [2024-07-25 19:00:56.272656] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:55.696 [2024-07-25 19:00:56.274980] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:55.696 [2024-07-25 19:00:56.275161] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:55.954 19:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:55.954 19:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:55.954 19:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:55.954 19:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:55.954 19:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:55.954 19:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:55.954 19:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:55.954 19:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:55.954 19:00:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:55.954 19:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:55.954 19:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:55.954 19:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:55.954 19:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:55.954 "name": "Existed_Raid", 00:33:55.954 "uuid": "23d3b451-2fb9-439e-a52c-234d77cfd57f", 00:33:55.954 "strip_size_kb": 64, 00:33:55.954 "state": "configuring", 00:33:55.954 "raid_level": "raid5f", 00:33:55.954 "superblock": true, 00:33:55.954 "num_base_bdevs": 4, 00:33:55.954 "num_base_bdevs_discovered": 3, 00:33:55.954 "num_base_bdevs_operational": 4, 00:33:55.954 "base_bdevs_list": [ 00:33:55.954 { 00:33:55.954 "name": "BaseBdev1", 00:33:55.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:55.954 "is_configured": false, 00:33:55.954 "data_offset": 0, 00:33:55.954 "data_size": 0 00:33:55.954 }, 00:33:55.954 { 00:33:55.954 "name": "BaseBdev2", 00:33:55.954 "uuid": "37962110-6d02-4c16-bd66-beb820ee0201", 00:33:55.954 "is_configured": true, 00:33:55.954 "data_offset": 2048, 00:33:55.954 "data_size": 63488 00:33:55.954 }, 00:33:55.954 { 00:33:55.954 "name": "BaseBdev3", 00:33:55.954 "uuid": "76651aa2-01e7-4138-89e3-5801f27ed46e", 00:33:55.954 "is_configured": true, 00:33:55.954 "data_offset": 2048, 00:33:55.954 "data_size": 63488 00:33:55.954 }, 00:33:55.954 { 00:33:55.954 "name": "BaseBdev4", 00:33:55.954 "uuid": "ee1d6311-7a2e-4f2a-89e5-e27db5a4cf0a", 00:33:55.954 "is_configured": true, 00:33:55.954 "data_offset": 2048, 00:33:55.954 "data_size": 63488 00:33:55.954 } 00:33:55.954 ] 00:33:55.954 }' 00:33:55.954 19:00:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:55.954 19:00:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:56.521 19:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:33:56.780 [2024-07-25 19:00:57.288386] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:56.780 19:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:56.780 19:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:56.780 19:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:56.780 19:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:56.780 19:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:56.780 19:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:56.780 19:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:56.780 19:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:56.780 19:00:57 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:56.780 19:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:56.780 19:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:56.780 19:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:57.038 19:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:57.038 "name": "Existed_Raid", 00:33:57.038 "uuid": "23d3b451-2fb9-439e-a52c-234d77cfd57f", 00:33:57.038 "strip_size_kb": 64, 00:33:57.038 "state": "configuring", 00:33:57.038 "raid_level": "raid5f", 00:33:57.038 "superblock": true, 00:33:57.038 "num_base_bdevs": 4, 00:33:57.038 "num_base_bdevs_discovered": 2, 00:33:57.038 "num_base_bdevs_operational": 4, 00:33:57.038 "base_bdevs_list": [ 00:33:57.038 { 00:33:57.038 "name": "BaseBdev1", 00:33:57.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:57.038 "is_configured": false, 00:33:57.038 "data_offset": 0, 00:33:57.038 "data_size": 0 00:33:57.038 }, 00:33:57.038 { 00:33:57.038 "name": null, 00:33:57.038 "uuid": "37962110-6d02-4c16-bd66-beb820ee0201", 00:33:57.038 "is_configured": false, 00:33:57.038 "data_offset": 2048, 00:33:57.038 "data_size": 63488 00:33:57.038 }, 00:33:57.038 { 00:33:57.038 "name": "BaseBdev3", 00:33:57.038 "uuid": "76651aa2-01e7-4138-89e3-5801f27ed46e", 00:33:57.038 "is_configured": true, 00:33:57.038 "data_offset": 2048, 00:33:57.038 "data_size": 63488 00:33:57.038 }, 00:33:57.038 { 00:33:57.038 "name": "BaseBdev4", 00:33:57.038 "uuid": "ee1d6311-7a2e-4f2a-89e5-e27db5a4cf0a", 00:33:57.038 "is_configured": true, 00:33:57.038 "data_offset": 2048, 00:33:57.038 "data_size": 63488 00:33:57.038 } 00:33:57.038 ] 00:33:57.038 }' 00:33:57.038 19:00:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:57.038 19:00:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.605 19:00:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:57.605 19:00:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:57.864 19:00:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:33:57.864 19:00:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:33:58.122 [2024-07-25 19:00:58.630600] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:58.122 BaseBdev1 00:33:58.122 19:00:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:33:58.122 19:00:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:33:58.122 19:00:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:58.122 19:00:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:33:58.122 19:00:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:58.122 19:00:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:58.122 19:00:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:58.381 19:00:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:58.641 [ 00:33:58.641 { 00:33:58.641 "name": "BaseBdev1", 00:33:58.641 "aliases": [ 00:33:58.641 "dbb25a09-e025-4d55-b1ee-07efa0cf0ed4" 00:33:58.641 ], 00:33:58.641 "product_name": "Malloc disk", 00:33:58.641 "block_size": 512, 00:33:58.641 "num_blocks": 65536, 00:33:58.641 "uuid": "dbb25a09-e025-4d55-b1ee-07efa0cf0ed4", 00:33:58.641 "assigned_rate_limits": { 00:33:58.641 "rw_ios_per_sec": 0, 00:33:58.641 "rw_mbytes_per_sec": 0, 00:33:58.641 "r_mbytes_per_sec": 0, 00:33:58.641 "w_mbytes_per_sec": 0 00:33:58.641 }, 00:33:58.641 "claimed": true, 00:33:58.641 "claim_type": "exclusive_write", 00:33:58.641 "zoned": false, 00:33:58.641 "supported_io_types": { 00:33:58.641 "read": true, 00:33:58.641 "write": true, 00:33:58.641 "unmap": true, 00:33:58.641 "flush": true, 00:33:58.641 "reset": true, 00:33:58.641 "nvme_admin": false, 00:33:58.641 "nvme_io": false, 00:33:58.641 "nvme_io_md": false, 00:33:58.641 "write_zeroes": true, 00:33:58.641 "zcopy": true, 00:33:58.641 "get_zone_info": false, 00:33:58.641 "zone_management": false, 00:33:58.641 "zone_append": false, 00:33:58.641 "compare": false, 00:33:58.641 "compare_and_write": false, 00:33:58.641 "abort": true, 00:33:58.641 "seek_hole": false, 00:33:58.641 "seek_data": false, 00:33:58.641 "copy": true, 00:33:58.641 "nvme_iov_md": false 00:33:58.641 }, 00:33:58.641 "memory_domains": [ 00:33:58.641 { 00:33:58.641 "dma_device_id": "system", 00:33:58.641 "dma_device_type": 1 00:33:58.641 }, 00:33:58.641 { 00:33:58.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:58.641 "dma_device_type": 2 00:33:58.641 } 00:33:58.641 ], 00:33:58.641 "driver_specific": {} 00:33:58.641 } 00:33:58.641 ] 00:33:58.641 19:00:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:33:58.641 19:00:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:58.641 19:00:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:58.641 19:00:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:58.641 19:00:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:58.641 19:00:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:58.641 19:00:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:58.641 19:00:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:58.641 19:00:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:58.641 19:00:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:58.641 19:00:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:58.641 19:00:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:58.641 19:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:58.899 19:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:58.899 "name": "Existed_Raid", 00:33:58.899 "uuid": "23d3b451-2fb9-439e-a52c-234d77cfd57f", 00:33:58.899 "strip_size_kb": 64, 00:33:58.899 "state": "configuring", 00:33:58.899 "raid_level": "raid5f", 00:33:58.899 "superblock": true, 00:33:58.899 "num_base_bdevs": 4, 00:33:58.899 "num_base_bdevs_discovered": 3, 00:33:58.899 "num_base_bdevs_operational": 4, 00:33:58.899 "base_bdevs_list": [ 00:33:58.899 { 00:33:58.899 "name": "BaseBdev1", 00:33:58.899 "uuid": "dbb25a09-e025-4d55-b1ee-07efa0cf0ed4", 00:33:58.899 "is_configured": true, 00:33:58.899 "data_offset": 2048, 00:33:58.899 "data_size": 63488 00:33:58.899 }, 00:33:58.899 { 00:33:58.899 "name": null, 00:33:58.899 "uuid": "37962110-6d02-4c16-bd66-beb820ee0201", 00:33:58.899 "is_configured": false, 00:33:58.899 "data_offset": 2048, 00:33:58.899 "data_size": 63488 00:33:58.899 }, 00:33:58.899 { 00:33:58.899 "name": "BaseBdev3", 00:33:58.899 "uuid": "76651aa2-01e7-4138-89e3-5801f27ed46e", 00:33:58.899 "is_configured": true, 00:33:58.899 "data_offset": 2048, 00:33:58.899 "data_size": 63488 00:33:58.899 }, 00:33:58.899 { 00:33:58.899 "name": "BaseBdev4", 00:33:58.899 "uuid": "ee1d6311-7a2e-4f2a-89e5-e27db5a4cf0a", 00:33:58.899 "is_configured": true, 00:33:58.899 "data_offset": 2048, 00:33:58.899 "data_size": 63488 00:33:58.899 } 00:33:58.899 ] 00:33:58.899 }' 00:33:58.899 19:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:58.899 19:00:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:59.465 19:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:59.465 19:00:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:59.723 19:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:33:59.723 19:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:34:00.009 [2024-07-25 19:01:00.354971] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:00.009 19:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:00.009 19:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:00.009 19:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:00.009 19:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:00.009 19:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:00.009 19:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:00.009 19:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:00.009 19:01:00 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:00.009 19:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:00.009 19:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:00.009 19:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:00.009 19:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:00.009 19:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:00.009 "name": "Existed_Raid", 00:34:00.009 "uuid": "23d3b451-2fb9-439e-a52c-234d77cfd57f", 00:34:00.009 "strip_size_kb": 64, 00:34:00.009 "state": "configuring", 00:34:00.009 "raid_level": "raid5f", 00:34:00.009 "superblock": true, 00:34:00.009 "num_base_bdevs": 4, 00:34:00.009 "num_base_bdevs_discovered": 2, 00:34:00.009 "num_base_bdevs_operational": 4, 00:34:00.009 "base_bdevs_list": [ 00:34:00.009 { 00:34:00.009 "name": "BaseBdev1", 00:34:00.009 "uuid": "dbb25a09-e025-4d55-b1ee-07efa0cf0ed4", 00:34:00.009 "is_configured": true, 00:34:00.009 "data_offset": 2048, 00:34:00.009 "data_size": 63488 00:34:00.009 }, 00:34:00.009 { 00:34:00.009 "name": null, 00:34:00.009 "uuid": "37962110-6d02-4c16-bd66-beb820ee0201", 00:34:00.009 "is_configured": false, 00:34:00.009 "data_offset": 2048, 00:34:00.009 "data_size": 63488 00:34:00.009 }, 00:34:00.009 { 00:34:00.009 "name": null, 00:34:00.009 "uuid": "76651aa2-01e7-4138-89e3-5801f27ed46e", 00:34:00.009 "is_configured": false, 00:34:00.009 "data_offset": 2048, 00:34:00.009 "data_size": 63488 00:34:00.009 }, 00:34:00.009 { 00:34:00.009 "name": "BaseBdev4", 00:34:00.009 "uuid": "ee1d6311-7a2e-4f2a-89e5-e27db5a4cf0a", 00:34:00.009 "is_configured": true, 00:34:00.009 "data_offset": 2048, 00:34:00.009 "data_size": 63488 00:34:00.009 } 00:34:00.009 ] 00:34:00.009 }' 00:34:00.009 19:01:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:00.009 19:01:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:00.590 19:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:00.590 19:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:00.848 19:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:34:00.848 19:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:34:01.105 [2024-07-25 19:01:01.543236] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:01.105 19:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:01.105 19:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:01.105 19:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:01.105 19:01:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:01.105 19:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:01.105 19:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:01.105 19:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:01.105 19:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:01.105 19:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:01.106 19:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:01.106 19:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:01.106 19:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:01.364 19:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:01.364 "name": "Existed_Raid", 00:34:01.364 "uuid": "23d3b451-2fb9-439e-a52c-234d77cfd57f", 00:34:01.364 "strip_size_kb": 64, 00:34:01.364 "state": "configuring", 00:34:01.364 "raid_level": "raid5f", 00:34:01.364 "superblock": true, 00:34:01.364 "num_base_bdevs": 4, 00:34:01.364 "num_base_bdevs_discovered": 3, 00:34:01.364 "num_base_bdevs_operational": 4, 00:34:01.364 "base_bdevs_list": [ 00:34:01.364 { 00:34:01.364 "name": "BaseBdev1", 00:34:01.364 "uuid": "dbb25a09-e025-4d55-b1ee-07efa0cf0ed4", 00:34:01.364 "is_configured": true, 00:34:01.364 "data_offset": 2048, 00:34:01.364 "data_size": 63488 00:34:01.364 }, 00:34:01.364 { 00:34:01.364 "name": null, 00:34:01.364 "uuid": "37962110-6d02-4c16-bd66-beb820ee0201", 00:34:01.364 "is_configured": false, 00:34:01.364 "data_offset": 2048, 00:34:01.364 "data_size": 63488 00:34:01.364 }, 00:34:01.364 { 00:34:01.364 "name": "BaseBdev3", 00:34:01.364 "uuid": "76651aa2-01e7-4138-89e3-5801f27ed46e", 00:34:01.364 "is_configured": true, 00:34:01.364 "data_offset": 2048, 00:34:01.364 "data_size": 63488 00:34:01.364 }, 00:34:01.364 { 00:34:01.364 "name": "BaseBdev4", 00:34:01.364 "uuid": "ee1d6311-7a2e-4f2a-89e5-e27db5a4cf0a", 00:34:01.364 "is_configured": true, 00:34:01.364 "data_offset": 2048, 00:34:01.364 "data_size": 63488 00:34:01.364 } 00:34:01.364 ] 00:34:01.364 }' 00:34:01.364 19:01:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:01.364 19:01:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:01.935 19:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:01.935 19:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:02.197 19:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:34:02.197 19:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:02.197 [2024-07-25 19:01:02.683464] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:02.456 19:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:02.456 19:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:02.456 19:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:02.456 19:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:02.456 19:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:02.456 19:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:02.456 19:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:02.456 19:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:02.456 19:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:02.456 19:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:02.456 19:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:02.456 19:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:02.456 19:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:02.456 "name": "Existed_Raid", 00:34:02.457 "uuid": "23d3b451-2fb9-439e-a52c-234d77cfd57f", 00:34:02.457 "strip_size_kb": 64, 00:34:02.457 "state": "configuring", 00:34:02.457 "raid_level": "raid5f", 00:34:02.457 "superblock": true, 00:34:02.457 "num_base_bdevs": 4, 00:34:02.457 "num_base_bdevs_discovered": 2, 00:34:02.457 "num_base_bdevs_operational": 4, 00:34:02.457 "base_bdevs_list": [ 00:34:02.457 { 00:34:02.457 "name": null, 00:34:02.457 "uuid": "dbb25a09-e025-4d55-b1ee-07efa0cf0ed4", 00:34:02.457 "is_configured": false, 00:34:02.457 "data_offset": 2048, 00:34:02.457 "data_size": 63488 00:34:02.457 }, 00:34:02.457 { 00:34:02.457 "name": null, 00:34:02.457 "uuid": "37962110-6d02-4c16-bd66-beb820ee0201", 00:34:02.457 "is_configured": false, 00:34:02.457 "data_offset": 2048, 00:34:02.457 "data_size": 63488 00:34:02.457 }, 00:34:02.457 { 00:34:02.457 "name": "BaseBdev3", 00:34:02.457 "uuid": "76651aa2-01e7-4138-89e3-5801f27ed46e", 00:34:02.457 "is_configured": true, 00:34:02.457 "data_offset": 2048, 00:34:02.457 "data_size": 63488 00:34:02.457 }, 00:34:02.457 { 00:34:02.457 "name": "BaseBdev4", 00:34:02.457 "uuid": "ee1d6311-7a2e-4f2a-89e5-e27db5a4cf0a", 00:34:02.457 "is_configured": true, 00:34:02.457 "data_offset": 2048, 00:34:02.457 "data_size": 63488 00:34:02.457 } 00:34:02.457 ] 00:34:02.457 }' 00:34:02.457 19:01:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:02.457 19:01:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:03.024 19:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:03.024 19:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:03.283 19:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:34:03.283 
19:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:34:03.542 [2024-07-25 19:01:03.920502] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:03.542 19:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:03.542 19:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:03.542 19:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:03.542 19:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:03.542 19:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:03.542 19:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:03.542 19:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:03.542 19:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:03.542 19:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:03.542 19:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:03.542 19:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:03.542 19:01:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:03.802 19:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:03.802 "name": "Existed_Raid", 00:34:03.802 "uuid": "23d3b451-2fb9-439e-a52c-234d77cfd57f", 00:34:03.802 "strip_size_kb": 64, 00:34:03.802 "state": "configuring", 00:34:03.802 "raid_level": "raid5f", 00:34:03.802 "superblock": true, 00:34:03.802 "num_base_bdevs": 4, 00:34:03.802 "num_base_bdevs_discovered": 3, 00:34:03.802 "num_base_bdevs_operational": 4, 00:34:03.802 "base_bdevs_list": [ 00:34:03.802 { 00:34:03.802 "name": null, 00:34:03.802 "uuid": "dbb25a09-e025-4d55-b1ee-07efa0cf0ed4", 00:34:03.802 "is_configured": false, 00:34:03.802 "data_offset": 2048, 00:34:03.802 "data_size": 63488 00:34:03.802 }, 00:34:03.802 { 00:34:03.802 "name": "BaseBdev2", 00:34:03.802 "uuid": "37962110-6d02-4c16-bd66-beb820ee0201", 00:34:03.802 "is_configured": true, 00:34:03.802 "data_offset": 2048, 00:34:03.802 "data_size": 63488 00:34:03.802 }, 00:34:03.802 { 00:34:03.802 "name": "BaseBdev3", 00:34:03.802 "uuid": "76651aa2-01e7-4138-89e3-5801f27ed46e", 00:34:03.802 "is_configured": true, 00:34:03.802 "data_offset": 2048, 00:34:03.802 "data_size": 63488 00:34:03.802 }, 00:34:03.802 { 00:34:03.802 "name": "BaseBdev4", 00:34:03.802 "uuid": "ee1d6311-7a2e-4f2a-89e5-e27db5a4cf0a", 00:34:03.802 "is_configured": true, 00:34:03.802 "data_offset": 2048, 00:34:03.802 "data_size": 63488 00:34:03.802 } 00:34:03.802 ] 00:34:03.802 }' 00:34:03.802 19:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:03.802 19:01:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:04.372 19:01:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:04.372 19:01:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:04.631 19:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:34:04.631 19:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:04.631 19:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:34:04.890 19:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u dbb25a09-e025-4d55-b1ee-07efa0cf0ed4 00:34:05.149 [2024-07-25 19:01:05.486016] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:34:05.149 [2024-07-25 19:01:05.486479] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:34:05.150 [2024-07-25 19:01:05.486609] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:05.150 [2024-07-25 19:01:05.486742] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:34:05.150 NewBaseBdev 00:34:05.150 [2024-07-25 19:01:05.491650] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:34:05.150 [2024-07-25 19:01:05.491781] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013480 00:34:05.150 [2024-07-25 19:01:05.492065] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:05.150 19:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:34:05.150 19:01:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:34:05.150 19:01:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:05.150 19:01:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:34:05.150 19:01:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:05.150 19:01:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:05.150 19:01:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:05.150 19:01:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:34:05.409 [ 00:34:05.409 { 00:34:05.409 "name": "NewBaseBdev", 00:34:05.409 "aliases": [ 00:34:05.409 "dbb25a09-e025-4d55-b1ee-07efa0cf0ed4" 00:34:05.409 ], 00:34:05.409 "product_name": "Malloc disk", 00:34:05.409 "block_size": 512, 00:34:05.409 "num_blocks": 65536, 00:34:05.409 "uuid": "dbb25a09-e025-4d55-b1ee-07efa0cf0ed4", 00:34:05.409 "assigned_rate_limits": { 00:34:05.409 "rw_ios_per_sec": 0, 00:34:05.409 "rw_mbytes_per_sec": 0, 00:34:05.409 "r_mbytes_per_sec": 0, 00:34:05.409 "w_mbytes_per_sec": 0 00:34:05.409 }, 00:34:05.409 
"claimed": true, 00:34:05.409 "claim_type": "exclusive_write", 00:34:05.409 "zoned": false, 00:34:05.409 "supported_io_types": { 00:34:05.409 "read": true, 00:34:05.409 "write": true, 00:34:05.409 "unmap": true, 00:34:05.409 "flush": true, 00:34:05.409 "reset": true, 00:34:05.409 "nvme_admin": false, 00:34:05.409 "nvme_io": false, 00:34:05.409 "nvme_io_md": false, 00:34:05.409 "write_zeroes": true, 00:34:05.409 "zcopy": true, 00:34:05.409 "get_zone_info": false, 00:34:05.409 "zone_management": false, 00:34:05.409 "zone_append": false, 00:34:05.409 "compare": false, 00:34:05.409 "compare_and_write": false, 00:34:05.409 "abort": true, 00:34:05.409 "seek_hole": false, 00:34:05.409 "seek_data": false, 00:34:05.409 "copy": true, 00:34:05.409 "nvme_iov_md": false 00:34:05.409 }, 00:34:05.409 "memory_domains": [ 00:34:05.409 { 00:34:05.409 "dma_device_id": "system", 00:34:05.409 "dma_device_type": 1 00:34:05.409 }, 00:34:05.409 { 00:34:05.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:05.409 "dma_device_type": 2 00:34:05.409 } 00:34:05.409 ], 00:34:05.409 "driver_specific": {} 00:34:05.409 } 00:34:05.409 ] 00:34:05.409 19:01:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:34:05.409 19:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:34:05.409 19:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:05.409 19:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:05.409 19:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:05.409 19:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:05.409 19:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:05.409 19:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:05.409 19:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:05.409 19:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:05.410 19:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:05.410 19:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:05.410 19:01:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:05.669 19:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:05.669 "name": "Existed_Raid", 00:34:05.669 "uuid": "23d3b451-2fb9-439e-a52c-234d77cfd57f", 00:34:05.669 "strip_size_kb": 64, 00:34:05.669 "state": "online", 00:34:05.669 "raid_level": "raid5f", 00:34:05.669 "superblock": true, 00:34:05.669 "num_base_bdevs": 4, 00:34:05.669 "num_base_bdevs_discovered": 4, 00:34:05.669 "num_base_bdevs_operational": 4, 00:34:05.669 "base_bdevs_list": [ 00:34:05.669 { 00:34:05.669 "name": "NewBaseBdev", 00:34:05.669 "uuid": "dbb25a09-e025-4d55-b1ee-07efa0cf0ed4", 00:34:05.669 "is_configured": true, 00:34:05.669 "data_offset": 2048, 00:34:05.669 "data_size": 63488 00:34:05.669 }, 00:34:05.669 { 00:34:05.669 "name": "BaseBdev2", 00:34:05.669 
"uuid": "37962110-6d02-4c16-bd66-beb820ee0201", 00:34:05.669 "is_configured": true, 00:34:05.669 "data_offset": 2048, 00:34:05.669 "data_size": 63488 00:34:05.669 }, 00:34:05.669 { 00:34:05.669 "name": "BaseBdev3", 00:34:05.669 "uuid": "76651aa2-01e7-4138-89e3-5801f27ed46e", 00:34:05.669 "is_configured": true, 00:34:05.669 "data_offset": 2048, 00:34:05.669 "data_size": 63488 00:34:05.669 }, 00:34:05.669 { 00:34:05.669 "name": "BaseBdev4", 00:34:05.669 "uuid": "ee1d6311-7a2e-4f2a-89e5-e27db5a4cf0a", 00:34:05.669 "is_configured": true, 00:34:05.669 "data_offset": 2048, 00:34:05.669 "data_size": 63488 00:34:05.669 } 00:34:05.669 ] 00:34:05.669 }' 00:34:05.669 19:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:05.669 19:01:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:06.237 19:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:34:06.237 19:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:34:06.237 19:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:06.237 19:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:06.237 19:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:06.237 19:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:34:06.237 19:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:34:06.237 19:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:06.237 [2024-07-25 19:01:06.755892] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:06.237 19:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:06.237 "name": "Existed_Raid", 00:34:06.237 "aliases": [ 00:34:06.237 "23d3b451-2fb9-439e-a52c-234d77cfd57f" 00:34:06.237 ], 00:34:06.237 "product_name": "Raid Volume", 00:34:06.237 "block_size": 512, 00:34:06.237 "num_blocks": 190464, 00:34:06.237 "uuid": "23d3b451-2fb9-439e-a52c-234d77cfd57f", 00:34:06.237 "assigned_rate_limits": { 00:34:06.237 "rw_ios_per_sec": 0, 00:34:06.237 "rw_mbytes_per_sec": 0, 00:34:06.237 "r_mbytes_per_sec": 0, 00:34:06.237 "w_mbytes_per_sec": 0 00:34:06.237 }, 00:34:06.237 "claimed": false, 00:34:06.237 "zoned": false, 00:34:06.238 "supported_io_types": { 00:34:06.238 "read": true, 00:34:06.238 "write": true, 00:34:06.238 "unmap": false, 00:34:06.238 "flush": false, 00:34:06.238 "reset": true, 00:34:06.238 "nvme_admin": false, 00:34:06.238 "nvme_io": false, 00:34:06.238 "nvme_io_md": false, 00:34:06.238 "write_zeroes": true, 00:34:06.238 "zcopy": false, 00:34:06.238 "get_zone_info": false, 00:34:06.238 "zone_management": false, 00:34:06.238 "zone_append": false, 00:34:06.238 "compare": false, 00:34:06.238 "compare_and_write": false, 00:34:06.238 "abort": false, 00:34:06.238 "seek_hole": false, 00:34:06.238 "seek_data": false, 00:34:06.238 "copy": false, 00:34:06.238 "nvme_iov_md": false 00:34:06.238 }, 00:34:06.238 "driver_specific": { 00:34:06.238 "raid": { 00:34:06.238 "uuid": "23d3b451-2fb9-439e-a52c-234d77cfd57f", 00:34:06.238 "strip_size_kb": 64, 00:34:06.238 "state": "online", 00:34:06.238 
"raid_level": "raid5f", 00:34:06.238 "superblock": true, 00:34:06.238 "num_base_bdevs": 4, 00:34:06.238 "num_base_bdevs_discovered": 4, 00:34:06.238 "num_base_bdevs_operational": 4, 00:34:06.238 "base_bdevs_list": [ 00:34:06.238 { 00:34:06.238 "name": "NewBaseBdev", 00:34:06.238 "uuid": "dbb25a09-e025-4d55-b1ee-07efa0cf0ed4", 00:34:06.238 "is_configured": true, 00:34:06.238 "data_offset": 2048, 00:34:06.238 "data_size": 63488 00:34:06.238 }, 00:34:06.238 { 00:34:06.238 "name": "BaseBdev2", 00:34:06.238 "uuid": "37962110-6d02-4c16-bd66-beb820ee0201", 00:34:06.238 "is_configured": true, 00:34:06.238 "data_offset": 2048, 00:34:06.238 "data_size": 63488 00:34:06.238 }, 00:34:06.238 { 00:34:06.238 "name": "BaseBdev3", 00:34:06.238 "uuid": "76651aa2-01e7-4138-89e3-5801f27ed46e", 00:34:06.238 "is_configured": true, 00:34:06.238 "data_offset": 2048, 00:34:06.238 "data_size": 63488 00:34:06.238 }, 00:34:06.238 { 00:34:06.238 "name": "BaseBdev4", 00:34:06.238 "uuid": "ee1d6311-7a2e-4f2a-89e5-e27db5a4cf0a", 00:34:06.238 "is_configured": true, 00:34:06.238 "data_offset": 2048, 00:34:06.238 "data_size": 63488 00:34:06.238 } 00:34:06.238 ] 00:34:06.238 } 00:34:06.238 } 00:34:06.238 }' 00:34:06.238 19:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:06.497 19:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:34:06.497 BaseBdev2 00:34:06.497 BaseBdev3 00:34:06.497 BaseBdev4' 00:34:06.497 19:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:06.497 19:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:06.497 19:01:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:34:06.755 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:06.755 "name": "NewBaseBdev", 00:34:06.755 "aliases": [ 00:34:06.755 "dbb25a09-e025-4d55-b1ee-07efa0cf0ed4" 00:34:06.755 ], 00:34:06.755 "product_name": "Malloc disk", 00:34:06.755 "block_size": 512, 00:34:06.755 "num_blocks": 65536, 00:34:06.755 "uuid": "dbb25a09-e025-4d55-b1ee-07efa0cf0ed4", 00:34:06.755 "assigned_rate_limits": { 00:34:06.755 "rw_ios_per_sec": 0, 00:34:06.755 "rw_mbytes_per_sec": 0, 00:34:06.755 "r_mbytes_per_sec": 0, 00:34:06.755 "w_mbytes_per_sec": 0 00:34:06.755 }, 00:34:06.755 "claimed": true, 00:34:06.755 "claim_type": "exclusive_write", 00:34:06.755 "zoned": false, 00:34:06.755 "supported_io_types": { 00:34:06.755 "read": true, 00:34:06.755 "write": true, 00:34:06.755 "unmap": true, 00:34:06.755 "flush": true, 00:34:06.755 "reset": true, 00:34:06.756 "nvme_admin": false, 00:34:06.756 "nvme_io": false, 00:34:06.756 "nvme_io_md": false, 00:34:06.756 "write_zeroes": true, 00:34:06.756 "zcopy": true, 00:34:06.756 "get_zone_info": false, 00:34:06.756 "zone_management": false, 00:34:06.756 "zone_append": false, 00:34:06.756 "compare": false, 00:34:06.756 "compare_and_write": false, 00:34:06.756 "abort": true, 00:34:06.756 "seek_hole": false, 00:34:06.756 "seek_data": false, 00:34:06.756 "copy": true, 00:34:06.756 "nvme_iov_md": false 00:34:06.756 }, 00:34:06.756 "memory_domains": [ 00:34:06.756 { 00:34:06.756 "dma_device_id": "system", 00:34:06.756 "dma_device_type": 1 00:34:06.756 }, 00:34:06.756 { 00:34:06.756 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:06.756 "dma_device_type": 2 00:34:06.756 } 00:34:06.756 ], 00:34:06.756 "driver_specific": {} 00:34:06.756 }' 00:34:06.756 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:06.756 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:06.756 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:06.756 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:06.756 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:06.756 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:06.756 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:06.756 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:06.756 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:06.756 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:07.015 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:07.015 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:07.015 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:07.015 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:34:07.015 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:07.274 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:07.274 "name": "BaseBdev2", 00:34:07.274 "aliases": [ 00:34:07.274 "37962110-6d02-4c16-bd66-beb820ee0201" 00:34:07.274 ], 00:34:07.274 "product_name": "Malloc disk", 00:34:07.274 "block_size": 512, 00:34:07.274 "num_blocks": 65536, 00:34:07.274 "uuid": "37962110-6d02-4c16-bd66-beb820ee0201", 00:34:07.274 "assigned_rate_limits": { 00:34:07.274 "rw_ios_per_sec": 0, 00:34:07.274 "rw_mbytes_per_sec": 0, 00:34:07.274 "r_mbytes_per_sec": 0, 00:34:07.274 "w_mbytes_per_sec": 0 00:34:07.274 }, 00:34:07.274 "claimed": true, 00:34:07.274 "claim_type": "exclusive_write", 00:34:07.274 "zoned": false, 00:34:07.274 "supported_io_types": { 00:34:07.274 "read": true, 00:34:07.274 "write": true, 00:34:07.274 "unmap": true, 00:34:07.274 "flush": true, 00:34:07.274 "reset": true, 00:34:07.274 "nvme_admin": false, 00:34:07.274 "nvme_io": false, 00:34:07.274 "nvme_io_md": false, 00:34:07.274 "write_zeroes": true, 00:34:07.274 "zcopy": true, 00:34:07.274 "get_zone_info": false, 00:34:07.274 "zone_management": false, 00:34:07.274 "zone_append": false, 00:34:07.274 "compare": false, 00:34:07.274 "compare_and_write": false, 00:34:07.274 "abort": true, 00:34:07.274 "seek_hole": false, 00:34:07.274 "seek_data": false, 00:34:07.274 "copy": true, 00:34:07.274 "nvme_iov_md": false 00:34:07.274 }, 00:34:07.274 "memory_domains": [ 00:34:07.274 { 00:34:07.274 "dma_device_id": "system", 00:34:07.274 "dma_device_type": 1 00:34:07.274 }, 00:34:07.274 { 00:34:07.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:07.274 "dma_device_type": 2 00:34:07.274 } 00:34:07.274 ], 00:34:07.274 
"driver_specific": {} 00:34:07.274 }' 00:34:07.274 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:07.274 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:07.274 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:07.274 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:07.274 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:07.274 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:07.274 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:07.534 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:07.534 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:07.534 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:07.534 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:07.534 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:07.534 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:07.534 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:34:07.534 19:01:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:07.792 19:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:07.792 "name": "BaseBdev3", 00:34:07.792 "aliases": [ 00:34:07.792 "76651aa2-01e7-4138-89e3-5801f27ed46e" 00:34:07.792 ], 00:34:07.792 "product_name": "Malloc disk", 00:34:07.792 "block_size": 512, 00:34:07.792 "num_blocks": 65536, 00:34:07.792 "uuid": "76651aa2-01e7-4138-89e3-5801f27ed46e", 00:34:07.792 "assigned_rate_limits": { 00:34:07.792 "rw_ios_per_sec": 0, 00:34:07.792 "rw_mbytes_per_sec": 0, 00:34:07.792 "r_mbytes_per_sec": 0, 00:34:07.792 "w_mbytes_per_sec": 0 00:34:07.792 }, 00:34:07.792 "claimed": true, 00:34:07.792 "claim_type": "exclusive_write", 00:34:07.792 "zoned": false, 00:34:07.792 "supported_io_types": { 00:34:07.792 "read": true, 00:34:07.792 "write": true, 00:34:07.792 "unmap": true, 00:34:07.792 "flush": true, 00:34:07.792 "reset": true, 00:34:07.792 "nvme_admin": false, 00:34:07.792 "nvme_io": false, 00:34:07.792 "nvme_io_md": false, 00:34:07.792 "write_zeroes": true, 00:34:07.792 "zcopy": true, 00:34:07.792 "get_zone_info": false, 00:34:07.792 "zone_management": false, 00:34:07.792 "zone_append": false, 00:34:07.792 "compare": false, 00:34:07.792 "compare_and_write": false, 00:34:07.792 "abort": true, 00:34:07.792 "seek_hole": false, 00:34:07.792 "seek_data": false, 00:34:07.792 "copy": true, 00:34:07.792 "nvme_iov_md": false 00:34:07.792 }, 00:34:07.792 "memory_domains": [ 00:34:07.792 { 00:34:07.792 "dma_device_id": "system", 00:34:07.792 "dma_device_type": 1 00:34:07.792 }, 00:34:07.792 { 00:34:07.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:07.792 "dma_device_type": 2 00:34:07.792 } 00:34:07.792 ], 00:34:07.793 "driver_specific": {} 00:34:07.793 }' 00:34:07.793 19:01:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:07.793 19:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:07.793 19:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:07.793 19:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:08.051 19:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:08.051 19:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:08.051 19:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:08.051 19:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:08.051 19:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:08.051 19:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:08.051 19:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:08.051 19:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:08.051 19:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:08.051 19:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:34:08.051 19:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:08.618 19:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:08.618 "name": "BaseBdev4", 00:34:08.618 "aliases": [ 00:34:08.618 "ee1d6311-7a2e-4f2a-89e5-e27db5a4cf0a" 00:34:08.618 ], 00:34:08.618 "product_name": "Malloc disk", 00:34:08.618 "block_size": 512, 00:34:08.618 "num_blocks": 65536, 00:34:08.618 "uuid": "ee1d6311-7a2e-4f2a-89e5-e27db5a4cf0a", 00:34:08.618 "assigned_rate_limits": { 00:34:08.618 "rw_ios_per_sec": 0, 00:34:08.618 "rw_mbytes_per_sec": 0, 00:34:08.618 "r_mbytes_per_sec": 0, 00:34:08.618 "w_mbytes_per_sec": 0 00:34:08.618 }, 00:34:08.618 "claimed": true, 00:34:08.618 "claim_type": "exclusive_write", 00:34:08.618 "zoned": false, 00:34:08.618 "supported_io_types": { 00:34:08.618 "read": true, 00:34:08.618 "write": true, 00:34:08.618 "unmap": true, 00:34:08.618 "flush": true, 00:34:08.618 "reset": true, 00:34:08.618 "nvme_admin": false, 00:34:08.618 "nvme_io": false, 00:34:08.618 "nvme_io_md": false, 00:34:08.618 "write_zeroes": true, 00:34:08.618 "zcopy": true, 00:34:08.618 "get_zone_info": false, 00:34:08.618 "zone_management": false, 00:34:08.618 "zone_append": false, 00:34:08.618 "compare": false, 00:34:08.618 "compare_and_write": false, 00:34:08.618 "abort": true, 00:34:08.618 "seek_hole": false, 00:34:08.618 "seek_data": false, 00:34:08.618 "copy": true, 00:34:08.618 "nvme_iov_md": false 00:34:08.618 }, 00:34:08.618 "memory_domains": [ 00:34:08.618 { 00:34:08.618 "dma_device_id": "system", 00:34:08.618 "dma_device_type": 1 00:34:08.618 }, 00:34:08.618 { 00:34:08.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:08.618 "dma_device_type": 2 00:34:08.618 } 00:34:08.618 ], 00:34:08.618 "driver_specific": {} 00:34:08.618 }' 00:34:08.618 19:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:08.618 19:01:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:08.618 19:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:08.618 19:01:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:08.618 19:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:08.618 19:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:08.618 19:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:08.618 19:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:08.618 19:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:08.618 19:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:08.876 19:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:08.876 19:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:08.876 19:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:09.134 [2024-07-25 19:01:09.518401] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:09.134 [2024-07-25 19:01:09.518534] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:09.134 [2024-07-25 19:01:09.518755] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:09.134 [2024-07-25 19:01:09.519069] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:09.134 [2024-07-25 19:01:09.519150] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name Existed_Raid, state offline 00:34:09.134 19:01:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 154317 00:34:09.134 19:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 154317 ']' 00:34:09.134 19:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 154317 00:34:09.134 19:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:34:09.134 19:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:09.134 19:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 154317 00:34:09.134 19:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:09.134 19:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:09.134 19:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 154317' 00:34:09.134 killing process with pid 154317 00:34:09.134 19:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 154317 00:34:09.134 [2024-07-25 19:01:09.573096] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:09.134 19:01:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 154317 00:34:09.393 [2024-07-25 19:01:09.867636] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:34:10.770 ************************************ 00:34:10.770 END TEST raid5f_state_function_test_sb 00:34:10.770 ************************************ 00:34:10.770 19:01:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:34:10.770 00:34:10.770 real 0m31.476s 00:34:10.770 user 0m56.519s 00:34:10.770 sys 0m5.325s 00:34:10.770 19:01:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:10.770 19:01:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:10.770 19:01:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:34:10.770 19:01:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:10.771 19:01:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:10.771 19:01:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:10.771 ************************************ 00:34:10.771 START TEST raid5f_superblock_test 00:34:10.771 ************************************ 00:34:10.771 19:01:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:34:10.771 19:01:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid5f 00:34:10.771 19:01:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:34:10.771 19:01:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:34:10.771 19:01:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:34:10.771 19:01:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:34:10.771 19:01:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:34:10.771 19:01:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:34:10.771 19:01:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:34:10.771 19:01:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:34:10.771 19:01:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:34:10.771 19:01:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:34:10.771 19:01:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:34:10.771 19:01:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:34:10.771 19:01:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid5f '!=' raid1 ']' 00:34:10.771 19:01:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:34:10.771 19:01:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:34:10.771 19:01:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=155381 00:34:10.771 19:01:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 155381 /var/tmp/spdk-raid.sock 00:34:10.771 19:01:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 155381 ']' 00:34:10.771 19:01:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:10.771 19:01:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:10.771 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:10.771 19:01:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:10.771 19:01:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:34:10.771 19:01:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:10.771 19:01:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:10.771 [2024-07-25 19:01:11.079989] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:34:10.771 [2024-07-25 19:01:11.080207] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155381 ] 00:34:10.771 [2024-07-25 19:01:11.264933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.030 [2024-07-25 19:01:11.559599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.289 [2024-07-25 19:01:11.755053] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:11.549 19:01:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:11.549 19:01:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:34:11.549 19:01:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:34:11.549 19:01:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:34:11.549 19:01:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:34:11.549 19:01:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:34:11.549 19:01:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:34:11.549 19:01:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:11.549 19:01:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:34:11.549 19:01:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:11.549 19:01:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:34:11.549 malloc1 00:34:11.549 19:01:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:11.808 [2024-07-25 19:01:12.294345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:11.808 [2024-07-25 19:01:12.294454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:11.808 [2024-07-25 19:01:12.294492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:34:11.808 [2024-07-25 19:01:12.294520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:11.808 [2024-07-25 19:01:12.297110] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:11.808 [2024-07-25 19:01:12.297160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:11.808 pt1 00:34:11.808 19:01:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:34:11.808 19:01:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:34:11.808 19:01:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:34:11.808 19:01:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:34:11.808 19:01:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:34:11.808 19:01:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:11.808 19:01:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:34:11.808 19:01:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:11.808 19:01:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:34:12.067 malloc2 00:34:12.067 19:01:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:12.326 [2024-07-25 19:01:12.766604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:12.326 [2024-07-25 19:01:12.766700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:12.326 [2024-07-25 19:01:12.766732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:34:12.326 [2024-07-25 19:01:12.766751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:12.326 [2024-07-25 19:01:12.769205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:12.326 [2024-07-25 19:01:12.769253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:12.326 pt2 00:34:12.326 19:01:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:34:12.326 19:01:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:34:12.326 19:01:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:34:12.326 19:01:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:34:12.326 19:01:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:34:12.326 19:01:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:12.326 19:01:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:34:12.326 19:01:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:12.326 19:01:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:34:12.584 malloc3 00:34:12.584 19:01:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:12.843 [2024-07-25 19:01:13.188777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:12.843 [2024-07-25 19:01:13.188879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:12.843 [2024-07-25 19:01:13.188922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:34:12.843 [2024-07-25 19:01:13.188965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:12.843 [2024-07-25 19:01:13.191215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:12.843 [2024-07-25 19:01:13.191281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:12.843 pt3 00:34:12.843 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:34:12.843 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:34:12.843 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:34:12.843 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:34:12.843 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:34:12.843 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:12.843 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:34:12.843 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:12.843 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:34:12.843 malloc4 00:34:12.843 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:34:13.100 [2024-07-25 19:01:13.550754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:34:13.100 [2024-07-25 19:01:13.550871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:13.100 [2024-07-25 19:01:13.550903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:34:13.100 [2024-07-25 19:01:13.550927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:13.100 [2024-07-25 19:01:13.553125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:13.100 [2024-07-25 19:01:13.553185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:34:13.100 pt4 00:34:13.100 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:34:13.100 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:34:13.100 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:34:13.357 [2024-07-25 19:01:13.758821] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:34:13.357 [2024-07-25 19:01:13.760669] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:13.357 [2024-07-25 19:01:13.760730] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:13.357 [2024-07-25 19:01:13.760793] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:34:13.357 [2024-07-25 19:01:13.760985] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:34:13.357 [2024-07-25 19:01:13.760995] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:13.357 [2024-07-25 19:01:13.761119] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:34:13.357 [2024-07-25 19:01:13.766167] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:34:13.357 [2024-07-25 19:01:13.766189] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:34:13.357 [2024-07-25 19:01:13.766362] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:13.357 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:34:13.357 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:13.357 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:13.357 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:13.357 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:13.357 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:13.357 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:13.357 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:13.357 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:13.357 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:13.357 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:13.357 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:13.615 19:01:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:13.615 "name": "raid_bdev1", 00:34:13.615 "uuid": "e27a589d-5380-46e9-8097-97dea101df4d", 00:34:13.615 "strip_size_kb": 64, 00:34:13.615 "state": "online", 00:34:13.615 "raid_level": "raid5f", 00:34:13.615 "superblock": true, 00:34:13.615 "num_base_bdevs": 4, 00:34:13.615 "num_base_bdevs_discovered": 4, 00:34:13.615 "num_base_bdevs_operational": 4, 00:34:13.615 "base_bdevs_list": [ 00:34:13.615 { 00:34:13.615 "name": "pt1", 00:34:13.615 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:13.615 "is_configured": true, 00:34:13.615 "data_offset": 2048, 00:34:13.615 "data_size": 63488 00:34:13.615 }, 00:34:13.615 { 00:34:13.615 "name": "pt2", 00:34:13.615 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:13.615 "is_configured": true, 00:34:13.615 "data_offset": 2048, 00:34:13.615 "data_size": 63488 00:34:13.615 }, 00:34:13.615 { 00:34:13.615 "name": "pt3", 
00:34:13.615 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:13.615 "is_configured": true, 00:34:13.615 "data_offset": 2048, 00:34:13.615 "data_size": 63488 00:34:13.615 }, 00:34:13.615 { 00:34:13.615 "name": "pt4", 00:34:13.615 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:13.615 "is_configured": true, 00:34:13.615 "data_offset": 2048, 00:34:13.615 "data_size": 63488 00:34:13.615 } 00:34:13.615 ] 00:34:13.615 }' 00:34:13.615 19:01:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:13.615 19:01:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.180 19:01:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:34:14.180 19:01:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:34:14.180 19:01:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:14.180 19:01:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:14.180 19:01:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:14.180 19:01:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:34:14.180 19:01:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:14.180 19:01:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:14.438 [2024-07-25 19:01:14.857651] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:14.438 19:01:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:14.438 "name": "raid_bdev1", 00:34:14.438 "aliases": [ 00:34:14.438 "e27a589d-5380-46e9-8097-97dea101df4d" 00:34:14.438 ], 00:34:14.438 "product_name": "Raid Volume", 00:34:14.438 "block_size": 512, 00:34:14.438 "num_blocks": 190464, 00:34:14.438 "uuid": "e27a589d-5380-46e9-8097-97dea101df4d", 00:34:14.438 "assigned_rate_limits": { 00:34:14.438 "rw_ios_per_sec": 0, 00:34:14.438 "rw_mbytes_per_sec": 0, 00:34:14.438 "r_mbytes_per_sec": 0, 00:34:14.438 "w_mbytes_per_sec": 0 00:34:14.438 }, 00:34:14.438 "claimed": false, 00:34:14.438 "zoned": false, 00:34:14.438 "supported_io_types": { 00:34:14.438 "read": true, 00:34:14.438 "write": true, 00:34:14.438 "unmap": false, 00:34:14.438 "flush": false, 00:34:14.438 "reset": true, 00:34:14.438 "nvme_admin": false, 00:34:14.438 "nvme_io": false, 00:34:14.438 "nvme_io_md": false, 00:34:14.438 "write_zeroes": true, 00:34:14.438 "zcopy": false, 00:34:14.438 "get_zone_info": false, 00:34:14.438 "zone_management": false, 00:34:14.438 "zone_append": false, 00:34:14.438 "compare": false, 00:34:14.438 "compare_and_write": false, 00:34:14.438 "abort": false, 00:34:14.438 "seek_hole": false, 00:34:14.438 "seek_data": false, 00:34:14.438 "copy": false, 00:34:14.438 "nvme_iov_md": false 00:34:14.438 }, 00:34:14.438 "driver_specific": { 00:34:14.438 "raid": { 00:34:14.438 "uuid": "e27a589d-5380-46e9-8097-97dea101df4d", 00:34:14.438 "strip_size_kb": 64, 00:34:14.438 "state": "online", 00:34:14.438 "raid_level": "raid5f", 00:34:14.438 "superblock": true, 00:34:14.438 "num_base_bdevs": 4, 00:34:14.438 "num_base_bdevs_discovered": 4, 00:34:14.438 "num_base_bdevs_operational": 4, 00:34:14.438 "base_bdevs_list": [ 00:34:14.438 { 00:34:14.438 "name": "pt1", 00:34:14.438 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:34:14.438 "is_configured": true, 00:34:14.438 "data_offset": 2048, 00:34:14.438 "data_size": 63488 00:34:14.438 }, 00:34:14.438 { 00:34:14.438 "name": "pt2", 00:34:14.438 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:14.438 "is_configured": true, 00:34:14.438 "data_offset": 2048, 00:34:14.438 "data_size": 63488 00:34:14.438 }, 00:34:14.438 { 00:34:14.439 "name": "pt3", 00:34:14.439 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:14.439 "is_configured": true, 00:34:14.439 "data_offset": 2048, 00:34:14.439 "data_size": 63488 00:34:14.439 }, 00:34:14.439 { 00:34:14.439 "name": "pt4", 00:34:14.439 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:14.439 "is_configured": true, 00:34:14.439 "data_offset": 2048, 00:34:14.439 "data_size": 63488 00:34:14.439 } 00:34:14.439 ] 00:34:14.439 } 00:34:14.439 } 00:34:14.439 }' 00:34:14.439 19:01:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:14.439 19:01:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:34:14.439 pt2 00:34:14.439 pt3 00:34:14.439 pt4' 00:34:14.439 19:01:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:14.439 19:01:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:34:14.439 19:01:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:14.696 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:14.696 "name": "pt1", 00:34:14.696 "aliases": [ 00:34:14.696 "00000000-0000-0000-0000-000000000001" 00:34:14.696 ], 00:34:14.696 "product_name": "passthru", 00:34:14.696 "block_size": 512, 00:34:14.696 "num_blocks": 65536, 00:34:14.696 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:14.696 "assigned_rate_limits": { 00:34:14.696 "rw_ios_per_sec": 0, 00:34:14.696 "rw_mbytes_per_sec": 0, 00:34:14.696 "r_mbytes_per_sec": 0, 00:34:14.696 "w_mbytes_per_sec": 0 00:34:14.696 }, 00:34:14.696 "claimed": true, 00:34:14.696 "claim_type": "exclusive_write", 00:34:14.696 "zoned": false, 00:34:14.696 "supported_io_types": { 00:34:14.696 "read": true, 00:34:14.696 "write": true, 00:34:14.696 "unmap": true, 00:34:14.696 "flush": true, 00:34:14.696 "reset": true, 00:34:14.696 "nvme_admin": false, 00:34:14.696 "nvme_io": false, 00:34:14.696 "nvme_io_md": false, 00:34:14.696 "write_zeroes": true, 00:34:14.696 "zcopy": true, 00:34:14.696 "get_zone_info": false, 00:34:14.696 "zone_management": false, 00:34:14.696 "zone_append": false, 00:34:14.696 "compare": false, 00:34:14.696 "compare_and_write": false, 00:34:14.696 "abort": true, 00:34:14.696 "seek_hole": false, 00:34:14.696 "seek_data": false, 00:34:14.696 "copy": true, 00:34:14.696 "nvme_iov_md": false 00:34:14.696 }, 00:34:14.696 "memory_domains": [ 00:34:14.696 { 00:34:14.696 "dma_device_id": "system", 00:34:14.696 "dma_device_type": 1 00:34:14.696 }, 00:34:14.696 { 00:34:14.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:14.696 "dma_device_type": 2 00:34:14.696 } 00:34:14.696 ], 00:34:14.696 "driver_specific": { 00:34:14.696 "passthru": { 00:34:14.696 "name": "pt1", 00:34:14.696 "base_bdev_name": "malloc1" 00:34:14.696 } 00:34:14.696 } 00:34:14.696 }' 00:34:14.696 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:14.696 
19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:14.696 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:14.696 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:14.696 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:14.955 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:14.955 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:14.955 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:14.955 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:14.955 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:14.955 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:14.955 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:14.955 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:14.955 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:34:14.955 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:15.213 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:15.213 "name": "pt2", 00:34:15.213 "aliases": [ 00:34:15.213 "00000000-0000-0000-0000-000000000002" 00:34:15.213 ], 00:34:15.213 "product_name": "passthru", 00:34:15.213 "block_size": 512, 00:34:15.213 "num_blocks": 65536, 00:34:15.213 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:15.213 "assigned_rate_limits": { 00:34:15.213 "rw_ios_per_sec": 0, 00:34:15.213 "rw_mbytes_per_sec": 0, 00:34:15.213 "r_mbytes_per_sec": 0, 00:34:15.213 "w_mbytes_per_sec": 0 00:34:15.213 }, 00:34:15.213 "claimed": true, 00:34:15.213 "claim_type": "exclusive_write", 00:34:15.213 "zoned": false, 00:34:15.213 "supported_io_types": { 00:34:15.213 "read": true, 00:34:15.213 "write": true, 00:34:15.213 "unmap": true, 00:34:15.213 "flush": true, 00:34:15.213 "reset": true, 00:34:15.213 "nvme_admin": false, 00:34:15.213 "nvme_io": false, 00:34:15.213 "nvme_io_md": false, 00:34:15.213 "write_zeroes": true, 00:34:15.213 "zcopy": true, 00:34:15.213 "get_zone_info": false, 00:34:15.213 "zone_management": false, 00:34:15.213 "zone_append": false, 00:34:15.213 "compare": false, 00:34:15.213 "compare_and_write": false, 00:34:15.213 "abort": true, 00:34:15.213 "seek_hole": false, 00:34:15.213 "seek_data": false, 00:34:15.213 "copy": true, 00:34:15.213 "nvme_iov_md": false 00:34:15.213 }, 00:34:15.213 "memory_domains": [ 00:34:15.213 { 00:34:15.213 "dma_device_id": "system", 00:34:15.213 "dma_device_type": 1 00:34:15.213 }, 00:34:15.213 { 00:34:15.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:15.213 "dma_device_type": 2 00:34:15.213 } 00:34:15.213 ], 00:34:15.213 "driver_specific": { 00:34:15.213 "passthru": { 00:34:15.213 "name": "pt2", 00:34:15.213 "base_bdev_name": "malloc2" 00:34:15.213 } 00:34:15.213 } 00:34:15.213 }' 00:34:15.213 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:15.213 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:15.213 19:01:15 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:15.213 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:15.471 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:15.471 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:15.471 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:15.471 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:15.471 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:15.471 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:15.471 19:01:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:15.471 19:01:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:15.471 19:01:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:15.471 19:01:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:34:15.471 19:01:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:15.729 19:01:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:15.729 "name": "pt3", 00:34:15.729 "aliases": [ 00:34:15.729 "00000000-0000-0000-0000-000000000003" 00:34:15.729 ], 00:34:15.729 "product_name": "passthru", 00:34:15.729 "block_size": 512, 00:34:15.729 "num_blocks": 65536, 00:34:15.729 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:15.729 "assigned_rate_limits": { 00:34:15.729 "rw_ios_per_sec": 0, 00:34:15.729 "rw_mbytes_per_sec": 0, 00:34:15.729 "r_mbytes_per_sec": 0, 00:34:15.729 "w_mbytes_per_sec": 0 00:34:15.729 }, 00:34:15.729 "claimed": true, 00:34:15.729 "claim_type": "exclusive_write", 00:34:15.729 "zoned": false, 00:34:15.729 "supported_io_types": { 00:34:15.729 "read": true, 00:34:15.729 "write": true, 00:34:15.729 "unmap": true, 00:34:15.729 "flush": true, 00:34:15.729 "reset": true, 00:34:15.729 "nvme_admin": false, 00:34:15.729 "nvme_io": false, 00:34:15.729 "nvme_io_md": false, 00:34:15.729 "write_zeroes": true, 00:34:15.729 "zcopy": true, 00:34:15.729 "get_zone_info": false, 00:34:15.729 "zone_management": false, 00:34:15.729 "zone_append": false, 00:34:15.729 "compare": false, 00:34:15.729 "compare_and_write": false, 00:34:15.729 "abort": true, 00:34:15.729 "seek_hole": false, 00:34:15.729 "seek_data": false, 00:34:15.729 "copy": true, 00:34:15.729 "nvme_iov_md": false 00:34:15.729 }, 00:34:15.729 "memory_domains": [ 00:34:15.729 { 00:34:15.729 "dma_device_id": "system", 00:34:15.729 "dma_device_type": 1 00:34:15.729 }, 00:34:15.729 { 00:34:15.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:15.729 "dma_device_type": 2 00:34:15.729 } 00:34:15.729 ], 00:34:15.729 "driver_specific": { 00:34:15.729 "passthru": { 00:34:15.729 "name": "pt3", 00:34:15.729 "base_bdev_name": "malloc3" 00:34:15.729 } 00:34:15.729 } 00:34:15.729 }' 00:34:15.729 19:01:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:15.986 19:01:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:15.986 19:01:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:15.986 19:01:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:15.986 19:01:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:15.986 19:01:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:15.986 19:01:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:15.986 19:01:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:15.986 19:01:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:15.986 19:01:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:16.243 19:01:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:16.243 19:01:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:16.243 19:01:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:16.243 19:01:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:34:16.243 19:01:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:16.500 19:01:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:16.500 "name": "pt4", 00:34:16.500 "aliases": [ 00:34:16.500 "00000000-0000-0000-0000-000000000004" 00:34:16.500 ], 00:34:16.500 "product_name": "passthru", 00:34:16.500 "block_size": 512, 00:34:16.500 "num_blocks": 65536, 00:34:16.500 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:16.500 "assigned_rate_limits": { 00:34:16.500 "rw_ios_per_sec": 0, 00:34:16.500 "rw_mbytes_per_sec": 0, 00:34:16.500 "r_mbytes_per_sec": 0, 00:34:16.500 "w_mbytes_per_sec": 0 00:34:16.500 }, 00:34:16.500 "claimed": true, 00:34:16.500 "claim_type": "exclusive_write", 00:34:16.500 "zoned": false, 00:34:16.500 "supported_io_types": { 00:34:16.500 "read": true, 00:34:16.500 "write": true, 00:34:16.500 "unmap": true, 00:34:16.500 "flush": true, 00:34:16.500 "reset": true, 00:34:16.500 "nvme_admin": false, 00:34:16.500 "nvme_io": false, 00:34:16.500 "nvme_io_md": false, 00:34:16.500 "write_zeroes": true, 00:34:16.500 "zcopy": true, 00:34:16.500 "get_zone_info": false, 00:34:16.500 "zone_management": false, 00:34:16.500 "zone_append": false, 00:34:16.500 "compare": false, 00:34:16.500 "compare_and_write": false, 00:34:16.500 "abort": true, 00:34:16.500 "seek_hole": false, 00:34:16.500 "seek_data": false, 00:34:16.500 "copy": true, 00:34:16.500 "nvme_iov_md": false 00:34:16.500 }, 00:34:16.500 "memory_domains": [ 00:34:16.500 { 00:34:16.500 "dma_device_id": "system", 00:34:16.500 "dma_device_type": 1 00:34:16.500 }, 00:34:16.500 { 00:34:16.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:16.500 "dma_device_type": 2 00:34:16.500 } 00:34:16.500 ], 00:34:16.500 "driver_specific": { 00:34:16.500 "passthru": { 00:34:16.500 "name": "pt4", 00:34:16.500 "base_bdev_name": "malloc4" 00:34:16.500 } 00:34:16.500 } 00:34:16.500 }' 00:34:16.500 19:01:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:16.500 19:01:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:16.500 19:01:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:16.500 19:01:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:16.500 19:01:17 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:16.757 19:01:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:16.757 19:01:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:16.757 19:01:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:16.757 19:01:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:16.757 19:01:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:16.757 19:01:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:16.757 19:01:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:16.758 19:01:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:16.758 19:01:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:34:17.015 [2024-07-25 19:01:17.518162] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:17.015 19:01:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=e27a589d-5380-46e9-8097-97dea101df4d 00:34:17.015 19:01:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z e27a589d-5380-46e9-8097-97dea101df4d ']' 00:34:17.015 19:01:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:17.273 [2024-07-25 19:01:17.778061] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:17.273 [2024-07-25 19:01:17.778089] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:17.273 [2024-07-25 19:01:17.778175] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:17.273 [2024-07-25 19:01:17.778250] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:17.273 [2024-07-25 19:01:17.778259] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:34:17.273 19:01:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:17.273 19:01:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:34:17.531 19:01:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:34:17.531 19:01:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:34:17.531 19:01:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:34:17.531 19:01:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:34:17.789 19:01:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:34:17.789 19:01:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:17.789 19:01:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:34:17.789 19:01:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:34:18.047 19:01:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:34:18.047 19:01:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:34:18.305 19:01:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:34:18.305 19:01:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:34:18.563 19:01:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:34:18.563 19:01:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:34:18.563 19:01:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:34:18.563 19:01:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:34:18.563 19:01:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:18.563 19:01:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:18.563 19:01:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:18.563 19:01:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:18.563 19:01:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:18.563 19:01:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:18.563 19:01:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:18.563 19:01:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:18.563 19:01:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:34:18.821 [2024-07-25 19:01:19.218292] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:34:18.821 [2024-07-25 19:01:19.220206] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:34:18.821 [2024-07-25 19:01:19.220263] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:34:18.821 [2024-07-25 19:01:19.220290] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:34:18.821 [2024-07-25 19:01:19.220331] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:34:18.821 [2024-07-25 19:01:19.220404] 
bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:34:18.821 [2024-07-25 19:01:19.220432] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:34:18.821 [2024-07-25 19:01:19.220492] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:34:18.821 [2024-07-25 19:01:19.220515] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:18.821 [2024-07-25 19:01:19.220524] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state configuring 00:34:18.821 request: 00:34:18.821 { 00:34:18.821 "name": "raid_bdev1", 00:34:18.821 "raid_level": "raid5f", 00:34:18.821 "base_bdevs": [ 00:34:18.821 "malloc1", 00:34:18.821 "malloc2", 00:34:18.821 "malloc3", 00:34:18.821 "malloc4" 00:34:18.821 ], 00:34:18.821 "strip_size_kb": 64, 00:34:18.821 "superblock": false, 00:34:18.821 "method": "bdev_raid_create", 00:34:18.821 "req_id": 1 00:34:18.821 } 00:34:18.821 Got JSON-RPC error response 00:34:18.821 response: 00:34:18.821 { 00:34:18.821 "code": -17, 00:34:18.821 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:34:18.821 } 00:34:18.821 19:01:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:34:18.821 19:01:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:18.821 19:01:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:18.821 19:01:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:18.821 19:01:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:18.821 19:01:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:34:19.079 19:01:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:34:19.079 19:01:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:34:19.079 19:01:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:19.079 [2024-07-25 19:01:19.574331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:19.079 [2024-07-25 19:01:19.574402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:19.079 [2024-07-25 19:01:19.574427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:34:19.079 [2024-07-25 19:01:19.574466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:19.079 [2024-07-25 19:01:19.576693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:19.079 [2024-07-25 19:01:19.576734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:19.079 [2024-07-25 19:01:19.576847] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:34:19.079 [2024-07-25 19:01:19.576891] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:19.079 pt1 00:34:19.079 19:01:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state 
raid_bdev1 configuring raid5f 64 4 00:34:19.079 19:01:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:19.079 19:01:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:19.079 19:01:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:19.079 19:01:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:19.079 19:01:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:19.079 19:01:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:19.079 19:01:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:19.079 19:01:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:19.079 19:01:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:19.079 19:01:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:19.079 19:01:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:19.337 19:01:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:19.337 "name": "raid_bdev1", 00:34:19.337 "uuid": "e27a589d-5380-46e9-8097-97dea101df4d", 00:34:19.337 "strip_size_kb": 64, 00:34:19.337 "state": "configuring", 00:34:19.337 "raid_level": "raid5f", 00:34:19.337 "superblock": true, 00:34:19.337 "num_base_bdevs": 4, 00:34:19.337 "num_base_bdevs_discovered": 1, 00:34:19.337 "num_base_bdevs_operational": 4, 00:34:19.337 "base_bdevs_list": [ 00:34:19.337 { 00:34:19.337 "name": "pt1", 00:34:19.337 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:19.337 "is_configured": true, 00:34:19.337 "data_offset": 2048, 00:34:19.337 "data_size": 63488 00:34:19.337 }, 00:34:19.337 { 00:34:19.337 "name": null, 00:34:19.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:19.337 "is_configured": false, 00:34:19.337 "data_offset": 2048, 00:34:19.337 "data_size": 63488 00:34:19.337 }, 00:34:19.337 { 00:34:19.337 "name": null, 00:34:19.337 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:19.337 "is_configured": false, 00:34:19.337 "data_offset": 2048, 00:34:19.337 "data_size": 63488 00:34:19.337 }, 00:34:19.337 { 00:34:19.337 "name": null, 00:34:19.337 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:19.337 "is_configured": false, 00:34:19.337 "data_offset": 2048, 00:34:19.337 "data_size": 63488 00:34:19.337 } 00:34:19.337 ] 00:34:19.337 }' 00:34:19.337 19:01:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:19.337 19:01:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.903 19:01:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:34:19.903 19:01:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:20.161 [2024-07-25 19:01:20.526545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:20.161 [2024-07-25 19:01:20.526621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:20.161 [2024-07-25 
19:01:20.526664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:34:20.161 [2024-07-25 19:01:20.526697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:20.161 [2024-07-25 19:01:20.527127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:20.161 [2024-07-25 19:01:20.527151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:20.161 [2024-07-25 19:01:20.527258] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:20.161 [2024-07-25 19:01:20.527278] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:20.161 pt2 00:34:20.161 19:01:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:20.419 [2024-07-25 19:01:20.786592] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:34:20.419 19:01:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:34:20.419 19:01:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:20.419 19:01:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:20.419 19:01:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:20.419 19:01:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:20.420 19:01:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:20.420 19:01:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:20.420 19:01:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:20.420 19:01:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:20.420 19:01:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:20.420 19:01:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:20.420 19:01:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:20.420 19:01:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:20.420 "name": "raid_bdev1", 00:34:20.420 "uuid": "e27a589d-5380-46e9-8097-97dea101df4d", 00:34:20.420 "strip_size_kb": 64, 00:34:20.420 "state": "configuring", 00:34:20.420 "raid_level": "raid5f", 00:34:20.420 "superblock": true, 00:34:20.420 "num_base_bdevs": 4, 00:34:20.420 "num_base_bdevs_discovered": 1, 00:34:20.420 "num_base_bdevs_operational": 4, 00:34:20.420 "base_bdevs_list": [ 00:34:20.420 { 00:34:20.420 "name": "pt1", 00:34:20.420 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:20.420 "is_configured": true, 00:34:20.420 "data_offset": 2048, 00:34:20.420 "data_size": 63488 00:34:20.420 }, 00:34:20.420 { 00:34:20.420 "name": null, 00:34:20.420 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:20.420 "is_configured": false, 00:34:20.420 "data_offset": 2048, 00:34:20.420 "data_size": 63488 00:34:20.420 }, 00:34:20.420 { 00:34:20.420 "name": null, 00:34:20.420 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:20.420 "is_configured": false, 00:34:20.420 
"data_offset": 2048, 00:34:20.420 "data_size": 63488 00:34:20.420 }, 00:34:20.420 { 00:34:20.420 "name": null, 00:34:20.420 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:20.420 "is_configured": false, 00:34:20.420 "data_offset": 2048, 00:34:20.420 "data_size": 63488 00:34:20.420 } 00:34:20.420 ] 00:34:20.420 }' 00:34:20.420 19:01:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:20.420 19:01:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:21.372 19:01:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:34:21.373 19:01:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:34:21.373 19:01:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:21.373 [2024-07-25 19:01:21.734769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:21.373 [2024-07-25 19:01:21.734836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:21.373 [2024-07-25 19:01:21.734867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:34:21.373 [2024-07-25 19:01:21.734911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:21.373 [2024-07-25 19:01:21.735307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:21.373 [2024-07-25 19:01:21.735345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:21.373 [2024-07-25 19:01:21.735427] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:21.373 [2024-07-25 19:01:21.735446] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:21.373 pt2 00:34:21.373 19:01:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:34:21.373 19:01:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:34:21.373 19:01:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:21.660 [2024-07-25 19:01:22.002825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:21.660 [2024-07-25 19:01:22.002886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:21.660 [2024-07-25 19:01:22.002910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:34:21.660 [2024-07-25 19:01:22.002955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:21.660 [2024-07-25 19:01:22.003325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:21.660 [2024-07-25 19:01:22.003365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:21.660 [2024-07-25 19:01:22.003448] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:34:21.660 [2024-07-25 19:01:22.003469] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:21.660 pt3 00:34:21.660 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:34:21.660 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # 
(( i < num_base_bdevs )) 00:34:21.660 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:34:21.660 [2024-07-25 19:01:22.182830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:34:21.660 [2024-07-25 19:01:22.182883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:21.660 [2024-07-25 19:01:22.182907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:34:21.660 [2024-07-25 19:01:22.182955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:21.660 [2024-07-25 19:01:22.183300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:21.660 [2024-07-25 19:01:22.183346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:34:21.660 [2024-07-25 19:01:22.183442] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:34:21.660 [2024-07-25 19:01:22.183483] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:34:21.660 [2024-07-25 19:01:22.183599] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:34:21.660 [2024-07-25 19:01:22.183608] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:21.660 [2024-07-25 19:01:22.183698] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:34:21.660 [2024-07-25 19:01:22.188517] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:34:21.660 [2024-07-25 19:01:22.188540] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:34:21.660 [2024-07-25 19:01:22.188690] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:21.660 pt4 00:34:21.660 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:34:21.660 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:34:21.660 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:34:21.660 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:21.660 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:21.660 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:21.660 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:21.660 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:21.660 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:21.660 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:21.660 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:21.660 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:21.660 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:21.660 
19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:21.931 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:21.931 "name": "raid_bdev1", 00:34:21.931 "uuid": "e27a589d-5380-46e9-8097-97dea101df4d", 00:34:21.931 "strip_size_kb": 64, 00:34:21.931 "state": "online", 00:34:21.931 "raid_level": "raid5f", 00:34:21.931 "superblock": true, 00:34:21.931 "num_base_bdevs": 4, 00:34:21.931 "num_base_bdevs_discovered": 4, 00:34:21.931 "num_base_bdevs_operational": 4, 00:34:21.931 "base_bdevs_list": [ 00:34:21.931 { 00:34:21.931 "name": "pt1", 00:34:21.931 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:21.931 "is_configured": true, 00:34:21.931 "data_offset": 2048, 00:34:21.931 "data_size": 63488 00:34:21.931 }, 00:34:21.931 { 00:34:21.931 "name": "pt2", 00:34:21.931 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:21.931 "is_configured": true, 00:34:21.931 "data_offset": 2048, 00:34:21.931 "data_size": 63488 00:34:21.931 }, 00:34:21.931 { 00:34:21.931 "name": "pt3", 00:34:21.931 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:21.931 "is_configured": true, 00:34:21.931 "data_offset": 2048, 00:34:21.931 "data_size": 63488 00:34:21.931 }, 00:34:21.931 { 00:34:21.931 "name": "pt4", 00:34:21.931 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:21.931 "is_configured": true, 00:34:21.932 "data_offset": 2048, 00:34:21.932 "data_size": 63488 00:34:21.932 } 00:34:21.932 ] 00:34:21.932 }' 00:34:21.932 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:21.932 19:01:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.498 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:34:22.498 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:34:22.498 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:22.498 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:22.498 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:22.498 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:34:22.498 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:22.498 19:01:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:22.756 [2024-07-25 19:01:23.224270] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:22.756 19:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:22.756 "name": "raid_bdev1", 00:34:22.756 "aliases": [ 00:34:22.756 "e27a589d-5380-46e9-8097-97dea101df4d" 00:34:22.756 ], 00:34:22.756 "product_name": "Raid Volume", 00:34:22.756 "block_size": 512, 00:34:22.756 "num_blocks": 190464, 00:34:22.756 "uuid": "e27a589d-5380-46e9-8097-97dea101df4d", 00:34:22.756 "assigned_rate_limits": { 00:34:22.756 "rw_ios_per_sec": 0, 00:34:22.756 "rw_mbytes_per_sec": 0, 00:34:22.756 "r_mbytes_per_sec": 0, 00:34:22.756 "w_mbytes_per_sec": 0 00:34:22.756 }, 00:34:22.756 "claimed": false, 00:34:22.756 "zoned": false, 00:34:22.756 "supported_io_types": { 00:34:22.756 "read": true, 00:34:22.756 "write": true, 00:34:22.756 
"unmap": false, 00:34:22.756 "flush": false, 00:34:22.756 "reset": true, 00:34:22.756 "nvme_admin": false, 00:34:22.756 "nvme_io": false, 00:34:22.756 "nvme_io_md": false, 00:34:22.756 "write_zeroes": true, 00:34:22.756 "zcopy": false, 00:34:22.756 "get_zone_info": false, 00:34:22.756 "zone_management": false, 00:34:22.756 "zone_append": false, 00:34:22.756 "compare": false, 00:34:22.756 "compare_and_write": false, 00:34:22.756 "abort": false, 00:34:22.756 "seek_hole": false, 00:34:22.756 "seek_data": false, 00:34:22.756 "copy": false, 00:34:22.756 "nvme_iov_md": false 00:34:22.756 }, 00:34:22.756 "driver_specific": { 00:34:22.756 "raid": { 00:34:22.756 "uuid": "e27a589d-5380-46e9-8097-97dea101df4d", 00:34:22.756 "strip_size_kb": 64, 00:34:22.756 "state": "online", 00:34:22.756 "raid_level": "raid5f", 00:34:22.756 "superblock": true, 00:34:22.756 "num_base_bdevs": 4, 00:34:22.756 "num_base_bdevs_discovered": 4, 00:34:22.756 "num_base_bdevs_operational": 4, 00:34:22.756 "base_bdevs_list": [ 00:34:22.756 { 00:34:22.756 "name": "pt1", 00:34:22.756 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:22.756 "is_configured": true, 00:34:22.756 "data_offset": 2048, 00:34:22.756 "data_size": 63488 00:34:22.756 }, 00:34:22.756 { 00:34:22.756 "name": "pt2", 00:34:22.756 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:22.756 "is_configured": true, 00:34:22.756 "data_offset": 2048, 00:34:22.756 "data_size": 63488 00:34:22.756 }, 00:34:22.756 { 00:34:22.756 "name": "pt3", 00:34:22.756 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:22.756 "is_configured": true, 00:34:22.756 "data_offset": 2048, 00:34:22.756 "data_size": 63488 00:34:22.756 }, 00:34:22.756 { 00:34:22.756 "name": "pt4", 00:34:22.756 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:22.756 "is_configured": true, 00:34:22.756 "data_offset": 2048, 00:34:22.756 "data_size": 63488 00:34:22.756 } 00:34:22.756 ] 00:34:22.756 } 00:34:22.756 } 00:34:22.756 }' 00:34:22.756 19:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:22.756 19:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:34:22.756 pt2 00:34:22.756 pt3 00:34:22.756 pt4' 00:34:22.756 19:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:22.756 19:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:34:22.756 19:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:23.013 19:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:23.013 "name": "pt1", 00:34:23.013 "aliases": [ 00:34:23.013 "00000000-0000-0000-0000-000000000001" 00:34:23.013 ], 00:34:23.013 "product_name": "passthru", 00:34:23.013 "block_size": 512, 00:34:23.013 "num_blocks": 65536, 00:34:23.013 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:23.013 "assigned_rate_limits": { 00:34:23.013 "rw_ios_per_sec": 0, 00:34:23.013 "rw_mbytes_per_sec": 0, 00:34:23.013 "r_mbytes_per_sec": 0, 00:34:23.013 "w_mbytes_per_sec": 0 00:34:23.013 }, 00:34:23.013 "claimed": true, 00:34:23.013 "claim_type": "exclusive_write", 00:34:23.013 "zoned": false, 00:34:23.013 "supported_io_types": { 00:34:23.013 "read": true, 00:34:23.013 "write": true, 00:34:23.013 "unmap": true, 00:34:23.013 "flush": true, 00:34:23.013 "reset": true, 00:34:23.013 
"nvme_admin": false, 00:34:23.013 "nvme_io": false, 00:34:23.013 "nvme_io_md": false, 00:34:23.013 "write_zeroes": true, 00:34:23.013 "zcopy": true, 00:34:23.013 "get_zone_info": false, 00:34:23.013 "zone_management": false, 00:34:23.013 "zone_append": false, 00:34:23.013 "compare": false, 00:34:23.013 "compare_and_write": false, 00:34:23.013 "abort": true, 00:34:23.013 "seek_hole": false, 00:34:23.013 "seek_data": false, 00:34:23.013 "copy": true, 00:34:23.013 "nvme_iov_md": false 00:34:23.013 }, 00:34:23.013 "memory_domains": [ 00:34:23.013 { 00:34:23.013 "dma_device_id": "system", 00:34:23.013 "dma_device_type": 1 00:34:23.013 }, 00:34:23.013 { 00:34:23.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:23.013 "dma_device_type": 2 00:34:23.013 } 00:34:23.013 ], 00:34:23.013 "driver_specific": { 00:34:23.013 "passthru": { 00:34:23.013 "name": "pt1", 00:34:23.013 "base_bdev_name": "malloc1" 00:34:23.013 } 00:34:23.013 } 00:34:23.013 }' 00:34:23.013 19:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:23.013 19:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:23.013 19:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:23.013 19:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:23.270 19:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:23.270 19:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:23.270 19:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:23.270 19:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:23.270 19:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:23.270 19:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:23.270 19:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:23.270 19:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:23.270 19:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:23.270 19:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:34:23.270 19:01:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:23.528 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:23.528 "name": "pt2", 00:34:23.528 "aliases": [ 00:34:23.528 "00000000-0000-0000-0000-000000000002" 00:34:23.528 ], 00:34:23.528 "product_name": "passthru", 00:34:23.528 "block_size": 512, 00:34:23.528 "num_blocks": 65536, 00:34:23.528 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:23.528 "assigned_rate_limits": { 00:34:23.528 "rw_ios_per_sec": 0, 00:34:23.528 "rw_mbytes_per_sec": 0, 00:34:23.528 "r_mbytes_per_sec": 0, 00:34:23.528 "w_mbytes_per_sec": 0 00:34:23.528 }, 00:34:23.528 "claimed": true, 00:34:23.528 "claim_type": "exclusive_write", 00:34:23.528 "zoned": false, 00:34:23.528 "supported_io_types": { 00:34:23.528 "read": true, 00:34:23.528 "write": true, 00:34:23.528 "unmap": true, 00:34:23.528 "flush": true, 00:34:23.528 "reset": true, 00:34:23.528 "nvme_admin": false, 00:34:23.528 "nvme_io": false, 00:34:23.528 "nvme_io_md": false, 00:34:23.528 "write_zeroes": true, 
00:34:23.528 "zcopy": true, 00:34:23.528 "get_zone_info": false, 00:34:23.528 "zone_management": false, 00:34:23.528 "zone_append": false, 00:34:23.528 "compare": false, 00:34:23.528 "compare_and_write": false, 00:34:23.528 "abort": true, 00:34:23.528 "seek_hole": false, 00:34:23.528 "seek_data": false, 00:34:23.528 "copy": true, 00:34:23.528 "nvme_iov_md": false 00:34:23.528 }, 00:34:23.528 "memory_domains": [ 00:34:23.528 { 00:34:23.528 "dma_device_id": "system", 00:34:23.528 "dma_device_type": 1 00:34:23.528 }, 00:34:23.528 { 00:34:23.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:23.528 "dma_device_type": 2 00:34:23.528 } 00:34:23.528 ], 00:34:23.528 "driver_specific": { 00:34:23.528 "passthru": { 00:34:23.528 "name": "pt2", 00:34:23.528 "base_bdev_name": "malloc2" 00:34:23.528 } 00:34:23.528 } 00:34:23.528 }' 00:34:23.528 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:23.787 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:23.787 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:23.787 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:23.787 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:23.787 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:23.787 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:23.787 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:24.045 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:24.045 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:24.045 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:24.045 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:24.045 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:24.045 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:34:24.045 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:24.304 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:24.304 "name": "pt3", 00:34:24.304 "aliases": [ 00:34:24.304 "00000000-0000-0000-0000-000000000003" 00:34:24.304 ], 00:34:24.304 "product_name": "passthru", 00:34:24.304 "block_size": 512, 00:34:24.304 "num_blocks": 65536, 00:34:24.304 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:24.304 "assigned_rate_limits": { 00:34:24.304 "rw_ios_per_sec": 0, 00:34:24.304 "rw_mbytes_per_sec": 0, 00:34:24.304 "r_mbytes_per_sec": 0, 00:34:24.304 "w_mbytes_per_sec": 0 00:34:24.304 }, 00:34:24.304 "claimed": true, 00:34:24.304 "claim_type": "exclusive_write", 00:34:24.304 "zoned": false, 00:34:24.304 "supported_io_types": { 00:34:24.304 "read": true, 00:34:24.304 "write": true, 00:34:24.304 "unmap": true, 00:34:24.304 "flush": true, 00:34:24.304 "reset": true, 00:34:24.304 "nvme_admin": false, 00:34:24.304 "nvme_io": false, 00:34:24.304 "nvme_io_md": false, 00:34:24.304 "write_zeroes": true, 00:34:24.304 "zcopy": true, 00:34:24.304 "get_zone_info": false, 00:34:24.304 "zone_management": false, 00:34:24.304 
"zone_append": false, 00:34:24.304 "compare": false, 00:34:24.304 "compare_and_write": false, 00:34:24.304 "abort": true, 00:34:24.304 "seek_hole": false, 00:34:24.304 "seek_data": false, 00:34:24.304 "copy": true, 00:34:24.304 "nvme_iov_md": false 00:34:24.304 }, 00:34:24.304 "memory_domains": [ 00:34:24.304 { 00:34:24.304 "dma_device_id": "system", 00:34:24.304 "dma_device_type": 1 00:34:24.304 }, 00:34:24.304 { 00:34:24.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:24.304 "dma_device_type": 2 00:34:24.304 } 00:34:24.304 ], 00:34:24.304 "driver_specific": { 00:34:24.304 "passthru": { 00:34:24.304 "name": "pt3", 00:34:24.304 "base_bdev_name": "malloc3" 00:34:24.304 } 00:34:24.304 } 00:34:24.304 }' 00:34:24.304 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:24.304 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:24.304 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:24.304 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:24.304 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:24.304 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:24.304 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:24.304 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:24.562 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:24.562 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:24.562 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:24.562 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:24.562 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:24.562 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:34:24.562 19:01:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:24.821 19:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:24.821 "name": "pt4", 00:34:24.821 "aliases": [ 00:34:24.821 "00000000-0000-0000-0000-000000000004" 00:34:24.821 ], 00:34:24.821 "product_name": "passthru", 00:34:24.821 "block_size": 512, 00:34:24.821 "num_blocks": 65536, 00:34:24.821 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:24.821 "assigned_rate_limits": { 00:34:24.821 "rw_ios_per_sec": 0, 00:34:24.821 "rw_mbytes_per_sec": 0, 00:34:24.821 "r_mbytes_per_sec": 0, 00:34:24.821 "w_mbytes_per_sec": 0 00:34:24.821 }, 00:34:24.821 "claimed": true, 00:34:24.821 "claim_type": "exclusive_write", 00:34:24.821 "zoned": false, 00:34:24.821 "supported_io_types": { 00:34:24.821 "read": true, 00:34:24.821 "write": true, 00:34:24.821 "unmap": true, 00:34:24.821 "flush": true, 00:34:24.821 "reset": true, 00:34:24.821 "nvme_admin": false, 00:34:24.821 "nvme_io": false, 00:34:24.821 "nvme_io_md": false, 00:34:24.821 "write_zeroes": true, 00:34:24.821 "zcopy": true, 00:34:24.821 "get_zone_info": false, 00:34:24.821 "zone_management": false, 00:34:24.821 "zone_append": false, 00:34:24.821 "compare": false, 00:34:24.821 "compare_and_write": false, 00:34:24.821 "abort": true, 
00:34:24.821 "seek_hole": false, 00:34:24.821 "seek_data": false, 00:34:24.821 "copy": true, 00:34:24.821 "nvme_iov_md": false 00:34:24.821 }, 00:34:24.821 "memory_domains": [ 00:34:24.821 { 00:34:24.821 "dma_device_id": "system", 00:34:24.821 "dma_device_type": 1 00:34:24.821 }, 00:34:24.821 { 00:34:24.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:24.821 "dma_device_type": 2 00:34:24.821 } 00:34:24.821 ], 00:34:24.821 "driver_specific": { 00:34:24.821 "passthru": { 00:34:24.821 "name": "pt4", 00:34:24.821 "base_bdev_name": "malloc4" 00:34:24.821 } 00:34:24.821 } 00:34:24.821 }' 00:34:24.821 19:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:24.821 19:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:24.821 19:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:24.822 19:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:25.081 19:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:25.081 19:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:25.081 19:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:25.081 19:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:25.081 19:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:25.081 19:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:25.081 19:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:25.081 19:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:25.081 19:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:25.081 19:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:34:25.340 [2024-07-25 19:01:25.878526] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:25.340 19:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' e27a589d-5380-46e9-8097-97dea101df4d '!=' e27a589d-5380-46e9-8097-97dea101df4d ']' 00:34:25.340 19:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid5f 00:34:25.340 19:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:34:25.340 19:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:34:25.340 19:01:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:34:25.599 [2024-07-25 19:01:26.042416] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:34:25.599 19:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:25.599 19:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:25.599 19:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:25.599 19:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:25.599 19:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:34:25.599 19:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:25.599 19:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:25.599 19:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:25.599 19:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:25.599 19:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:25.599 19:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:25.599 19:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:25.859 19:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:25.859 "name": "raid_bdev1", 00:34:25.859 "uuid": "e27a589d-5380-46e9-8097-97dea101df4d", 00:34:25.859 "strip_size_kb": 64, 00:34:25.859 "state": "online", 00:34:25.859 "raid_level": "raid5f", 00:34:25.859 "superblock": true, 00:34:25.859 "num_base_bdevs": 4, 00:34:25.859 "num_base_bdevs_discovered": 3, 00:34:25.859 "num_base_bdevs_operational": 3, 00:34:25.859 "base_bdevs_list": [ 00:34:25.859 { 00:34:25.859 "name": null, 00:34:25.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:25.859 "is_configured": false, 00:34:25.859 "data_offset": 2048, 00:34:25.859 "data_size": 63488 00:34:25.859 }, 00:34:25.859 { 00:34:25.859 "name": "pt2", 00:34:25.859 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:25.859 "is_configured": true, 00:34:25.859 "data_offset": 2048, 00:34:25.859 "data_size": 63488 00:34:25.859 }, 00:34:25.859 { 00:34:25.859 "name": "pt3", 00:34:25.859 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:25.859 "is_configured": true, 00:34:25.859 "data_offset": 2048, 00:34:25.859 "data_size": 63488 00:34:25.859 }, 00:34:25.859 { 00:34:25.859 "name": "pt4", 00:34:25.859 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:25.859 "is_configured": true, 00:34:25.859 "data_offset": 2048, 00:34:25.859 "data_size": 63488 00:34:25.859 } 00:34:25.859 ] 00:34:25.859 }' 00:34:25.859 19:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:25.859 19:01:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:26.428 19:01:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:26.687 [2024-07-25 19:01:27.130386] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:26.687 [2024-07-25 19:01:27.130419] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:26.687 [2024-07-25 19:01:27.130494] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:26.687 [2024-07-25 19:01:27.130571] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:26.687 [2024-07-25 19:01:27.130580] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:34:26.687 19:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:34:26.687 19:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:26.946 19:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:34:26.946 19:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:34:26.946 19:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:34:26.946 19:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:34:26.946 19:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:26.946 19:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:34:26.946 19:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:34:26.946 19:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:34:27.205 19:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:34:27.205 19:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:34:27.205 19:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:34:27.464 19:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:34:27.464 19:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:34:27.464 19:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:34:27.464 19:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:34:27.464 19:01:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:27.724 [2024-07-25 19:01:28.150233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:27.724 [2024-07-25 19:01:28.150378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:27.724 [2024-07-25 19:01:28.150416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:34:27.724 [2024-07-25 19:01:28.150460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:27.724 [2024-07-25 19:01:28.153165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:27.724 [2024-07-25 19:01:28.153231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:27.724 [2024-07-25 19:01:28.153367] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:27.724 [2024-07-25 19:01:28.153427] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:27.724 pt2 00:34:27.724 19:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:34:27.724 19:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:27.724 19:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:27.724 19:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:27.724 19:01:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:27.724 19:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:27.724 19:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:27.724 19:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:27.724 19:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:27.724 19:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:27.724 19:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:27.724 19:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:27.984 19:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:27.984 "name": "raid_bdev1", 00:34:27.984 "uuid": "e27a589d-5380-46e9-8097-97dea101df4d", 00:34:27.984 "strip_size_kb": 64, 00:34:27.984 "state": "configuring", 00:34:27.984 "raid_level": "raid5f", 00:34:27.984 "superblock": true, 00:34:27.984 "num_base_bdevs": 4, 00:34:27.984 "num_base_bdevs_discovered": 1, 00:34:27.984 "num_base_bdevs_operational": 3, 00:34:27.984 "base_bdevs_list": [ 00:34:27.984 { 00:34:27.984 "name": null, 00:34:27.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:27.984 "is_configured": false, 00:34:27.984 "data_offset": 2048, 00:34:27.984 "data_size": 63488 00:34:27.984 }, 00:34:27.984 { 00:34:27.984 "name": "pt2", 00:34:27.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:27.984 "is_configured": true, 00:34:27.984 "data_offset": 2048, 00:34:27.984 "data_size": 63488 00:34:27.984 }, 00:34:27.984 { 00:34:27.984 "name": null, 00:34:27.984 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:27.984 "is_configured": false, 00:34:27.984 "data_offset": 2048, 00:34:27.984 "data_size": 63488 00:34:27.984 }, 00:34:27.984 { 00:34:27.984 "name": null, 00:34:27.984 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:27.984 "is_configured": false, 00:34:27.984 "data_offset": 2048, 00:34:27.984 "data_size": 63488 00:34:27.984 } 00:34:27.984 ] 00:34:27.984 }' 00:34:27.984 19:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:27.984 19:01:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:28.552 19:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:34:28.552 19:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:34:28.552 19:01:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:28.552 [2024-07-25 19:01:29.086381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:28.552 [2024-07-25 19:01:29.086478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:28.552 [2024-07-25 19:01:29.086529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:34:28.552 [2024-07-25 19:01:29.086578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:28.552 [2024-07-25 19:01:29.087095] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:28.552 [2024-07-25 19:01:29.087122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:28.552 [2024-07-25 19:01:29.087232] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:34:28.552 [2024-07-25 19:01:29.087254] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:28.552 pt3 00:34:28.552 19:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:34:28.552 19:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:28.552 19:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:28.552 19:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:28.552 19:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:28.553 19:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:28.553 19:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:28.553 19:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:28.553 19:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:28.553 19:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:28.553 19:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:28.553 19:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:28.812 19:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:28.812 "name": "raid_bdev1", 00:34:28.812 "uuid": "e27a589d-5380-46e9-8097-97dea101df4d", 00:34:28.812 "strip_size_kb": 64, 00:34:28.812 "state": "configuring", 00:34:28.812 "raid_level": "raid5f", 00:34:28.812 "superblock": true, 00:34:28.812 "num_base_bdevs": 4, 00:34:28.812 "num_base_bdevs_discovered": 2, 00:34:28.812 "num_base_bdevs_operational": 3, 00:34:28.812 "base_bdevs_list": [ 00:34:28.812 { 00:34:28.812 "name": null, 00:34:28.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:28.812 "is_configured": false, 00:34:28.812 "data_offset": 2048, 00:34:28.812 "data_size": 63488 00:34:28.812 }, 00:34:28.812 { 00:34:28.812 "name": "pt2", 00:34:28.812 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:28.812 "is_configured": true, 00:34:28.812 "data_offset": 2048, 00:34:28.812 "data_size": 63488 00:34:28.812 }, 00:34:28.812 { 00:34:28.812 "name": "pt3", 00:34:28.812 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:28.812 "is_configured": true, 00:34:28.812 "data_offset": 2048, 00:34:28.812 "data_size": 63488 00:34:28.812 }, 00:34:28.812 { 00:34:28.812 "name": null, 00:34:28.812 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:28.812 "is_configured": false, 00:34:28.812 "data_offset": 2048, 00:34:28.812 "data_size": 63488 00:34:28.812 } 00:34:28.812 ] 00:34:28.812 }' 00:34:28.812 19:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:28.812 19:01:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:29.380 19:01:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:34:29.380 19:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:34:29.380 19:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:34:29.380 19:01:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:34:29.640 [2024-07-25 19:01:30.078572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:34:29.640 [2024-07-25 19:01:30.078663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:29.640 [2024-07-25 19:01:30.078712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:34:29.640 [2024-07-25 19:01:30.078736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:29.640 [2024-07-25 19:01:30.079235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:29.640 [2024-07-25 19:01:30.079264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:34:29.640 [2024-07-25 19:01:30.079371] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:34:29.640 [2024-07-25 19:01:30.079393] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:34:29.640 [2024-07-25 19:01:30.079526] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:34:29.640 [2024-07-25 19:01:30.079535] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:29.640 [2024-07-25 19:01:30.079610] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:34:29.640 [2024-07-25 19:01:30.084460] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:34:29.640 [2024-07-25 19:01:30.084484] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013480 00:34:29.640 [2024-07-25 19:01:30.084759] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:29.640 pt4 00:34:29.640 19:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:29.640 19:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:29.640 19:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:29.640 19:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:29.640 19:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:29.640 19:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:29.640 19:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:29.640 19:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:29.640 19:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:29.641 19:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:29.641 19:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:29.641 19:01:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:29.900 19:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:29.900 "name": "raid_bdev1", 00:34:29.900 "uuid": "e27a589d-5380-46e9-8097-97dea101df4d", 00:34:29.900 "strip_size_kb": 64, 00:34:29.900 "state": "online", 00:34:29.900 "raid_level": "raid5f", 00:34:29.900 "superblock": true, 00:34:29.900 "num_base_bdevs": 4, 00:34:29.900 "num_base_bdevs_discovered": 3, 00:34:29.900 "num_base_bdevs_operational": 3, 00:34:29.900 "base_bdevs_list": [ 00:34:29.900 { 00:34:29.900 "name": null, 00:34:29.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:29.900 "is_configured": false, 00:34:29.900 "data_offset": 2048, 00:34:29.900 "data_size": 63488 00:34:29.900 }, 00:34:29.900 { 00:34:29.900 "name": "pt2", 00:34:29.900 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:29.900 "is_configured": true, 00:34:29.900 "data_offset": 2048, 00:34:29.900 "data_size": 63488 00:34:29.901 }, 00:34:29.901 { 00:34:29.901 "name": "pt3", 00:34:29.901 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:29.901 "is_configured": true, 00:34:29.901 "data_offset": 2048, 00:34:29.901 "data_size": 63488 00:34:29.901 }, 00:34:29.901 { 00:34:29.901 "name": "pt4", 00:34:29.901 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:29.901 "is_configured": true, 00:34:29.901 "data_offset": 2048, 00:34:29.901 "data_size": 63488 00:34:29.901 } 00:34:29.901 ] 00:34:29.901 }' 00:34:29.901 19:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:29.901 19:01:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:30.469 19:01:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:30.729 [2024-07-25 19:01:31.137527] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:30.729 [2024-07-25 19:01:31.137558] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:30.729 [2024-07-25 19:01:31.137655] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:30.729 [2024-07-25 19:01:31.137735] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:30.729 [2024-07-25 19:01:31.137744] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name raid_bdev1, state offline 00:34:30.729 19:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:30.729 19:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:34:30.988 19:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:34:30.988 19:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:34:30.988 19:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 4 -gt 2 ']' 00:34:30.988 19:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # i=3 00:34:30.988 19:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:34:30.988 19:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:31.247 [2024-07-25 19:01:31.670114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:31.247 [2024-07-25 19:01:31.670245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:31.247 [2024-07-25 19:01:31.670301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:34:31.247 [2024-07-25 19:01:31.670370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:31.247 [2024-07-25 19:01:31.674223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:31.247 [2024-07-25 19:01:31.674298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:31.247 [2024-07-25 19:01:31.674463] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:34:31.247 [2024-07-25 19:01:31.674527] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:31.247 [2024-07-25 19:01:31.674778] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:34:31.247 [2024-07-25 19:01:31.674804] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:31.247 [2024-07-25 19:01:31.674840] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013800 name raid_bdev1, state configuring 00:34:31.247 [2024-07-25 19:01:31.674939] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:31.247 [2024-07-25 19:01:31.675073] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:31.247 pt1 00:34:31.247 19:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 4 -gt 2 ']' 00:34:31.247 19:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:34:31.247 19:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:31.247 19:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:31.247 19:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:31.247 19:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:31.247 19:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:31.247 19:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:31.247 19:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:31.247 19:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:31.247 19:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:31.248 19:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:31.248 19:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:31.507 19:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:31.507 "name": "raid_bdev1", 00:34:31.507 "uuid": 
"e27a589d-5380-46e9-8097-97dea101df4d", 00:34:31.507 "strip_size_kb": 64, 00:34:31.507 "state": "configuring", 00:34:31.507 "raid_level": "raid5f", 00:34:31.507 "superblock": true, 00:34:31.507 "num_base_bdevs": 4, 00:34:31.507 "num_base_bdevs_discovered": 2, 00:34:31.507 "num_base_bdevs_operational": 3, 00:34:31.507 "base_bdevs_list": [ 00:34:31.507 { 00:34:31.507 "name": null, 00:34:31.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:31.507 "is_configured": false, 00:34:31.507 "data_offset": 2048, 00:34:31.507 "data_size": 63488 00:34:31.507 }, 00:34:31.507 { 00:34:31.507 "name": "pt2", 00:34:31.507 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:31.507 "is_configured": true, 00:34:31.507 "data_offset": 2048, 00:34:31.507 "data_size": 63488 00:34:31.507 }, 00:34:31.507 { 00:34:31.507 "name": "pt3", 00:34:31.507 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:31.507 "is_configured": true, 00:34:31.507 "data_offset": 2048, 00:34:31.507 "data_size": 63488 00:34:31.507 }, 00:34:31.507 { 00:34:31.507 "name": null, 00:34:31.507 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:31.507 "is_configured": false, 00:34:31.507 "data_offset": 2048, 00:34:31.507 "data_size": 63488 00:34:31.507 } 00:34:31.507 ] 00:34:31.507 }' 00:34:31.507 19:01:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:31.507 19:01:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:32.075 19:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:34:32.075 19:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:34:32.075 19:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:34:32.075 19:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:34:32.334 [2024-07-25 19:01:32.674666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:34:32.334 [2024-07-25 19:01:32.674761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:32.334 [2024-07-25 19:01:32.674798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:34:32.334 [2024-07-25 19:01:32.674850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:32.334 [2024-07-25 19:01:32.675346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:32.334 [2024-07-25 19:01:32.675387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:34:32.334 [2024-07-25 19:01:32.675506] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:34:32.334 [2024-07-25 19:01:32.675539] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:34:32.334 [2024-07-25 19:01:32.675676] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013b80 00:34:32.334 [2024-07-25 19:01:32.675691] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:32.334 [2024-07-25 19:01:32.675774] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:34:32.334 [2024-07-25 19:01:32.680734] bdev_raid.c:1751:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000013b80 00:34:32.334 [2024-07-25 19:01:32.680758] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013b80 00:34:32.334 [2024-07-25 19:01:32.680988] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:32.334 pt4 00:34:32.334 19:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:32.334 19:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:32.334 19:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:32.334 19:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:32.334 19:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:32.334 19:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:32.334 19:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:32.334 19:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:32.334 19:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:32.334 19:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:32.334 19:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:32.334 19:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:32.594 19:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:32.594 "name": "raid_bdev1", 00:34:32.594 "uuid": "e27a589d-5380-46e9-8097-97dea101df4d", 00:34:32.594 "strip_size_kb": 64, 00:34:32.594 "state": "online", 00:34:32.594 "raid_level": "raid5f", 00:34:32.594 "superblock": true, 00:34:32.594 "num_base_bdevs": 4, 00:34:32.594 "num_base_bdevs_discovered": 3, 00:34:32.594 "num_base_bdevs_operational": 3, 00:34:32.594 "base_bdevs_list": [ 00:34:32.594 { 00:34:32.594 "name": null, 00:34:32.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:32.594 "is_configured": false, 00:34:32.594 "data_offset": 2048, 00:34:32.594 "data_size": 63488 00:34:32.594 }, 00:34:32.594 { 00:34:32.594 "name": "pt2", 00:34:32.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:32.594 "is_configured": true, 00:34:32.594 "data_offset": 2048, 00:34:32.594 "data_size": 63488 00:34:32.594 }, 00:34:32.594 { 00:34:32.594 "name": "pt3", 00:34:32.594 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:32.594 "is_configured": true, 00:34:32.595 "data_offset": 2048, 00:34:32.595 "data_size": 63488 00:34:32.595 }, 00:34:32.595 { 00:34:32.595 "name": "pt4", 00:34:32.595 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:32.595 "is_configured": true, 00:34:32.595 "data_offset": 2048, 00:34:32.595 "data_size": 63488 00:34:32.595 } 00:34:32.595 ] 00:34:32.595 }' 00:34:32.595 19:01:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:32.595 19:01:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:33.162 19:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
online 00:34:33.162 19:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:34:33.420 19:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:34:33.420 19:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:34:33.420 19:01:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:33.679 [2024-07-25 19:01:34.077651] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:33.679 19:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' e27a589d-5380-46e9-8097-97dea101df4d '!=' e27a589d-5380-46e9-8097-97dea101df4d ']' 00:34:33.679 19:01:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 155381 00:34:33.679 19:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 155381 ']' 00:34:33.679 19:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 155381 00:34:33.679 19:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:34:33.679 19:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:33.679 19:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 155381 00:34:33.679 19:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:33.679 19:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:33.679 killing process with pid 155381 00:34:33.679 19:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 155381' 00:34:33.679 19:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 155381 00:34:33.679 [2024-07-25 19:01:34.129230] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:33.679 [2024-07-25 19:01:34.129306] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:33.679 19:01:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 155381 00:34:33.679 [2024-07-25 19:01:34.129386] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:33.679 [2024-07-25 19:01:34.129395] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013b80 name raid_bdev1, state offline 00:34:33.938 [2024-07-25 19:01:34.469135] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:35.317 19:01:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:34:35.317 00:34:35.317 real 0m24.668s 00:34:35.317 user 0m44.038s 00:34:35.317 sys 0m4.250s 00:34:35.317 19:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:35.317 19:01:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:35.317 ************************************ 00:34:35.317 END TEST raid5f_superblock_test 00:34:35.317 ************************************ 00:34:35.317 19:01:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # '[' true = true ']' 00:34:35.317 19:01:35 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:34:35.317 19:01:35 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:34:35.317 19:01:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:35.317 19:01:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:35.317 ************************************ 00:34:35.317 START TEST raid5f_rebuild_test 00:34:35.317 ************************************ 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid5f 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # echo BaseBdev1 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # echo BaseBdev2 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # echo BaseBdev3 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # echo BaseBdev4 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid5f '!=' raid1 ']' 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # '[' false = true ']' 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # strip_size=64 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # 
create_arg+=' -z 64' 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=156211 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 156211 /var/tmp/spdk-raid.sock 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 156211 ']' 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:35.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:34:35.317 19:01:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:35.317 I/O size of 3145728 is greater than zero copy threshold (65536). 00:34:35.317 Zero copy mechanism will not be used. 00:34:35.317 [2024-07-25 19:01:35.845872] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:34:35.317 [2024-07-25 19:01:35.846030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156211 ] 00:34:35.577 [2024-07-25 19:01:36.008447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:35.836 [2024-07-25 19:01:36.256582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:36.095 [2024-07-25 19:01:36.530551] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:36.355 19:01:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:36.355 19:01:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:34:36.355 19:01:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:34:36.355 19:01:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:34:36.614 BaseBdev1_malloc 00:34:36.614 19:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:36.874 [2024-07-25 19:01:37.280223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:36.874 [2024-07-25 19:01:37.280326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:36.874 [2024-07-25 19:01:37.280361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:34:36.874 [2024-07-25 19:01:37.280382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:34:36.874 [2024-07-25 19:01:37.283067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:36.874 [2024-07-25 19:01:37.283122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:36.874 BaseBdev1 00:34:36.874 19:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:34:36.874 19:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:34:37.133 BaseBdev2_malloc 00:34:37.133 19:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:34:37.392 [2024-07-25 19:01:37.777023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:34:37.392 [2024-07-25 19:01:37.777160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:37.392 [2024-07-25 19:01:37.777201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:34:37.392 [2024-07-25 19:01:37.777224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:37.392 [2024-07-25 19:01:37.779913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:37.392 [2024-07-25 19:01:37.779959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:37.392 BaseBdev2 00:34:37.392 19:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:34:37.392 19:01:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:34:37.652 BaseBdev3_malloc 00:34:37.652 19:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:34:37.652 [2024-07-25 19:01:38.168663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:34:37.652 [2024-07-25 19:01:38.168763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:37.652 [2024-07-25 19:01:38.168801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:34:37.652 [2024-07-25 19:01:38.168829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:37.652 [2024-07-25 19:01:38.171428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:37.652 [2024-07-25 19:01:38.171478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:34:37.652 BaseBdev3 00:34:37.652 19:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:34:37.652 19:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:34:37.911 BaseBdev4_malloc 00:34:37.911 19:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:34:38.169 [2024-07-25 19:01:38.582905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev4_malloc 00:34:38.169 [2024-07-25 19:01:38.583046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:38.169 [2024-07-25 19:01:38.583088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:34:38.169 [2024-07-25 19:01:38.583117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:38.169 [2024-07-25 19:01:38.585760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:38.169 [2024-07-25 19:01:38.585824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:34:38.169 BaseBdev4 00:34:38.169 19:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:34:38.428 spare_malloc 00:34:38.428 19:01:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:34:38.688 spare_delay 00:34:38.688 19:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:34:38.688 [2024-07-25 19:01:39.266580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:38.688 [2024-07-25 19:01:39.266685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:38.688 [2024-07-25 19:01:39.266721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:34:38.688 [2024-07-25 19:01:39.266756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:38.947 [2024-07-25 19:01:39.269536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:38.947 [2024-07-25 19:01:39.269591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:38.947 spare 00:34:38.947 19:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:34:39.206 [2024-07-25 19:01:39.542677] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:39.206 [2024-07-25 19:01:39.544549] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:39.206 [2024-07-25 19:01:39.544621] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:39.206 [2024-07-25 19:01:39.544662] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:34:39.206 [2024-07-25 19:01:39.544743] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:34:39.206 [2024-07-25 19:01:39.544751] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:34:39.206 [2024-07-25 19:01:39.544898] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:34:39.206 [2024-07-25 19:01:39.553282] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:34:39.206 [2024-07-25 19:01:39.553305] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:34:39.206 [2024-07-25 19:01:39.553508] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:39.206 19:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:34:39.206 19:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:39.206 19:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:39.206 19:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:39.206 19:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:39.206 19:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:39.206 19:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:39.206 19:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:39.206 19:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:39.206 19:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:39.206 19:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:39.206 19:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:39.206 19:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:39.206 "name": "raid_bdev1", 00:34:39.206 "uuid": "dac1da4d-2432-4668-82ad-5d1a73473d54", 00:34:39.206 "strip_size_kb": 64, 00:34:39.206 "state": "online", 00:34:39.206 "raid_level": "raid5f", 00:34:39.206 "superblock": false, 00:34:39.206 "num_base_bdevs": 4, 00:34:39.206 "num_base_bdevs_discovered": 4, 00:34:39.206 "num_base_bdevs_operational": 4, 00:34:39.206 "base_bdevs_list": [ 00:34:39.206 { 00:34:39.206 "name": "BaseBdev1", 00:34:39.206 "uuid": "70c56521-f266-5fcf-bee9-0e7fd80860fb", 00:34:39.206 "is_configured": true, 00:34:39.206 "data_offset": 0, 00:34:39.206 "data_size": 65536 00:34:39.206 }, 00:34:39.206 { 00:34:39.206 "name": "BaseBdev2", 00:34:39.206 "uuid": "68771766-af60-54e0-9fe3-ffdfaf42ea3e", 00:34:39.206 "is_configured": true, 00:34:39.206 "data_offset": 0, 00:34:39.206 "data_size": 65536 00:34:39.206 }, 00:34:39.206 { 00:34:39.206 "name": "BaseBdev3", 00:34:39.206 "uuid": "7fb1f829-836c-53d0-a8e7-be19b404e7a1", 00:34:39.206 "is_configured": true, 00:34:39.206 "data_offset": 0, 00:34:39.206 "data_size": 65536 00:34:39.206 }, 00:34:39.206 { 00:34:39.206 "name": "BaseBdev4", 00:34:39.206 "uuid": "1179498e-cf7b-5e17-a709-b3e443ead3ac", 00:34:39.206 "is_configured": true, 00:34:39.206 "data_offset": 0, 00:34:39.206 "data_size": 65536 00:34:39.206 } 00:34:39.206 ] 00:34:39.206 }' 00:34:39.206 19:01:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:39.206 19:01:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:39.775 19:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:34:39.775 19:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:40.034 [2024-07-25 19:01:40.595198] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:40.291 19:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 
-- # raid_bdev_size=196608 00:34:40.291 19:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:40.291 19:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:34:40.548 19:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:34:40.548 19:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:34:40.548 19:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:34:40.548 19:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:34:40.548 19:01:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:34:40.548 19:01:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:40.548 19:01:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:34:40.548 19:01:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:40.548 19:01:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:34:40.548 19:01:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:40.548 19:01:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:34:40.548 19:01:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:40.548 19:01:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:40.548 19:01:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:34:40.548 [2024-07-25 19:01:41.103232] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:34:40.805 /dev/nbd0 00:34:40.805 19:01:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:40.805 19:01:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:40.805 19:01:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:34:40.805 19:01:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:34:40.805 19:01:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:34:40.805 19:01:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:34:40.805 19:01:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:34:40.805 19:01:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:34:40.805 19:01:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:34:40.805 19:01:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:34:40.805 19:01:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:40.805 1+0 records in 00:34:40.805 1+0 records out 00:34:40.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320854 s, 12.8 MB/s 00:34:40.805 19:01:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:40.805 19:01:41 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:34:40.805 19:01:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:40.805 19:01:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:34:40.805 19:01:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:34:40.805 19:01:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:40.805 19:01:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:40.805 19:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid5f ']' 00:34:40.805 19:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # write_unit_size=384 00:34:40.805 19:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # echo 192 00:34:40.805 19:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:34:41.370 512+0 records in 00:34:41.370 512+0 records out 00:34:41.370 100663296 bytes (101 MB, 96 MiB) copied, 0.557978 s, 180 MB/s 00:34:41.370 19:01:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:34:41.370 19:01:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:41.370 19:01:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:41.370 19:01:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:41.370 19:01:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:34:41.370 19:01:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:41.370 19:01:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:34:41.628 19:01:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:41.628 [2024-07-25 19:01:42.037204] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:41.628 19:01:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:41.628 19:01:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:41.628 19:01:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:41.628 19:01:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:41.628 19:01:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:41.628 19:01:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:34:41.628 19:01:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:34:41.628 19:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:34:41.628 [2024-07-25 19:01:42.199383] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:41.885 19:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:41.885 19:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:41.885 19:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 
-- # local expected_state=online 00:34:41.885 19:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:41.885 19:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:41.885 19:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:41.885 19:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:41.885 19:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:41.885 19:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:41.885 19:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:41.885 19:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:41.885 19:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:42.143 19:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:42.143 "name": "raid_bdev1", 00:34:42.143 "uuid": "dac1da4d-2432-4668-82ad-5d1a73473d54", 00:34:42.143 "strip_size_kb": 64, 00:34:42.143 "state": "online", 00:34:42.143 "raid_level": "raid5f", 00:34:42.143 "superblock": false, 00:34:42.143 "num_base_bdevs": 4, 00:34:42.143 "num_base_bdevs_discovered": 3, 00:34:42.143 "num_base_bdevs_operational": 3, 00:34:42.143 "base_bdevs_list": [ 00:34:42.143 { 00:34:42.143 "name": null, 00:34:42.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:42.143 "is_configured": false, 00:34:42.143 "data_offset": 0, 00:34:42.143 "data_size": 65536 00:34:42.143 }, 00:34:42.143 { 00:34:42.143 "name": "BaseBdev2", 00:34:42.143 "uuid": "68771766-af60-54e0-9fe3-ffdfaf42ea3e", 00:34:42.143 "is_configured": true, 00:34:42.143 "data_offset": 0, 00:34:42.143 "data_size": 65536 00:34:42.143 }, 00:34:42.143 { 00:34:42.143 "name": "BaseBdev3", 00:34:42.143 "uuid": "7fb1f829-836c-53d0-a8e7-be19b404e7a1", 00:34:42.143 "is_configured": true, 00:34:42.143 "data_offset": 0, 00:34:42.143 "data_size": 65536 00:34:42.143 }, 00:34:42.143 { 00:34:42.143 "name": "BaseBdev4", 00:34:42.143 "uuid": "1179498e-cf7b-5e17-a709-b3e443ead3ac", 00:34:42.143 "is_configured": true, 00:34:42.143 "data_offset": 0, 00:34:42.143 "data_size": 65536 00:34:42.143 } 00:34:42.143 ] 00:34:42.143 }' 00:34:42.143 19:01:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:42.143 19:01:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:42.723 19:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:34:42.723 [2024-07-25 19:01:43.204080] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:42.723 [2024-07-25 19:01:43.222366] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:34:42.723 [2024-07-25 19:01:43.233030] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:42.723 19:01:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:34:43.708 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:43.708 19:01:44 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:43.708 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:43.708 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:43.708 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:43.708 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:43.708 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:43.967 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:43.967 "name": "raid_bdev1", 00:34:43.967 "uuid": "dac1da4d-2432-4668-82ad-5d1a73473d54", 00:34:43.967 "strip_size_kb": 64, 00:34:43.967 "state": "online", 00:34:43.967 "raid_level": "raid5f", 00:34:43.967 "superblock": false, 00:34:43.967 "num_base_bdevs": 4, 00:34:43.967 "num_base_bdevs_discovered": 4, 00:34:43.967 "num_base_bdevs_operational": 4, 00:34:43.967 "process": { 00:34:43.967 "type": "rebuild", 00:34:43.967 "target": "spare", 00:34:43.967 "progress": { 00:34:43.967 "blocks": 23040, 00:34:43.967 "percent": 11 00:34:43.967 } 00:34:43.967 }, 00:34:43.967 "base_bdevs_list": [ 00:34:43.967 { 00:34:43.967 "name": "spare", 00:34:43.967 "uuid": "7ae7a86a-7ee4-5a1a-8c24-fe6d77addcfb", 00:34:43.967 "is_configured": true, 00:34:43.967 "data_offset": 0, 00:34:43.967 "data_size": 65536 00:34:43.967 }, 00:34:43.967 { 00:34:43.967 "name": "BaseBdev2", 00:34:43.967 "uuid": "68771766-af60-54e0-9fe3-ffdfaf42ea3e", 00:34:43.967 "is_configured": true, 00:34:43.967 "data_offset": 0, 00:34:43.967 "data_size": 65536 00:34:43.967 }, 00:34:43.967 { 00:34:43.967 "name": "BaseBdev3", 00:34:43.967 "uuid": "7fb1f829-836c-53d0-a8e7-be19b404e7a1", 00:34:43.967 "is_configured": true, 00:34:43.967 "data_offset": 0, 00:34:43.967 "data_size": 65536 00:34:43.967 }, 00:34:43.967 { 00:34:43.967 "name": "BaseBdev4", 00:34:43.967 "uuid": "1179498e-cf7b-5e17-a709-b3e443ead3ac", 00:34:43.967 "is_configured": true, 00:34:43.967 "data_offset": 0, 00:34:43.967 "data_size": 65536 00:34:43.967 } 00:34:43.967 ] 00:34:43.967 }' 00:34:43.967 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:43.967 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:43.967 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:44.225 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:44.225 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:34:44.225 [2024-07-25 19:01:44.802197] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:44.483 [2024-07-25 19:01:44.845039] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:44.483 [2024-07-25 19:01:44.845150] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:44.483 [2024-07-25 19:01:44.845169] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:44.483 [2024-07-25 19:01:44.845177] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No 
such device 00:34:44.483 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:44.483 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:44.483 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:44.484 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:44.484 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:44.484 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:44.484 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:44.484 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:44.484 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:44.484 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:44.484 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:44.484 19:01:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:44.742 19:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:44.742 "name": "raid_bdev1", 00:34:44.742 "uuid": "dac1da4d-2432-4668-82ad-5d1a73473d54", 00:34:44.742 "strip_size_kb": 64, 00:34:44.742 "state": "online", 00:34:44.742 "raid_level": "raid5f", 00:34:44.742 "superblock": false, 00:34:44.742 "num_base_bdevs": 4, 00:34:44.742 "num_base_bdevs_discovered": 3, 00:34:44.742 "num_base_bdevs_operational": 3, 00:34:44.742 "base_bdevs_list": [ 00:34:44.742 { 00:34:44.742 "name": null, 00:34:44.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:44.742 "is_configured": false, 00:34:44.742 "data_offset": 0, 00:34:44.742 "data_size": 65536 00:34:44.742 }, 00:34:44.742 { 00:34:44.742 "name": "BaseBdev2", 00:34:44.742 "uuid": "68771766-af60-54e0-9fe3-ffdfaf42ea3e", 00:34:44.742 "is_configured": true, 00:34:44.742 "data_offset": 0, 00:34:44.742 "data_size": 65536 00:34:44.742 }, 00:34:44.742 { 00:34:44.742 "name": "BaseBdev3", 00:34:44.742 "uuid": "7fb1f829-836c-53d0-a8e7-be19b404e7a1", 00:34:44.742 "is_configured": true, 00:34:44.742 "data_offset": 0, 00:34:44.742 "data_size": 65536 00:34:44.742 }, 00:34:44.742 { 00:34:44.742 "name": "BaseBdev4", 00:34:44.742 "uuid": "1179498e-cf7b-5e17-a709-b3e443ead3ac", 00:34:44.742 "is_configured": true, 00:34:44.742 "data_offset": 0, 00:34:44.742 "data_size": 65536 00:34:44.742 } 00:34:44.742 ] 00:34:44.742 }' 00:34:44.742 19:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:44.742 19:01:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:45.308 19:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:45.308 19:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:45.308 19:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:45.308 19:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:45.308 19:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # 
local raid_bdev_info 00:34:45.308 19:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:45.308 19:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:45.566 19:01:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:45.566 "name": "raid_bdev1", 00:34:45.566 "uuid": "dac1da4d-2432-4668-82ad-5d1a73473d54", 00:34:45.566 "strip_size_kb": 64, 00:34:45.566 "state": "online", 00:34:45.566 "raid_level": "raid5f", 00:34:45.566 "superblock": false, 00:34:45.566 "num_base_bdevs": 4, 00:34:45.566 "num_base_bdevs_discovered": 3, 00:34:45.566 "num_base_bdevs_operational": 3, 00:34:45.566 "base_bdevs_list": [ 00:34:45.566 { 00:34:45.566 "name": null, 00:34:45.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:45.566 "is_configured": false, 00:34:45.566 "data_offset": 0, 00:34:45.566 "data_size": 65536 00:34:45.566 }, 00:34:45.566 { 00:34:45.566 "name": "BaseBdev2", 00:34:45.566 "uuid": "68771766-af60-54e0-9fe3-ffdfaf42ea3e", 00:34:45.566 "is_configured": true, 00:34:45.566 "data_offset": 0, 00:34:45.566 "data_size": 65536 00:34:45.566 }, 00:34:45.566 { 00:34:45.566 "name": "BaseBdev3", 00:34:45.566 "uuid": "7fb1f829-836c-53d0-a8e7-be19b404e7a1", 00:34:45.566 "is_configured": true, 00:34:45.566 "data_offset": 0, 00:34:45.566 "data_size": 65536 00:34:45.566 }, 00:34:45.566 { 00:34:45.566 "name": "BaseBdev4", 00:34:45.566 "uuid": "1179498e-cf7b-5e17-a709-b3e443ead3ac", 00:34:45.566 "is_configured": true, 00:34:45.566 "data_offset": 0, 00:34:45.566 "data_size": 65536 00:34:45.566 } 00:34:45.566 ] 00:34:45.566 }' 00:34:45.566 19:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:45.566 19:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:45.566 19:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:45.566 19:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:45.566 19:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:34:45.825 [2024-07-25 19:01:46.347969] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:45.825 [2024-07-25 19:01:46.364321] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:34:45.825 [2024-07-25 19:01:46.374712] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:45.825 19:01:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:34:47.202 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:47.202 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:47.202 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:47.202 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:47.202 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:47.202 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:47.202 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:47.202 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:47.202 "name": "raid_bdev1", 00:34:47.202 "uuid": "dac1da4d-2432-4668-82ad-5d1a73473d54", 00:34:47.202 "strip_size_kb": 64, 00:34:47.202 "state": "online", 00:34:47.202 "raid_level": "raid5f", 00:34:47.202 "superblock": false, 00:34:47.202 "num_base_bdevs": 4, 00:34:47.202 "num_base_bdevs_discovered": 4, 00:34:47.202 "num_base_bdevs_operational": 4, 00:34:47.202 "process": { 00:34:47.202 "type": "rebuild", 00:34:47.202 "target": "spare", 00:34:47.202 "progress": { 00:34:47.202 "blocks": 23040, 00:34:47.202 "percent": 11 00:34:47.202 } 00:34:47.202 }, 00:34:47.202 "base_bdevs_list": [ 00:34:47.202 { 00:34:47.202 "name": "spare", 00:34:47.202 "uuid": "7ae7a86a-7ee4-5a1a-8c24-fe6d77addcfb", 00:34:47.202 "is_configured": true, 00:34:47.202 "data_offset": 0, 00:34:47.202 "data_size": 65536 00:34:47.202 }, 00:34:47.202 { 00:34:47.202 "name": "BaseBdev2", 00:34:47.202 "uuid": "68771766-af60-54e0-9fe3-ffdfaf42ea3e", 00:34:47.202 "is_configured": true, 00:34:47.202 "data_offset": 0, 00:34:47.202 "data_size": 65536 00:34:47.202 }, 00:34:47.202 { 00:34:47.202 "name": "BaseBdev3", 00:34:47.202 "uuid": "7fb1f829-836c-53d0-a8e7-be19b404e7a1", 00:34:47.202 "is_configured": true, 00:34:47.202 "data_offset": 0, 00:34:47.202 "data_size": 65536 00:34:47.202 }, 00:34:47.202 { 00:34:47.202 "name": "BaseBdev4", 00:34:47.202 "uuid": "1179498e-cf7b-5e17-a709-b3e443ead3ac", 00:34:47.202 "is_configured": true, 00:34:47.202 "data_offset": 0, 00:34:47.202 "data_size": 65536 00:34:47.202 } 00:34:47.202 ] 00:34:47.202 }' 00:34:47.202 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:47.202 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:47.202 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:47.202 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:47.202 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:34:47.202 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:34:47.202 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid5f = raid1 ']' 00:34:47.202 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=1244 00:34:47.202 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:34:47.202 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:47.202 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:47.202 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:47.202 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:47.202 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:47.203 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:47.203 19:01:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:47.461 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:47.461 "name": "raid_bdev1", 00:34:47.461 "uuid": "dac1da4d-2432-4668-82ad-5d1a73473d54", 00:34:47.461 "strip_size_kb": 64, 00:34:47.461 "state": "online", 00:34:47.461 "raid_level": "raid5f", 00:34:47.461 "superblock": false, 00:34:47.461 "num_base_bdevs": 4, 00:34:47.461 "num_base_bdevs_discovered": 4, 00:34:47.461 "num_base_bdevs_operational": 4, 00:34:47.461 "process": { 00:34:47.461 "type": "rebuild", 00:34:47.461 "target": "spare", 00:34:47.461 "progress": { 00:34:47.461 "blocks": 28800, 00:34:47.461 "percent": 14 00:34:47.461 } 00:34:47.461 }, 00:34:47.461 "base_bdevs_list": [ 00:34:47.461 { 00:34:47.461 "name": "spare", 00:34:47.461 "uuid": "7ae7a86a-7ee4-5a1a-8c24-fe6d77addcfb", 00:34:47.461 "is_configured": true, 00:34:47.461 "data_offset": 0, 00:34:47.461 "data_size": 65536 00:34:47.461 }, 00:34:47.461 { 00:34:47.461 "name": "BaseBdev2", 00:34:47.461 "uuid": "68771766-af60-54e0-9fe3-ffdfaf42ea3e", 00:34:47.461 "is_configured": true, 00:34:47.461 "data_offset": 0, 00:34:47.461 "data_size": 65536 00:34:47.461 }, 00:34:47.461 { 00:34:47.461 "name": "BaseBdev3", 00:34:47.461 "uuid": "7fb1f829-836c-53d0-a8e7-be19b404e7a1", 00:34:47.461 "is_configured": true, 00:34:47.461 "data_offset": 0, 00:34:47.461 "data_size": 65536 00:34:47.461 }, 00:34:47.461 { 00:34:47.461 "name": "BaseBdev4", 00:34:47.461 "uuid": "1179498e-cf7b-5e17-a709-b3e443ead3ac", 00:34:47.461 "is_configured": true, 00:34:47.461 "data_offset": 0, 00:34:47.461 "data_size": 65536 00:34:47.461 } 00:34:47.461 ] 00:34:47.461 }' 00:34:47.461 19:01:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:47.720 19:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:47.720 19:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:47.720 19:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:47.720 19:01:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:34:48.657 19:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:34:48.657 19:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:48.657 19:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:48.657 19:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:48.657 19:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:48.657 19:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:48.657 19:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:48.657 19:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:48.916 19:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:48.916 "name": "raid_bdev1", 00:34:48.916 "uuid": "dac1da4d-2432-4668-82ad-5d1a73473d54", 00:34:48.916 "strip_size_kb": 64, 00:34:48.916 "state": "online", 00:34:48.916 "raid_level": "raid5f", 00:34:48.916 
"superblock": false, 00:34:48.916 "num_base_bdevs": 4, 00:34:48.916 "num_base_bdevs_discovered": 4, 00:34:48.916 "num_base_bdevs_operational": 4, 00:34:48.916 "process": { 00:34:48.916 "type": "rebuild", 00:34:48.916 "target": "spare", 00:34:48.916 "progress": { 00:34:48.916 "blocks": 55680, 00:34:48.916 "percent": 28 00:34:48.916 } 00:34:48.916 }, 00:34:48.916 "base_bdevs_list": [ 00:34:48.916 { 00:34:48.916 "name": "spare", 00:34:48.916 "uuid": "7ae7a86a-7ee4-5a1a-8c24-fe6d77addcfb", 00:34:48.916 "is_configured": true, 00:34:48.916 "data_offset": 0, 00:34:48.916 "data_size": 65536 00:34:48.916 }, 00:34:48.916 { 00:34:48.916 "name": "BaseBdev2", 00:34:48.916 "uuid": "68771766-af60-54e0-9fe3-ffdfaf42ea3e", 00:34:48.916 "is_configured": true, 00:34:48.916 "data_offset": 0, 00:34:48.916 "data_size": 65536 00:34:48.916 }, 00:34:48.916 { 00:34:48.916 "name": "BaseBdev3", 00:34:48.916 "uuid": "7fb1f829-836c-53d0-a8e7-be19b404e7a1", 00:34:48.916 "is_configured": true, 00:34:48.916 "data_offset": 0, 00:34:48.916 "data_size": 65536 00:34:48.916 }, 00:34:48.916 { 00:34:48.916 "name": "BaseBdev4", 00:34:48.916 "uuid": "1179498e-cf7b-5e17-a709-b3e443ead3ac", 00:34:48.916 "is_configured": true, 00:34:48.916 "data_offset": 0, 00:34:48.916 "data_size": 65536 00:34:48.916 } 00:34:48.916 ] 00:34:48.916 }' 00:34:48.916 19:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:48.916 19:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:48.916 19:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:48.916 19:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:48.916 19:01:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:34:49.853 19:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:34:49.853 19:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:49.853 19:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:49.853 19:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:49.853 19:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:49.853 19:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:49.853 19:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:49.853 19:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:50.112 19:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:50.112 "name": "raid_bdev1", 00:34:50.112 "uuid": "dac1da4d-2432-4668-82ad-5d1a73473d54", 00:34:50.112 "strip_size_kb": 64, 00:34:50.112 "state": "online", 00:34:50.112 "raid_level": "raid5f", 00:34:50.112 "superblock": false, 00:34:50.112 "num_base_bdevs": 4, 00:34:50.112 "num_base_bdevs_discovered": 4, 00:34:50.112 "num_base_bdevs_operational": 4, 00:34:50.112 "process": { 00:34:50.112 "type": "rebuild", 00:34:50.112 "target": "spare", 00:34:50.112 "progress": { 00:34:50.112 "blocks": 80640, 00:34:50.112 "percent": 41 00:34:50.112 } 00:34:50.112 }, 00:34:50.112 "base_bdevs_list": [ 00:34:50.112 { 00:34:50.112 "name": 
"spare", 00:34:50.112 "uuid": "7ae7a86a-7ee4-5a1a-8c24-fe6d77addcfb", 00:34:50.112 "is_configured": true, 00:34:50.112 "data_offset": 0, 00:34:50.112 "data_size": 65536 00:34:50.112 }, 00:34:50.112 { 00:34:50.112 "name": "BaseBdev2", 00:34:50.112 "uuid": "68771766-af60-54e0-9fe3-ffdfaf42ea3e", 00:34:50.112 "is_configured": true, 00:34:50.112 "data_offset": 0, 00:34:50.112 "data_size": 65536 00:34:50.112 }, 00:34:50.112 { 00:34:50.112 "name": "BaseBdev3", 00:34:50.112 "uuid": "7fb1f829-836c-53d0-a8e7-be19b404e7a1", 00:34:50.112 "is_configured": true, 00:34:50.112 "data_offset": 0, 00:34:50.112 "data_size": 65536 00:34:50.112 }, 00:34:50.112 { 00:34:50.112 "name": "BaseBdev4", 00:34:50.112 "uuid": "1179498e-cf7b-5e17-a709-b3e443ead3ac", 00:34:50.112 "is_configured": true, 00:34:50.112 "data_offset": 0, 00:34:50.112 "data_size": 65536 00:34:50.112 } 00:34:50.112 ] 00:34:50.112 }' 00:34:50.112 19:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:50.371 19:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:50.371 19:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:50.371 19:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:50.371 19:01:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:34:51.309 19:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:34:51.309 19:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:51.309 19:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:51.309 19:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:51.309 19:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:51.309 19:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:51.309 19:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:51.309 19:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:51.568 19:01:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:51.569 "name": "raid_bdev1", 00:34:51.569 "uuid": "dac1da4d-2432-4668-82ad-5d1a73473d54", 00:34:51.569 "strip_size_kb": 64, 00:34:51.569 "state": "online", 00:34:51.569 "raid_level": "raid5f", 00:34:51.569 "superblock": false, 00:34:51.569 "num_base_bdevs": 4, 00:34:51.569 "num_base_bdevs_discovered": 4, 00:34:51.569 "num_base_bdevs_operational": 4, 00:34:51.569 "process": { 00:34:51.569 "type": "rebuild", 00:34:51.569 "target": "spare", 00:34:51.569 "progress": { 00:34:51.569 "blocks": 105600, 00:34:51.569 "percent": 53 00:34:51.569 } 00:34:51.569 }, 00:34:51.569 "base_bdevs_list": [ 00:34:51.569 { 00:34:51.569 "name": "spare", 00:34:51.569 "uuid": "7ae7a86a-7ee4-5a1a-8c24-fe6d77addcfb", 00:34:51.569 "is_configured": true, 00:34:51.569 "data_offset": 0, 00:34:51.569 "data_size": 65536 00:34:51.569 }, 00:34:51.569 { 00:34:51.569 "name": "BaseBdev2", 00:34:51.569 "uuid": "68771766-af60-54e0-9fe3-ffdfaf42ea3e", 00:34:51.569 "is_configured": true, 00:34:51.569 "data_offset": 0, 00:34:51.569 "data_size": 65536 00:34:51.569 }, 
00:34:51.569 { 00:34:51.569 "name": "BaseBdev3", 00:34:51.569 "uuid": "7fb1f829-836c-53d0-a8e7-be19b404e7a1", 00:34:51.569 "is_configured": true, 00:34:51.569 "data_offset": 0, 00:34:51.569 "data_size": 65536 00:34:51.569 }, 00:34:51.569 { 00:34:51.569 "name": "BaseBdev4", 00:34:51.569 "uuid": "1179498e-cf7b-5e17-a709-b3e443ead3ac", 00:34:51.569 "is_configured": true, 00:34:51.569 "data_offset": 0, 00:34:51.569 "data_size": 65536 00:34:51.569 } 00:34:51.569 ] 00:34:51.569 }' 00:34:51.569 19:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:51.569 19:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:51.569 19:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:51.569 19:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:51.569 19:01:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:34:52.948 19:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:34:52.948 19:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:52.948 19:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:52.948 19:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:52.948 19:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:52.948 19:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:52.948 19:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:52.948 19:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:52.948 19:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:52.948 "name": "raid_bdev1", 00:34:52.948 "uuid": "dac1da4d-2432-4668-82ad-5d1a73473d54", 00:34:52.948 "strip_size_kb": 64, 00:34:52.948 "state": "online", 00:34:52.948 "raid_level": "raid5f", 00:34:52.948 "superblock": false, 00:34:52.948 "num_base_bdevs": 4, 00:34:52.948 "num_base_bdevs_discovered": 4, 00:34:52.948 "num_base_bdevs_operational": 4, 00:34:52.948 "process": { 00:34:52.948 "type": "rebuild", 00:34:52.948 "target": "spare", 00:34:52.948 "progress": { 00:34:52.948 "blocks": 132480, 00:34:52.948 "percent": 67 00:34:52.948 } 00:34:52.948 }, 00:34:52.948 "base_bdevs_list": [ 00:34:52.948 { 00:34:52.948 "name": "spare", 00:34:52.948 "uuid": "7ae7a86a-7ee4-5a1a-8c24-fe6d77addcfb", 00:34:52.948 "is_configured": true, 00:34:52.948 "data_offset": 0, 00:34:52.948 "data_size": 65536 00:34:52.948 }, 00:34:52.948 { 00:34:52.948 "name": "BaseBdev2", 00:34:52.948 "uuid": "68771766-af60-54e0-9fe3-ffdfaf42ea3e", 00:34:52.948 "is_configured": true, 00:34:52.948 "data_offset": 0, 00:34:52.948 "data_size": 65536 00:34:52.948 }, 00:34:52.948 { 00:34:52.948 "name": "BaseBdev3", 00:34:52.948 "uuid": "7fb1f829-836c-53d0-a8e7-be19b404e7a1", 00:34:52.948 "is_configured": true, 00:34:52.948 "data_offset": 0, 00:34:52.948 "data_size": 65536 00:34:52.948 }, 00:34:52.948 { 00:34:52.948 "name": "BaseBdev4", 00:34:52.948 "uuid": "1179498e-cf7b-5e17-a709-b3e443ead3ac", 00:34:52.948 "is_configured": true, 00:34:52.948 "data_offset": 0, 00:34:52.948 
"data_size": 65536 00:34:52.948 } 00:34:52.948 ] 00:34:52.948 }' 00:34:52.948 19:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:52.948 19:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:52.948 19:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:52.948 19:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:52.948 19:01:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:34:53.883 19:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:34:53.883 19:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:53.883 19:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:53.883 19:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:53.883 19:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:53.883 19:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:53.883 19:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:53.883 19:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:54.142 19:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:54.142 "name": "raid_bdev1", 00:34:54.142 "uuid": "dac1da4d-2432-4668-82ad-5d1a73473d54", 00:34:54.142 "strip_size_kb": 64, 00:34:54.142 "state": "online", 00:34:54.142 "raid_level": "raid5f", 00:34:54.142 "superblock": false, 00:34:54.142 "num_base_bdevs": 4, 00:34:54.142 "num_base_bdevs_discovered": 4, 00:34:54.142 "num_base_bdevs_operational": 4, 00:34:54.142 "process": { 00:34:54.142 "type": "rebuild", 00:34:54.142 "target": "spare", 00:34:54.142 "progress": { 00:34:54.142 "blocks": 157440, 00:34:54.142 "percent": 80 00:34:54.142 } 00:34:54.142 }, 00:34:54.142 "base_bdevs_list": [ 00:34:54.142 { 00:34:54.142 "name": "spare", 00:34:54.142 "uuid": "7ae7a86a-7ee4-5a1a-8c24-fe6d77addcfb", 00:34:54.142 "is_configured": true, 00:34:54.142 "data_offset": 0, 00:34:54.142 "data_size": 65536 00:34:54.142 }, 00:34:54.142 { 00:34:54.142 "name": "BaseBdev2", 00:34:54.142 "uuid": "68771766-af60-54e0-9fe3-ffdfaf42ea3e", 00:34:54.142 "is_configured": true, 00:34:54.142 "data_offset": 0, 00:34:54.142 "data_size": 65536 00:34:54.142 }, 00:34:54.142 { 00:34:54.142 "name": "BaseBdev3", 00:34:54.142 "uuid": "7fb1f829-836c-53d0-a8e7-be19b404e7a1", 00:34:54.142 "is_configured": true, 00:34:54.142 "data_offset": 0, 00:34:54.142 "data_size": 65536 00:34:54.142 }, 00:34:54.142 { 00:34:54.142 "name": "BaseBdev4", 00:34:54.142 "uuid": "1179498e-cf7b-5e17-a709-b3e443ead3ac", 00:34:54.142 "is_configured": true, 00:34:54.142 "data_offset": 0, 00:34:54.142 "data_size": 65536 00:34:54.142 } 00:34:54.142 ] 00:34:54.142 }' 00:34:54.142 19:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:54.142 19:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:54.142 19:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 
00:34:54.402 19:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:54.402 19:01:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:34:55.339 19:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:34:55.339 19:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:55.339 19:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:55.339 19:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:55.339 19:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:55.339 19:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:55.339 19:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:55.339 19:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:55.597 19:01:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:55.597 "name": "raid_bdev1", 00:34:55.597 "uuid": "dac1da4d-2432-4668-82ad-5d1a73473d54", 00:34:55.597 "strip_size_kb": 64, 00:34:55.597 "state": "online", 00:34:55.597 "raid_level": "raid5f", 00:34:55.597 "superblock": false, 00:34:55.597 "num_base_bdevs": 4, 00:34:55.597 "num_base_bdevs_discovered": 4, 00:34:55.597 "num_base_bdevs_operational": 4, 00:34:55.597 "process": { 00:34:55.598 "type": "rebuild", 00:34:55.598 "target": "spare", 00:34:55.598 "progress": { 00:34:55.598 "blocks": 182400, 00:34:55.598 "percent": 92 00:34:55.598 } 00:34:55.598 }, 00:34:55.598 "base_bdevs_list": [ 00:34:55.598 { 00:34:55.598 "name": "spare", 00:34:55.598 "uuid": "7ae7a86a-7ee4-5a1a-8c24-fe6d77addcfb", 00:34:55.598 "is_configured": true, 00:34:55.598 "data_offset": 0, 00:34:55.598 "data_size": 65536 00:34:55.598 }, 00:34:55.598 { 00:34:55.598 "name": "BaseBdev2", 00:34:55.598 "uuid": "68771766-af60-54e0-9fe3-ffdfaf42ea3e", 00:34:55.598 "is_configured": true, 00:34:55.598 "data_offset": 0, 00:34:55.598 "data_size": 65536 00:34:55.598 }, 00:34:55.598 { 00:34:55.598 "name": "BaseBdev3", 00:34:55.598 "uuid": "7fb1f829-836c-53d0-a8e7-be19b404e7a1", 00:34:55.598 "is_configured": true, 00:34:55.598 "data_offset": 0, 00:34:55.598 "data_size": 65536 00:34:55.598 }, 00:34:55.598 { 00:34:55.598 "name": "BaseBdev4", 00:34:55.598 "uuid": "1179498e-cf7b-5e17-a709-b3e443ead3ac", 00:34:55.598 "is_configured": true, 00:34:55.598 "data_offset": 0, 00:34:55.598 "data_size": 65536 00:34:55.598 } 00:34:55.598 ] 00:34:55.598 }' 00:34:55.598 19:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:55.598 19:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:55.598 19:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:55.598 19:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:55.598 19:01:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:34:56.535 [2024-07-25 19:01:56.748457] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:34:56.535 [2024-07-25 19:01:56.748528] bdev_raid.c:2548:raid_bdev_process_finish_done: 
*NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:34:56.535 [2024-07-25 19:01:56.748589] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:56.535 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:34:56.535 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:56.535 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:56.535 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:56.535 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:56.535 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:56.535 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:56.535 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:56.795 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:56.795 "name": "raid_bdev1", 00:34:56.795 "uuid": "dac1da4d-2432-4668-82ad-5d1a73473d54", 00:34:56.795 "strip_size_kb": 64, 00:34:56.795 "state": "online", 00:34:56.795 "raid_level": "raid5f", 00:34:56.795 "superblock": false, 00:34:56.795 "num_base_bdevs": 4, 00:34:56.795 "num_base_bdevs_discovered": 4, 00:34:56.795 "num_base_bdevs_operational": 4, 00:34:56.795 "base_bdevs_list": [ 00:34:56.795 { 00:34:56.795 "name": "spare", 00:34:56.795 "uuid": "7ae7a86a-7ee4-5a1a-8c24-fe6d77addcfb", 00:34:56.795 "is_configured": true, 00:34:56.795 "data_offset": 0, 00:34:56.795 "data_size": 65536 00:34:56.795 }, 00:34:56.795 { 00:34:56.795 "name": "BaseBdev2", 00:34:56.795 "uuid": "68771766-af60-54e0-9fe3-ffdfaf42ea3e", 00:34:56.795 "is_configured": true, 00:34:56.795 "data_offset": 0, 00:34:56.795 "data_size": 65536 00:34:56.795 }, 00:34:56.795 { 00:34:56.795 "name": "BaseBdev3", 00:34:56.795 "uuid": "7fb1f829-836c-53d0-a8e7-be19b404e7a1", 00:34:56.795 "is_configured": true, 00:34:56.795 "data_offset": 0, 00:34:56.795 "data_size": 65536 00:34:56.795 }, 00:34:56.795 { 00:34:56.795 "name": "BaseBdev4", 00:34:56.795 "uuid": "1179498e-cf7b-5e17-a709-b3e443ead3ac", 00:34:56.795 "is_configured": true, 00:34:56.795 "data_offset": 0, 00:34:56.795 "data_size": 65536 00:34:56.795 } 00:34:56.795 ] 00:34:56.795 }' 00:34:56.795 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:57.054 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:34:57.054 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:57.054 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:34:57.054 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:34:57.054 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:57.054 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:57.054 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:57.054 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:57.054 19:01:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:57.054 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:57.054 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:57.314 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:57.314 "name": "raid_bdev1", 00:34:57.314 "uuid": "dac1da4d-2432-4668-82ad-5d1a73473d54", 00:34:57.314 "strip_size_kb": 64, 00:34:57.314 "state": "online", 00:34:57.314 "raid_level": "raid5f", 00:34:57.314 "superblock": false, 00:34:57.314 "num_base_bdevs": 4, 00:34:57.314 "num_base_bdevs_discovered": 4, 00:34:57.314 "num_base_bdevs_operational": 4, 00:34:57.314 "base_bdevs_list": [ 00:34:57.314 { 00:34:57.314 "name": "spare", 00:34:57.314 "uuid": "7ae7a86a-7ee4-5a1a-8c24-fe6d77addcfb", 00:34:57.314 "is_configured": true, 00:34:57.314 "data_offset": 0, 00:34:57.314 "data_size": 65536 00:34:57.314 }, 00:34:57.314 { 00:34:57.314 "name": "BaseBdev2", 00:34:57.314 "uuid": "68771766-af60-54e0-9fe3-ffdfaf42ea3e", 00:34:57.314 "is_configured": true, 00:34:57.314 "data_offset": 0, 00:34:57.314 "data_size": 65536 00:34:57.314 }, 00:34:57.314 { 00:34:57.314 "name": "BaseBdev3", 00:34:57.314 "uuid": "7fb1f829-836c-53d0-a8e7-be19b404e7a1", 00:34:57.314 "is_configured": true, 00:34:57.314 "data_offset": 0, 00:34:57.314 "data_size": 65536 00:34:57.314 }, 00:34:57.314 { 00:34:57.314 "name": "BaseBdev4", 00:34:57.314 "uuid": "1179498e-cf7b-5e17-a709-b3e443ead3ac", 00:34:57.314 "is_configured": true, 00:34:57.314 "data_offset": 0, 00:34:57.314 "data_size": 65536 00:34:57.314 } 00:34:57.314 ] 00:34:57.314 }' 00:34:57.314 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:57.314 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:57.314 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:57.314 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:57.314 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:34:57.314 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:57.314 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:57.314 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:57.314 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:57.314 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:57.314 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:57.314 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:57.314 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:57.314 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:57.314 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:57.314 19:01:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:57.573 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:57.574 "name": "raid_bdev1", 00:34:57.574 "uuid": "dac1da4d-2432-4668-82ad-5d1a73473d54", 00:34:57.574 "strip_size_kb": 64, 00:34:57.574 "state": "online", 00:34:57.574 "raid_level": "raid5f", 00:34:57.574 "superblock": false, 00:34:57.574 "num_base_bdevs": 4, 00:34:57.574 "num_base_bdevs_discovered": 4, 00:34:57.574 "num_base_bdevs_operational": 4, 00:34:57.574 "base_bdevs_list": [ 00:34:57.574 { 00:34:57.574 "name": "spare", 00:34:57.574 "uuid": "7ae7a86a-7ee4-5a1a-8c24-fe6d77addcfb", 00:34:57.574 "is_configured": true, 00:34:57.574 "data_offset": 0, 00:34:57.574 "data_size": 65536 00:34:57.574 }, 00:34:57.574 { 00:34:57.574 "name": "BaseBdev2", 00:34:57.574 "uuid": "68771766-af60-54e0-9fe3-ffdfaf42ea3e", 00:34:57.574 "is_configured": true, 00:34:57.574 "data_offset": 0, 00:34:57.574 "data_size": 65536 00:34:57.574 }, 00:34:57.574 { 00:34:57.574 "name": "BaseBdev3", 00:34:57.574 "uuid": "7fb1f829-836c-53d0-a8e7-be19b404e7a1", 00:34:57.574 "is_configured": true, 00:34:57.574 "data_offset": 0, 00:34:57.574 "data_size": 65536 00:34:57.574 }, 00:34:57.574 { 00:34:57.574 "name": "BaseBdev4", 00:34:57.574 "uuid": "1179498e-cf7b-5e17-a709-b3e443ead3ac", 00:34:57.574 "is_configured": true, 00:34:57.574 "data_offset": 0, 00:34:57.574 "data_size": 65536 00:34:57.574 } 00:34:57.574 ] 00:34:57.574 }' 00:34:57.574 19:01:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:57.574 19:01:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.142 19:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:58.142 [2024-07-25 19:01:58.588598] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:58.142 [2024-07-25 19:01:58.588635] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:58.142 [2024-07-25 19:01:58.588748] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:58.142 [2024-07-25 19:01:58.588859] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:58.142 [2024-07-25 19:01:58.588868] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:34:58.142 19:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:58.142 19:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:34:58.402 19:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:34:58.402 19:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:34:58.402 19:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:34:58.402 19:01:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:34:58.402 19:01:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:58.402 19:01:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:34:58.402 
19:01:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:58.402 19:01:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:58.402 19:01:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:58.402 19:01:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:34:58.402 19:01:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:58.402 19:01:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:58.402 19:01:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:34:58.661 /dev/nbd0 00:34:58.661 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:58.661 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:58.661 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:34:58.661 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:34:58.661 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:34:58.661 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:34:58.661 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:34:58.661 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:34:58.661 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:34:58.661 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:34:58.661 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:58.661 1+0 records in 00:34:58.661 1+0 records out 00:34:58.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000738492 s, 5.5 MB/s 00:34:58.661 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:58.661 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:34:58.661 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:58.661 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:34:58.661 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:34:58.661 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:58.661 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:58.661 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:34:58.921 /dev/nbd1 00:34:58.921 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:58.921 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:58.921 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:34:58.921 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:34:58.921 
19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:34:58.921 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:34:58.921 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:34:58.921 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:34:58.921 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:34:58.921 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:34:58.921 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:58.921 1+0 records in 00:34:58.921 1+0 records out 00:34:58.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000840812 s, 4.9 MB/s 00:34:58.921 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:58.921 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:34:58.921 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:58.921 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:34:58.921 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:34:58.921 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:58.921 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:58.921 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:34:59.180 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:34:59.180 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:59.180 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:59.180 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:59.180 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:34:59.180 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:59.180 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:34:59.180 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:59.180 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:59.180 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:59.180 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:59.180 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:59.180 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 
-- # for i in "${nbd_list[@]}" 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 156211 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 156211 ']' 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 156211 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 156211 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 156211' 00:34:59.439 killing process with pid 156211 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 156211 00:34:59.439 Received shutdown signal, test time was about 60.000000 seconds 00:34:59.439 00:34:59.439 Latency(us) 00:34:59.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:59.439 =================================================================================================================== 00:34:59.439 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:59.439 [2024-07-25 19:01:59.989949] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:59.439 19:01:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 156211 00:35:00.007 [2024-07-25 19:02:00.525129] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:01.911 19:02:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:35:01.911 00:35:01.911 real 0m26.215s 00:35:01.911 user 0m36.729s 00:35:01.911 sys 0m3.902s 00:35:01.911 19:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:01.911 19:02:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.911 ************************************ 00:35:01.911 END TEST raid5f_rebuild_test 00:35:01.911 ************************************ 00:35:01.911 19:02:02 bdev_raid -- bdev/bdev_raid.sh@971 -- 
# run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:35:01.911 19:02:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:35:01.911 19:02:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:01.911 19:02:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:01.911 ************************************ 00:35:01.911 START TEST raid5f_rebuild_test_sb 00:35:01.911 ************************************ 00:35:01.911 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:35:01.911 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid5f 00:35:01.911 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:35:01.911 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:35:01.911 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:35:01.911 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:35:01.911 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:35:01.911 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:35:01.911 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # echo BaseBdev1 00:35:01.911 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:35:01.911 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # echo BaseBdev2 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # echo BaseBdev3 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # echo BaseBdev4 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid5f '!=' raid1 ']' 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # 
'[' false = true ']' 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # strip_size=64 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # create_arg+=' -z 64' 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=156836 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 156836 /var/tmp/spdk-raid.sock 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 156836 ']' 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:01.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:01.912 19:02:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:01.912 [2024-07-25 19:02:02.185506] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:35:01.912 I/O size of 3145728 is greater than zero copy threshold (65536). 00:35:01.912 Zero copy mechanism will not be used. 
00:35:01.912 [2024-07-25 19:02:02.185805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156836 ] 00:35:01.912 [2024-07-25 19:02:02.379013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.170 [2024-07-25 19:02:02.629919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:02.430 [2024-07-25 19:02:02.893961] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:02.688 19:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:02.688 19:02:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:35:02.688 19:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:35:02.688 19:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:02.954 BaseBdev1_malloc 00:35:02.954 19:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:03.212 [2024-07-25 19:02:03.662655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:03.212 [2024-07-25 19:02:03.662760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:03.212 [2024-07-25 19:02:03.662796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:35:03.212 [2024-07-25 19:02:03.662816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:03.212 [2024-07-25 19:02:03.665325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:03.212 [2024-07-25 19:02:03.665371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:03.212 BaseBdev1 00:35:03.212 19:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:35:03.212 19:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:03.470 BaseBdev2_malloc 00:35:03.470 19:02:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:35:03.728 [2024-07-25 19:02:04.063646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:35:03.728 [2024-07-25 19:02:04.063743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:03.728 [2024-07-25 19:02:04.063779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:35:03.728 [2024-07-25 19:02:04.063798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:03.728 [2024-07-25 19:02:04.066252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:03.728 [2024-07-25 19:02:04.066300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:03.728 BaseBdev2 00:35:03.728 19:02:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:35:03.728 19:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:35:03.728 BaseBdev3_malloc 00:35:03.728 19:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:35:03.987 [2024-07-25 19:02:04.469281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:35:03.987 [2024-07-25 19:02:04.469534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:03.987 [2024-07-25 19:02:04.470092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:35:03.987 [2024-07-25 19:02:04.470478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:03.987 [2024-07-25 19:02:04.477297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:03.987 [2024-07-25 19:02:04.477712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:35:03.987 BaseBdev3 00:35:03.987 19:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:35:03.987 19:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:35:04.245 BaseBdev4_malloc 00:35:04.245 19:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:35:04.504 [2024-07-25 19:02:04.886681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:35:04.504 [2024-07-25 19:02:04.886930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:04.504 [2024-07-25 19:02:04.886999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:35:04.504 [2024-07-25 19:02:04.887159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:04.504 [2024-07-25 19:02:04.889650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:04.504 [2024-07-25 19:02:04.889842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:35:04.504 BaseBdev4 00:35:04.504 19:02:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:35:04.764 spare_malloc 00:35:04.764 19:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:35:04.764 spare_delay 00:35:04.764 19:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:05.022 [2024-07-25 19:02:05.462017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:05.022 [2024-07-25 19:02:05.462298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:05.022 [2024-07-25 19:02:05.462364] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:35:05.022 [2024-07-25 19:02:05.462480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:05.022 [2024-07-25 19:02:05.465013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:05.022 [2024-07-25 19:02:05.465159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:05.022 spare 00:35:05.022 19:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:35:05.281 [2024-07-25 19:02:05.666472] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:05.281 [2024-07-25 19:02:05.668510] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:05.281 [2024-07-25 19:02:05.668692] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:05.281 [2024-07-25 19:02:05.668767] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:05.281 [2024-07-25 19:02:05.669050] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:35:05.281 [2024-07-25 19:02:05.669138] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:35:05.281 [2024-07-25 19:02:05.669285] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:35:05.281 [2024-07-25 19:02:05.677708] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:35:05.281 [2024-07-25 19:02:05.677828] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:35:05.281 [2024-07-25 19:02:05.678123] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:05.281 19:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:35:05.281 19:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:05.281 19:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:05.281 19:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:05.281 19:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:05.281 19:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:05.281 19:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:05.281 19:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:05.281 19:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:05.281 19:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:05.281 19:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:05.281 19:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:05.540 19:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:05.540 
"name": "raid_bdev1", 00:35:05.540 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:05.540 "strip_size_kb": 64, 00:35:05.540 "state": "online", 00:35:05.540 "raid_level": "raid5f", 00:35:05.540 "superblock": true, 00:35:05.540 "num_base_bdevs": 4, 00:35:05.540 "num_base_bdevs_discovered": 4, 00:35:05.540 "num_base_bdevs_operational": 4, 00:35:05.540 "base_bdevs_list": [ 00:35:05.540 { 00:35:05.540 "name": "BaseBdev1", 00:35:05.540 "uuid": "78661f8e-ae09-5f0d-b1e0-1ef90c419138", 00:35:05.540 "is_configured": true, 00:35:05.540 "data_offset": 2048, 00:35:05.540 "data_size": 63488 00:35:05.540 }, 00:35:05.540 { 00:35:05.540 "name": "BaseBdev2", 00:35:05.540 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:05.540 "is_configured": true, 00:35:05.540 "data_offset": 2048, 00:35:05.540 "data_size": 63488 00:35:05.540 }, 00:35:05.540 { 00:35:05.540 "name": "BaseBdev3", 00:35:05.540 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:05.540 "is_configured": true, 00:35:05.540 "data_offset": 2048, 00:35:05.540 "data_size": 63488 00:35:05.540 }, 00:35:05.540 { 00:35:05.540 "name": "BaseBdev4", 00:35:05.540 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:05.540 "is_configured": true, 00:35:05.540 "data_offset": 2048, 00:35:05.540 "data_size": 63488 00:35:05.540 } 00:35:05.540 ] 00:35:05.540 }' 00:35:05.540 19:02:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:05.540 19:02:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:05.799 19:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:05.799 19:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:35:06.060 [2024-07-25 19:02:06.623556] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:06.060 19:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=190464 00:35:06.352 19:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:35:06.352 19:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:06.352 19:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:35:06.352 19:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:35:06.352 19:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:35:06.352 19:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:35:06.352 19:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:35:06.352 19:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:06.352 19:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:35:06.352 19:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:06.352 19:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:35:06.352 19:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:06.352 19:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:35:06.352 19:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:06.352 19:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:06.352 19:02:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:35:06.643 [2024-07-25 19:02:07.143605] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:35:06.643 /dev/nbd0 00:35:06.643 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:06.643 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:06.643 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:35:06.643 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:35:06.643 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:06.643 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:06.643 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:35:06.643 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:35:06.643 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:06.643 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:06.644 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:06.644 1+0 records in 00:35:06.644 1+0 records out 00:35:06.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000969086 s, 4.2 MB/s 00:35:06.644 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:06.902 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:35:06.902 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:06.902 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:06.902 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:35:06.902 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:06.902 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:06.902 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid5f ']' 00:35:06.902 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # write_unit_size=384 00:35:06.902 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # echo 192 00:35:06.902 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:35:07.469 496+0 records in 00:35:07.469 496+0 records out 00:35:07.469 97517568 bytes (98 MB, 93 MiB) copied, 0.577421 s, 169 MB/s 00:35:07.469 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:35:07.469 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:35:07.469 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:07.469 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:07.469 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:35:07.469 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:07.469 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:35:07.469 19:02:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:07.469 [2024-07-25 19:02:08.010356] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:07.469 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:07.469 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:07.469 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:07.469 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:07.469 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:07.469 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:35:07.469 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:35:07.469 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:35:07.728 [2024-07-25 19:02:08.252150] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:07.728 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:07.728 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:07.728 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:07.728 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:07.728 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:07.728 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:07.728 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:07.728 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:07.728 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:07.729 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:07.729 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:07.729 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:07.987 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:07.987 "name": "raid_bdev1", 00:35:07.987 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:07.987 "strip_size_kb": 64, 00:35:07.987 "state": "online", 
00:35:07.987 "raid_level": "raid5f", 00:35:07.987 "superblock": true, 00:35:07.987 "num_base_bdevs": 4, 00:35:07.987 "num_base_bdevs_discovered": 3, 00:35:07.987 "num_base_bdevs_operational": 3, 00:35:07.987 "base_bdevs_list": [ 00:35:07.987 { 00:35:07.987 "name": null, 00:35:07.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:07.987 "is_configured": false, 00:35:07.987 "data_offset": 2048, 00:35:07.987 "data_size": 63488 00:35:07.987 }, 00:35:07.987 { 00:35:07.987 "name": "BaseBdev2", 00:35:07.987 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:07.987 "is_configured": true, 00:35:07.987 "data_offset": 2048, 00:35:07.987 "data_size": 63488 00:35:07.987 }, 00:35:07.987 { 00:35:07.987 "name": "BaseBdev3", 00:35:07.987 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:07.987 "is_configured": true, 00:35:07.987 "data_offset": 2048, 00:35:07.987 "data_size": 63488 00:35:07.987 }, 00:35:07.987 { 00:35:07.987 "name": "BaseBdev4", 00:35:07.987 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:07.987 "is_configured": true, 00:35:07.987 "data_offset": 2048, 00:35:07.987 "data_size": 63488 00:35:07.987 } 00:35:07.987 ] 00:35:07.987 }' 00:35:07.987 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:07.987 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.556 19:02:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:08.556 [2024-07-25 19:02:09.108264] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:08.556 [2024-07-25 19:02:09.125203] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a980 00:35:08.556 [2024-07-25 19:02:09.136015] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:08.815 19:02:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:35:09.753 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:09.753 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:09.753 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:09.753 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:09.753 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:09.753 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:09.753 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:09.753 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:09.753 "name": "raid_bdev1", 00:35:09.753 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:09.753 "strip_size_kb": 64, 00:35:09.753 "state": "online", 00:35:09.753 "raid_level": "raid5f", 00:35:09.753 "superblock": true, 00:35:09.753 "num_base_bdevs": 4, 00:35:09.753 "num_base_bdevs_discovered": 4, 00:35:09.753 "num_base_bdevs_operational": 4, 00:35:09.753 "process": { 00:35:09.753 "type": "rebuild", 00:35:09.753 "target": "spare", 00:35:09.753 "progress": { 00:35:09.753 "blocks": 21120, 00:35:09.753 "percent": 
11 00:35:09.753 } 00:35:09.753 }, 00:35:09.753 "base_bdevs_list": [ 00:35:09.753 { 00:35:09.753 "name": "spare", 00:35:09.753 "uuid": "0b97d7fc-ec52-5378-9869-6c4a812985ee", 00:35:09.753 "is_configured": true, 00:35:09.753 "data_offset": 2048, 00:35:09.753 "data_size": 63488 00:35:09.753 }, 00:35:09.753 { 00:35:09.753 "name": "BaseBdev2", 00:35:09.753 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:09.753 "is_configured": true, 00:35:09.753 "data_offset": 2048, 00:35:09.753 "data_size": 63488 00:35:09.753 }, 00:35:09.753 { 00:35:09.753 "name": "BaseBdev3", 00:35:09.753 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:09.753 "is_configured": true, 00:35:09.753 "data_offset": 2048, 00:35:09.753 "data_size": 63488 00:35:09.753 }, 00:35:09.753 { 00:35:09.753 "name": "BaseBdev4", 00:35:09.753 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:09.753 "is_configured": true, 00:35:09.753 "data_offset": 2048, 00:35:09.753 "data_size": 63488 00:35:09.753 } 00:35:09.753 ] 00:35:09.753 }' 00:35:09.753 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:10.012 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:10.012 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:10.012 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:10.012 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:35:10.012 [2024-07-25 19:02:10.553372] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:10.271 [2024-07-25 19:02:10.646811] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:10.271 [2024-07-25 19:02:10.647037] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:10.271 [2024-07-25 19:02:10.647089] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:10.271 [2024-07-25 19:02:10.647256] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:10.271 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:10.271 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:10.271 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:10.271 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:10.271 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:10.271 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:10.271 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:10.271 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:10.271 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:10.271 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:10.271 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:10.271 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:10.528 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:10.528 "name": "raid_bdev1", 00:35:10.528 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:10.528 "strip_size_kb": 64, 00:35:10.528 "state": "online", 00:35:10.528 "raid_level": "raid5f", 00:35:10.528 "superblock": true, 00:35:10.528 "num_base_bdevs": 4, 00:35:10.528 "num_base_bdevs_discovered": 3, 00:35:10.528 "num_base_bdevs_operational": 3, 00:35:10.528 "base_bdevs_list": [ 00:35:10.528 { 00:35:10.528 "name": null, 00:35:10.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:10.528 "is_configured": false, 00:35:10.528 "data_offset": 2048, 00:35:10.528 "data_size": 63488 00:35:10.528 }, 00:35:10.528 { 00:35:10.528 "name": "BaseBdev2", 00:35:10.528 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:10.528 "is_configured": true, 00:35:10.528 "data_offset": 2048, 00:35:10.528 "data_size": 63488 00:35:10.528 }, 00:35:10.528 { 00:35:10.528 "name": "BaseBdev3", 00:35:10.528 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:10.528 "is_configured": true, 00:35:10.528 "data_offset": 2048, 00:35:10.528 "data_size": 63488 00:35:10.528 }, 00:35:10.528 { 00:35:10.528 "name": "BaseBdev4", 00:35:10.528 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:10.528 "is_configured": true, 00:35:10.528 "data_offset": 2048, 00:35:10.528 "data_size": 63488 00:35:10.528 } 00:35:10.528 ] 00:35:10.528 }' 00:35:10.528 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:10.528 19:02:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.093 19:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:11.093 19:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:11.093 19:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:11.093 19:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:11.093 19:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:11.093 19:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:11.093 19:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:11.093 19:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:11.093 "name": "raid_bdev1", 00:35:11.093 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:11.093 "strip_size_kb": 64, 00:35:11.093 "state": "online", 00:35:11.093 "raid_level": "raid5f", 00:35:11.093 "superblock": true, 00:35:11.093 "num_base_bdevs": 4, 00:35:11.093 "num_base_bdevs_discovered": 3, 00:35:11.093 "num_base_bdevs_operational": 3, 00:35:11.093 "base_bdevs_list": [ 00:35:11.093 { 00:35:11.093 "name": null, 00:35:11.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:11.093 "is_configured": false, 00:35:11.093 "data_offset": 2048, 00:35:11.093 "data_size": 63488 00:35:11.093 }, 00:35:11.093 { 00:35:11.093 "name": "BaseBdev2", 00:35:11.093 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:11.093 "is_configured": true, 
00:35:11.093 "data_offset": 2048, 00:35:11.093 "data_size": 63488 00:35:11.093 }, 00:35:11.093 { 00:35:11.093 "name": "BaseBdev3", 00:35:11.093 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:11.093 "is_configured": true, 00:35:11.093 "data_offset": 2048, 00:35:11.093 "data_size": 63488 00:35:11.093 }, 00:35:11.093 { 00:35:11.093 "name": "BaseBdev4", 00:35:11.093 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:11.093 "is_configured": true, 00:35:11.093 "data_offset": 2048, 00:35:11.093 "data_size": 63488 00:35:11.093 } 00:35:11.093 ] 00:35:11.093 }' 00:35:11.094 19:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:11.352 19:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:11.352 19:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:11.352 19:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:11.352 19:02:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:11.611 [2024-07-25 19:02:11.986520] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:11.611 [2024-07-25 19:02:12.003053] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:35:11.611 [2024-07-25 19:02:12.013523] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:11.611 19:02:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:35:12.548 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:12.548 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:12.548 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:12.548 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:12.548 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:12.548 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:12.548 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:12.806 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:12.806 "name": "raid_bdev1", 00:35:12.806 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:12.806 "strip_size_kb": 64, 00:35:12.806 "state": "online", 00:35:12.806 "raid_level": "raid5f", 00:35:12.806 "superblock": true, 00:35:12.806 "num_base_bdevs": 4, 00:35:12.806 "num_base_bdevs_discovered": 4, 00:35:12.806 "num_base_bdevs_operational": 4, 00:35:12.806 "process": { 00:35:12.806 "type": "rebuild", 00:35:12.806 "target": "spare", 00:35:12.806 "progress": { 00:35:12.806 "blocks": 23040, 00:35:12.806 "percent": 12 00:35:12.806 } 00:35:12.806 }, 00:35:12.806 "base_bdevs_list": [ 00:35:12.806 { 00:35:12.806 "name": "spare", 00:35:12.806 "uuid": "0b97d7fc-ec52-5378-9869-6c4a812985ee", 00:35:12.806 "is_configured": true, 00:35:12.806 "data_offset": 2048, 00:35:12.806 "data_size": 63488 00:35:12.806 }, 00:35:12.806 { 00:35:12.806 "name": "BaseBdev2", 
00:35:12.806 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:12.806 "is_configured": true, 00:35:12.806 "data_offset": 2048, 00:35:12.806 "data_size": 63488 00:35:12.806 }, 00:35:12.806 { 00:35:12.806 "name": "BaseBdev3", 00:35:12.806 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:12.806 "is_configured": true, 00:35:12.806 "data_offset": 2048, 00:35:12.806 "data_size": 63488 00:35:12.806 }, 00:35:12.806 { 00:35:12.806 "name": "BaseBdev4", 00:35:12.806 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:12.806 "is_configured": true, 00:35:12.806 "data_offset": 2048, 00:35:12.806 "data_size": 63488 00:35:12.806 } 00:35:12.806 ] 00:35:12.806 }' 00:35:12.806 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:12.806 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:12.806 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:12.806 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:12.806 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:35:12.806 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:35:12.806 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:35:12.806 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:35:12.806 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid5f = raid1 ']' 00:35:12.806 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=1270 00:35:12.806 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:35:12.806 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:12.806 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:12.806 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:12.806 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:12.806 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:12.806 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:12.807 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:13.064 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:13.064 "name": "raid_bdev1", 00:35:13.064 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:13.064 "strip_size_kb": 64, 00:35:13.064 "state": "online", 00:35:13.064 "raid_level": "raid5f", 00:35:13.064 "superblock": true, 00:35:13.064 "num_base_bdevs": 4, 00:35:13.064 "num_base_bdevs_discovered": 4, 00:35:13.064 "num_base_bdevs_operational": 4, 00:35:13.064 "process": { 00:35:13.064 "type": "rebuild", 00:35:13.064 "target": "spare", 00:35:13.064 "progress": { 00:35:13.064 "blocks": 28800, 00:35:13.064 "percent": 15 00:35:13.064 } 00:35:13.064 }, 00:35:13.064 "base_bdevs_list": [ 00:35:13.064 { 00:35:13.064 "name": "spare", 00:35:13.064 "uuid": 
"0b97d7fc-ec52-5378-9869-6c4a812985ee", 00:35:13.064 "is_configured": true, 00:35:13.064 "data_offset": 2048, 00:35:13.064 "data_size": 63488 00:35:13.064 }, 00:35:13.064 { 00:35:13.064 "name": "BaseBdev2", 00:35:13.064 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:13.064 "is_configured": true, 00:35:13.064 "data_offset": 2048, 00:35:13.064 "data_size": 63488 00:35:13.064 }, 00:35:13.064 { 00:35:13.064 "name": "BaseBdev3", 00:35:13.064 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:13.064 "is_configured": true, 00:35:13.064 "data_offset": 2048, 00:35:13.064 "data_size": 63488 00:35:13.064 }, 00:35:13.064 { 00:35:13.064 "name": "BaseBdev4", 00:35:13.064 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:13.064 "is_configured": true, 00:35:13.064 "data_offset": 2048, 00:35:13.064 "data_size": 63488 00:35:13.064 } 00:35:13.064 ] 00:35:13.064 }' 00:35:13.064 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:13.064 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:13.064 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:13.321 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:13.321 19:02:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:35:14.254 19:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:35:14.254 19:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:14.254 19:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:14.254 19:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:14.254 19:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:14.254 19:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:14.254 19:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:14.254 19:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:14.513 19:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:14.513 "name": "raid_bdev1", 00:35:14.513 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:14.513 "strip_size_kb": 64, 00:35:14.513 "state": "online", 00:35:14.513 "raid_level": "raid5f", 00:35:14.513 "superblock": true, 00:35:14.513 "num_base_bdevs": 4, 00:35:14.513 "num_base_bdevs_discovered": 4, 00:35:14.513 "num_base_bdevs_operational": 4, 00:35:14.513 "process": { 00:35:14.513 "type": "rebuild", 00:35:14.513 "target": "spare", 00:35:14.513 "progress": { 00:35:14.513 "blocks": 53760, 00:35:14.513 "percent": 28 00:35:14.513 } 00:35:14.513 }, 00:35:14.513 "base_bdevs_list": [ 00:35:14.513 { 00:35:14.513 "name": "spare", 00:35:14.513 "uuid": "0b97d7fc-ec52-5378-9869-6c4a812985ee", 00:35:14.513 "is_configured": true, 00:35:14.513 "data_offset": 2048, 00:35:14.513 "data_size": 63488 00:35:14.513 }, 00:35:14.513 { 00:35:14.513 "name": "BaseBdev2", 00:35:14.513 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:14.513 "is_configured": true, 00:35:14.513 "data_offset": 2048, 00:35:14.513 "data_size": 
63488 00:35:14.513 }, 00:35:14.513 { 00:35:14.513 "name": "BaseBdev3", 00:35:14.513 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:14.513 "is_configured": true, 00:35:14.513 "data_offset": 2048, 00:35:14.514 "data_size": 63488 00:35:14.514 }, 00:35:14.514 { 00:35:14.514 "name": "BaseBdev4", 00:35:14.514 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:14.514 "is_configured": true, 00:35:14.514 "data_offset": 2048, 00:35:14.514 "data_size": 63488 00:35:14.514 } 00:35:14.514 ] 00:35:14.514 }' 00:35:14.514 19:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:14.514 19:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:14.514 19:02:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:14.514 19:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:14.514 19:02:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:35:15.451 19:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:35:15.451 19:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:15.451 19:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:15.709 19:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:15.709 19:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:15.709 19:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:15.709 19:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:15.709 19:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:15.709 19:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:15.709 "name": "raid_bdev1", 00:35:15.709 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:15.709 "strip_size_kb": 64, 00:35:15.709 "state": "online", 00:35:15.709 "raid_level": "raid5f", 00:35:15.709 "superblock": true, 00:35:15.709 "num_base_bdevs": 4, 00:35:15.709 "num_base_bdevs_discovered": 4, 00:35:15.709 "num_base_bdevs_operational": 4, 00:35:15.709 "process": { 00:35:15.709 "type": "rebuild", 00:35:15.709 "target": "spare", 00:35:15.709 "progress": { 00:35:15.709 "blocks": 80640, 00:35:15.709 "percent": 42 00:35:15.709 } 00:35:15.709 }, 00:35:15.709 "base_bdevs_list": [ 00:35:15.709 { 00:35:15.709 "name": "spare", 00:35:15.709 "uuid": "0b97d7fc-ec52-5378-9869-6c4a812985ee", 00:35:15.709 "is_configured": true, 00:35:15.709 "data_offset": 2048, 00:35:15.709 "data_size": 63488 00:35:15.709 }, 00:35:15.709 { 00:35:15.709 "name": "BaseBdev2", 00:35:15.709 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:15.709 "is_configured": true, 00:35:15.709 "data_offset": 2048, 00:35:15.709 "data_size": 63488 00:35:15.709 }, 00:35:15.709 { 00:35:15.709 "name": "BaseBdev3", 00:35:15.709 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:15.709 "is_configured": true, 00:35:15.709 "data_offset": 2048, 00:35:15.709 "data_size": 63488 00:35:15.709 }, 00:35:15.709 { 00:35:15.709 "name": "BaseBdev4", 00:35:15.709 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 
00:35:15.709 "is_configured": true, 00:35:15.709 "data_offset": 2048, 00:35:15.709 "data_size": 63488 00:35:15.709 } 00:35:15.709 ] 00:35:15.709 }' 00:35:15.709 19:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:15.968 19:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:15.968 19:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:15.968 19:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:15.968 19:02:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:35:16.902 19:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:35:16.902 19:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:16.902 19:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:16.902 19:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:16.902 19:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:16.903 19:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:16.903 19:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:16.903 19:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:17.161 19:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:17.161 "name": "raid_bdev1", 00:35:17.161 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:17.161 "strip_size_kb": 64, 00:35:17.161 "state": "online", 00:35:17.161 "raid_level": "raid5f", 00:35:17.161 "superblock": true, 00:35:17.161 "num_base_bdevs": 4, 00:35:17.161 "num_base_bdevs_discovered": 4, 00:35:17.161 "num_base_bdevs_operational": 4, 00:35:17.161 "process": { 00:35:17.161 "type": "rebuild", 00:35:17.161 "target": "spare", 00:35:17.161 "progress": { 00:35:17.161 "blocks": 105600, 00:35:17.161 "percent": 55 00:35:17.161 } 00:35:17.161 }, 00:35:17.161 "base_bdevs_list": [ 00:35:17.161 { 00:35:17.161 "name": "spare", 00:35:17.161 "uuid": "0b97d7fc-ec52-5378-9869-6c4a812985ee", 00:35:17.161 "is_configured": true, 00:35:17.161 "data_offset": 2048, 00:35:17.161 "data_size": 63488 00:35:17.161 }, 00:35:17.161 { 00:35:17.161 "name": "BaseBdev2", 00:35:17.161 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:17.161 "is_configured": true, 00:35:17.161 "data_offset": 2048, 00:35:17.161 "data_size": 63488 00:35:17.161 }, 00:35:17.161 { 00:35:17.161 "name": "BaseBdev3", 00:35:17.161 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:17.161 "is_configured": true, 00:35:17.161 "data_offset": 2048, 00:35:17.161 "data_size": 63488 00:35:17.161 }, 00:35:17.161 { 00:35:17.161 "name": "BaseBdev4", 00:35:17.161 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:17.161 "is_configured": true, 00:35:17.161 "data_offset": 2048, 00:35:17.161 "data_size": 63488 00:35:17.161 } 00:35:17.161 ] 00:35:17.161 }' 00:35:17.161 19:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:17.161 19:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:35:17.161 19:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:17.161 19:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:17.161 19:02:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:35:18.097 19:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:35:18.097 19:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:18.097 19:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:18.097 19:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:18.097 19:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:18.097 19:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:18.097 19:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:18.097 19:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:18.356 19:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:18.356 "name": "raid_bdev1", 00:35:18.356 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:18.356 "strip_size_kb": 64, 00:35:18.356 "state": "online", 00:35:18.356 "raid_level": "raid5f", 00:35:18.356 "superblock": true, 00:35:18.356 "num_base_bdevs": 4, 00:35:18.356 "num_base_bdevs_discovered": 4, 00:35:18.356 "num_base_bdevs_operational": 4, 00:35:18.356 "process": { 00:35:18.356 "type": "rebuild", 00:35:18.356 "target": "spare", 00:35:18.356 "progress": { 00:35:18.356 "blocks": 128640, 00:35:18.356 "percent": 67 00:35:18.356 } 00:35:18.356 }, 00:35:18.356 "base_bdevs_list": [ 00:35:18.356 { 00:35:18.356 "name": "spare", 00:35:18.356 "uuid": "0b97d7fc-ec52-5378-9869-6c4a812985ee", 00:35:18.356 "is_configured": true, 00:35:18.356 "data_offset": 2048, 00:35:18.356 "data_size": 63488 00:35:18.356 }, 00:35:18.356 { 00:35:18.356 "name": "BaseBdev2", 00:35:18.356 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:18.356 "is_configured": true, 00:35:18.356 "data_offset": 2048, 00:35:18.356 "data_size": 63488 00:35:18.356 }, 00:35:18.356 { 00:35:18.356 "name": "BaseBdev3", 00:35:18.356 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:18.356 "is_configured": true, 00:35:18.356 "data_offset": 2048, 00:35:18.356 "data_size": 63488 00:35:18.356 }, 00:35:18.356 { 00:35:18.356 "name": "BaseBdev4", 00:35:18.356 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:18.356 "is_configured": true, 00:35:18.356 "data_offset": 2048, 00:35:18.356 "data_size": 63488 00:35:18.356 } 00:35:18.356 ] 00:35:18.356 }' 00:35:18.356 19:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:18.356 19:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:18.356 19:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:18.356 19:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:18.616 19:02:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:35:19.552 19:02:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:35:19.552 19:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:19.552 19:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:19.552 19:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:19.552 19:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:19.552 19:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:19.552 19:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:19.552 19:02:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:19.810 19:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:19.810 "name": "raid_bdev1", 00:35:19.810 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:19.810 "strip_size_kb": 64, 00:35:19.810 "state": "online", 00:35:19.810 "raid_level": "raid5f", 00:35:19.810 "superblock": true, 00:35:19.810 "num_base_bdevs": 4, 00:35:19.810 "num_base_bdevs_discovered": 4, 00:35:19.810 "num_base_bdevs_operational": 4, 00:35:19.810 "process": { 00:35:19.810 "type": "rebuild", 00:35:19.810 "target": "spare", 00:35:19.810 "progress": { 00:35:19.810 "blocks": 153600, 00:35:19.810 "percent": 80 00:35:19.810 } 00:35:19.810 }, 00:35:19.810 "base_bdevs_list": [ 00:35:19.810 { 00:35:19.810 "name": "spare", 00:35:19.810 "uuid": "0b97d7fc-ec52-5378-9869-6c4a812985ee", 00:35:19.810 "is_configured": true, 00:35:19.811 "data_offset": 2048, 00:35:19.811 "data_size": 63488 00:35:19.811 }, 00:35:19.811 { 00:35:19.811 "name": "BaseBdev2", 00:35:19.811 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:19.811 "is_configured": true, 00:35:19.811 "data_offset": 2048, 00:35:19.811 "data_size": 63488 00:35:19.811 }, 00:35:19.811 { 00:35:19.811 "name": "BaseBdev3", 00:35:19.811 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:19.811 "is_configured": true, 00:35:19.811 "data_offset": 2048, 00:35:19.811 "data_size": 63488 00:35:19.811 }, 00:35:19.811 { 00:35:19.811 "name": "BaseBdev4", 00:35:19.811 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:19.811 "is_configured": true, 00:35:19.811 "data_offset": 2048, 00:35:19.811 "data_size": 63488 00:35:19.811 } 00:35:19.811 ] 00:35:19.811 }' 00:35:19.811 19:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:19.811 19:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:19.811 19:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:19.811 19:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:19.811 19:02:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:35:20.748 19:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:35:20.748 19:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:20.748 19:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:20.748 19:02:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:20.748 19:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:20.748 19:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:20.748 19:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:20.748 19:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:21.006 19:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:21.006 "name": "raid_bdev1", 00:35:21.006 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:21.006 "strip_size_kb": 64, 00:35:21.006 "state": "online", 00:35:21.006 "raid_level": "raid5f", 00:35:21.006 "superblock": true, 00:35:21.006 "num_base_bdevs": 4, 00:35:21.006 "num_base_bdevs_discovered": 4, 00:35:21.006 "num_base_bdevs_operational": 4, 00:35:21.006 "process": { 00:35:21.006 "type": "rebuild", 00:35:21.006 "target": "spare", 00:35:21.006 "progress": { 00:35:21.006 "blocks": 180480, 00:35:21.006 "percent": 94 00:35:21.006 } 00:35:21.006 }, 00:35:21.006 "base_bdevs_list": [ 00:35:21.006 { 00:35:21.006 "name": "spare", 00:35:21.006 "uuid": "0b97d7fc-ec52-5378-9869-6c4a812985ee", 00:35:21.006 "is_configured": true, 00:35:21.006 "data_offset": 2048, 00:35:21.006 "data_size": 63488 00:35:21.006 }, 00:35:21.006 { 00:35:21.006 "name": "BaseBdev2", 00:35:21.006 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:21.006 "is_configured": true, 00:35:21.006 "data_offset": 2048, 00:35:21.006 "data_size": 63488 00:35:21.006 }, 00:35:21.006 { 00:35:21.006 "name": "BaseBdev3", 00:35:21.006 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:21.006 "is_configured": true, 00:35:21.006 "data_offset": 2048, 00:35:21.006 "data_size": 63488 00:35:21.006 }, 00:35:21.006 { 00:35:21.006 "name": "BaseBdev4", 00:35:21.006 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:21.006 "is_configured": true, 00:35:21.006 "data_offset": 2048, 00:35:21.006 "data_size": 63488 00:35:21.006 } 00:35:21.006 ] 00:35:21.006 }' 00:35:21.006 19:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:21.006 19:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:21.006 19:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:21.265 19:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:21.265 19:02:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:35:21.524 [2024-07-25 19:02:22.082578] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:35:21.524 [2024-07-25 19:02:22.082788] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:35:21.524 [2024-07-25 19:02:22.083062] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:22.091 19:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:35:22.091 19:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:22.091 19:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
00:35:22.091 19:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:22.091 19:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:22.091 19:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:22.091 19:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:22.091 19:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:22.350 19:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:22.350 "name": "raid_bdev1", 00:35:22.350 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:22.350 "strip_size_kb": 64, 00:35:22.350 "state": "online", 00:35:22.350 "raid_level": "raid5f", 00:35:22.350 "superblock": true, 00:35:22.350 "num_base_bdevs": 4, 00:35:22.350 "num_base_bdevs_discovered": 4, 00:35:22.350 "num_base_bdevs_operational": 4, 00:35:22.350 "base_bdevs_list": [ 00:35:22.350 { 00:35:22.350 "name": "spare", 00:35:22.350 "uuid": "0b97d7fc-ec52-5378-9869-6c4a812985ee", 00:35:22.350 "is_configured": true, 00:35:22.350 "data_offset": 2048, 00:35:22.350 "data_size": 63488 00:35:22.350 }, 00:35:22.350 { 00:35:22.350 "name": "BaseBdev2", 00:35:22.350 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:22.350 "is_configured": true, 00:35:22.350 "data_offset": 2048, 00:35:22.350 "data_size": 63488 00:35:22.350 }, 00:35:22.350 { 00:35:22.350 "name": "BaseBdev3", 00:35:22.350 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:22.350 "is_configured": true, 00:35:22.350 "data_offset": 2048, 00:35:22.350 "data_size": 63488 00:35:22.350 }, 00:35:22.350 { 00:35:22.350 "name": "BaseBdev4", 00:35:22.350 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:22.350 "is_configured": true, 00:35:22.350 "data_offset": 2048, 00:35:22.350 "data_size": 63488 00:35:22.350 } 00:35:22.350 ] 00:35:22.350 }' 00:35:22.350 19:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:22.350 19:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:35:22.350 19:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:22.609 19:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:35:22.609 19:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:35:22.609 19:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:22.609 19:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:22.609 19:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:22.609 19:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:22.609 19:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:22.609 19:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:22.609 19:02:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:22.868 19:02:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:22.868 "name": "raid_bdev1", 00:35:22.868 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:22.868 "strip_size_kb": 64, 00:35:22.868 "state": "online", 00:35:22.868 "raid_level": "raid5f", 00:35:22.868 "superblock": true, 00:35:22.868 "num_base_bdevs": 4, 00:35:22.868 "num_base_bdevs_discovered": 4, 00:35:22.868 "num_base_bdevs_operational": 4, 00:35:22.868 "base_bdevs_list": [ 00:35:22.868 { 00:35:22.868 "name": "spare", 00:35:22.868 "uuid": "0b97d7fc-ec52-5378-9869-6c4a812985ee", 00:35:22.868 "is_configured": true, 00:35:22.868 "data_offset": 2048, 00:35:22.868 "data_size": 63488 00:35:22.868 }, 00:35:22.868 { 00:35:22.868 "name": "BaseBdev2", 00:35:22.868 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:22.868 "is_configured": true, 00:35:22.868 "data_offset": 2048, 00:35:22.868 "data_size": 63488 00:35:22.868 }, 00:35:22.868 { 00:35:22.868 "name": "BaseBdev3", 00:35:22.868 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:22.868 "is_configured": true, 00:35:22.868 "data_offset": 2048, 00:35:22.868 "data_size": 63488 00:35:22.868 }, 00:35:22.868 { 00:35:22.868 "name": "BaseBdev4", 00:35:22.868 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:22.868 "is_configured": true, 00:35:22.868 "data_offset": 2048, 00:35:22.868 "data_size": 63488 00:35:22.868 } 00:35:22.868 ] 00:35:22.868 }' 00:35:22.868 19:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:22.868 19:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:22.868 19:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:22.868 19:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:22.868 19:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:35:22.868 19:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:22.868 19:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:22.868 19:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:22.868 19:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:22.868 19:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:22.868 19:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:22.868 19:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:22.868 19:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:22.868 19:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:22.868 19:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:22.868 19:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:23.126 19:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:23.126 "name": "raid_bdev1", 00:35:23.126 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:23.126 "strip_size_kb": 64, 00:35:23.126 "state": "online", 00:35:23.126 "raid_level": 
"raid5f", 00:35:23.126 "superblock": true, 00:35:23.126 "num_base_bdevs": 4, 00:35:23.126 "num_base_bdevs_discovered": 4, 00:35:23.126 "num_base_bdevs_operational": 4, 00:35:23.126 "base_bdevs_list": [ 00:35:23.126 { 00:35:23.126 "name": "spare", 00:35:23.126 "uuid": "0b97d7fc-ec52-5378-9869-6c4a812985ee", 00:35:23.126 "is_configured": true, 00:35:23.126 "data_offset": 2048, 00:35:23.126 "data_size": 63488 00:35:23.126 }, 00:35:23.126 { 00:35:23.126 "name": "BaseBdev2", 00:35:23.126 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:23.126 "is_configured": true, 00:35:23.126 "data_offset": 2048, 00:35:23.126 "data_size": 63488 00:35:23.126 }, 00:35:23.126 { 00:35:23.126 "name": "BaseBdev3", 00:35:23.126 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:23.126 "is_configured": true, 00:35:23.126 "data_offset": 2048, 00:35:23.126 "data_size": 63488 00:35:23.126 }, 00:35:23.126 { 00:35:23.126 "name": "BaseBdev4", 00:35:23.126 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:23.126 "is_configured": true, 00:35:23.126 "data_offset": 2048, 00:35:23.126 "data_size": 63488 00:35:23.126 } 00:35:23.126 ] 00:35:23.126 }' 00:35:23.126 19:02:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:23.126 19:02:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:23.692 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:23.692 [2024-07-25 19:02:24.263269] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:23.692 [2024-07-25 19:02:24.263425] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:23.692 [2024-07-25 19:02:24.263659] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:23.692 [2024-07-25 19:02:24.263856] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:23.692 [2024-07-25 19:02:24.263939] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:35:23.949 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:23.949 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:35:24.206 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:35:24.206 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:35:24.206 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:35:24.206 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:35:24.206 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:24.206 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:35:24.206 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:24.206 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:35:24.206 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:24.206 19:02:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:35:24.206 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:24.206 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:24.206 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:35:24.206 /dev/nbd0 00:35:24.206 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:24.206 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:24.206 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:35:24.206 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:35:24.206 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:24.465 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:24.465 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:35:24.465 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:35:24.465 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:24.465 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:24.465 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:24.465 1+0 records in 00:35:24.465 1+0 records out 00:35:24.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000769863 s, 5.3 MB/s 00:35:24.465 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:24.465 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:35:24.465 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:24.465 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:24.465 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:35:24.465 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:24.465 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:24.465 19:02:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:35:24.724 /dev/nbd1 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:24.724 19:02:25 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:24.724 1+0 records in 00:35:24.724 1+0 records out 00:35:24.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287577 s, 14.2 MB/s 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:24.724 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:35:24.983 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:25.242 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:25.242 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:25.242 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:25.242 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:25.242 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:25.242 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:35:25.242 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:35:25.242 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:25.243 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:35:25.243 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:25.501 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:25.501 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:25.501 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:25.501 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:25.501 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:25.501 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:35:25.501 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:35:25.501 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:35:25.502 19:02:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:35:25.502 19:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:25.760 [2024-07-25 19:02:26.319917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:25.760 [2024-07-25 19:02:26.320008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:25.760 [2024-07-25 19:02:26.320077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:35:25.760 [2024-07-25 19:02:26.320109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:25.760 [2024-07-25 19:02:26.322752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:25.760 [2024-07-25 19:02:26.322811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:25.760 [2024-07-25 19:02:26.322937] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:35:25.760 [2024-07-25 19:02:26.323006] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:25.760 [2024-07-25 19:02:26.323150] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:25.760 [2024-07-25 19:02:26.323234] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:25.760 [2024-07-25 19:02:26.323311] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:25.760 spare 00:35:25.760 19:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:35:25.760 19:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:25.760 19:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:25.760 19:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:25.760 19:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:25.760 19:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:25.760 19:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 
-- # local raid_bdev_info 00:35:25.760 19:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:25.760 19:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:25.760 19:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:26.019 19:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:26.019 19:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:26.019 [2024-07-25 19:02:26.423401] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012d80 00:35:26.019 [2024-07-25 19:02:26.423421] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:35:26.019 [2024-07-25 19:02:26.423562] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049440 00:35:26.019 [2024-07-25 19:02:26.431636] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012d80 00:35:26.019 [2024-07-25 19:02:26.431660] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012d80 00:35:26.019 [2024-07-25 19:02:26.431839] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:26.019 19:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:26.019 "name": "raid_bdev1", 00:35:26.019 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:26.019 "strip_size_kb": 64, 00:35:26.019 "state": "online", 00:35:26.019 "raid_level": "raid5f", 00:35:26.019 "superblock": true, 00:35:26.019 "num_base_bdevs": 4, 00:35:26.019 "num_base_bdevs_discovered": 4, 00:35:26.019 "num_base_bdevs_operational": 4, 00:35:26.019 "base_bdevs_list": [ 00:35:26.019 { 00:35:26.019 "name": "spare", 00:35:26.019 "uuid": "0b97d7fc-ec52-5378-9869-6c4a812985ee", 00:35:26.019 "is_configured": true, 00:35:26.019 "data_offset": 2048, 00:35:26.019 "data_size": 63488 00:35:26.019 }, 00:35:26.019 { 00:35:26.019 "name": "BaseBdev2", 00:35:26.019 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:26.019 "is_configured": true, 00:35:26.019 "data_offset": 2048, 00:35:26.019 "data_size": 63488 00:35:26.019 }, 00:35:26.019 { 00:35:26.019 "name": "BaseBdev3", 00:35:26.019 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:26.019 "is_configured": true, 00:35:26.019 "data_offset": 2048, 00:35:26.019 "data_size": 63488 00:35:26.019 }, 00:35:26.019 { 00:35:26.019 "name": "BaseBdev4", 00:35:26.019 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:26.019 "is_configured": true, 00:35:26.019 "data_offset": 2048, 00:35:26.019 "data_size": 63488 00:35:26.019 } 00:35:26.019 ] 00:35:26.019 }' 00:35:26.019 19:02:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:26.019 19:02:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:26.586 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:26.586 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:26.586 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:26.586 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:26.586 19:02:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:26.586 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:26.586 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:26.845 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:26.845 "name": "raid_bdev1", 00:35:26.845 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:26.845 "strip_size_kb": 64, 00:35:26.845 "state": "online", 00:35:26.845 "raid_level": "raid5f", 00:35:26.845 "superblock": true, 00:35:26.845 "num_base_bdevs": 4, 00:35:26.845 "num_base_bdevs_discovered": 4, 00:35:26.845 "num_base_bdevs_operational": 4, 00:35:26.845 "base_bdevs_list": [ 00:35:26.845 { 00:35:26.845 "name": "spare", 00:35:26.845 "uuid": "0b97d7fc-ec52-5378-9869-6c4a812985ee", 00:35:26.845 "is_configured": true, 00:35:26.845 "data_offset": 2048, 00:35:26.845 "data_size": 63488 00:35:26.845 }, 00:35:26.845 { 00:35:26.845 "name": "BaseBdev2", 00:35:26.845 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:26.845 "is_configured": true, 00:35:26.845 "data_offset": 2048, 00:35:26.845 "data_size": 63488 00:35:26.845 }, 00:35:26.845 { 00:35:26.845 "name": "BaseBdev3", 00:35:26.845 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:26.845 "is_configured": true, 00:35:26.845 "data_offset": 2048, 00:35:26.845 "data_size": 63488 00:35:26.845 }, 00:35:26.845 { 00:35:26.845 "name": "BaseBdev4", 00:35:26.845 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:26.845 "is_configured": true, 00:35:26.845 "data_offset": 2048, 00:35:26.845 "data_size": 63488 00:35:26.845 } 00:35:26.845 ] 00:35:26.845 }' 00:35:26.845 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:26.845 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:27.103 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:27.103 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:27.103 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:27.103 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:35:27.103 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:35:27.103 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:35:27.361 [2024-07-25 19:02:27.793483] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:27.361 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:27.361 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:27.361 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:27.361 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:27.361 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:35:27.361 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:27.361 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:27.361 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:27.361 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:27.361 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:27.361 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:27.361 19:02:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:27.619 19:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:27.619 "name": "raid_bdev1", 00:35:27.619 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:27.619 "strip_size_kb": 64, 00:35:27.619 "state": "online", 00:35:27.619 "raid_level": "raid5f", 00:35:27.619 "superblock": true, 00:35:27.619 "num_base_bdevs": 4, 00:35:27.619 "num_base_bdevs_discovered": 3, 00:35:27.619 "num_base_bdevs_operational": 3, 00:35:27.619 "base_bdevs_list": [ 00:35:27.619 { 00:35:27.619 "name": null, 00:35:27.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:27.619 "is_configured": false, 00:35:27.619 "data_offset": 2048, 00:35:27.619 "data_size": 63488 00:35:27.619 }, 00:35:27.619 { 00:35:27.619 "name": "BaseBdev2", 00:35:27.619 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:27.619 "is_configured": true, 00:35:27.619 "data_offset": 2048, 00:35:27.619 "data_size": 63488 00:35:27.619 }, 00:35:27.619 { 00:35:27.619 "name": "BaseBdev3", 00:35:27.619 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:27.619 "is_configured": true, 00:35:27.619 "data_offset": 2048, 00:35:27.619 "data_size": 63488 00:35:27.619 }, 00:35:27.619 { 00:35:27.619 "name": "BaseBdev4", 00:35:27.619 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:27.619 "is_configured": true, 00:35:27.619 "data_offset": 2048, 00:35:27.619 "data_size": 63488 00:35:27.619 } 00:35:27.619 ] 00:35:27.619 }' 00:35:27.619 19:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:27.619 19:02:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:28.220 19:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:28.220 [2024-07-25 19:02:28.793685] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:28.220 [2024-07-25 19:02:28.793940] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:35:28.220 [2024-07-25 19:02:28.793954] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
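For reference, the sequence above re-adds the delayed passthru bdev "spare" to raid_bdev1 and then waits on the rebuild by repeatedly running the state checks (verify_raid_bdev_state / verify_raid_bdev_process), which poll bdev_raid_get_bdevs over the test's RPC socket and filter the result with jq. A minimal sketch of that polling pattern follows, assuming the same rpc.py path, socket and field names that appear in this log; the helper name wait_for_raid_state is illustrative and not part of bdev_raid.sh.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    wait_for_raid_state() {
        # Poll until the named raid bdev reports the expected state and
        # operational base bdev count, mirroring the checks in the trace.
        local name=$1 want_state=$2 want_operational=$3 info
        for _ in $(seq 1 20); do
            info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
                jq -r ".[] | select(.name == \"$name\")")
            if [[ $(jq -r .state <<<"$info") == "$want_state" &&
                  $(jq -r .num_base_bdevs_operational <<<"$info") -eq "$want_operational" ]]; then
                return 0
            fi
            sleep 1
        done
        return 1
    }

    # Example matching the checks in this run: raid_bdev1 stays online with
    # 4 base bdevs while "spare" is attached, and with 3 after it is removed.
    wait_for_raid_state raid_bdev1 online 4

The rebuild-progress assertions seen below use the same pattern, reading '.process.type // "none"' and '.process.target // "none"' from the filtered JSON.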
00:35:28.220 [2024-07-25 19:02:28.794021] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:28.478 [2024-07-25 19:02:28.810418] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000495e0 00:35:28.478 [2024-07-25 19:02:28.821158] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:28.478 19:02:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:35:29.416 19:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:29.416 19:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:29.416 19:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:29.416 19:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:29.416 19:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:29.416 19:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:29.416 19:02:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:29.676 19:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:29.676 "name": "raid_bdev1", 00:35:29.676 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:29.676 "strip_size_kb": 64, 00:35:29.676 "state": "online", 00:35:29.676 "raid_level": "raid5f", 00:35:29.676 "superblock": true, 00:35:29.676 "num_base_bdevs": 4, 00:35:29.676 "num_base_bdevs_discovered": 4, 00:35:29.676 "num_base_bdevs_operational": 4, 00:35:29.676 "process": { 00:35:29.676 "type": "rebuild", 00:35:29.676 "target": "spare", 00:35:29.676 "progress": { 00:35:29.676 "blocks": 23040, 00:35:29.676 "percent": 12 00:35:29.676 } 00:35:29.676 }, 00:35:29.676 "base_bdevs_list": [ 00:35:29.676 { 00:35:29.676 "name": "spare", 00:35:29.676 "uuid": "0b97d7fc-ec52-5378-9869-6c4a812985ee", 00:35:29.676 "is_configured": true, 00:35:29.676 "data_offset": 2048, 00:35:29.676 "data_size": 63488 00:35:29.676 }, 00:35:29.676 { 00:35:29.676 "name": "BaseBdev2", 00:35:29.676 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:29.676 "is_configured": true, 00:35:29.676 "data_offset": 2048, 00:35:29.676 "data_size": 63488 00:35:29.676 }, 00:35:29.676 { 00:35:29.676 "name": "BaseBdev3", 00:35:29.676 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:29.676 "is_configured": true, 00:35:29.676 "data_offset": 2048, 00:35:29.676 "data_size": 63488 00:35:29.676 }, 00:35:29.676 { 00:35:29.676 "name": "BaseBdev4", 00:35:29.676 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:29.676 "is_configured": true, 00:35:29.676 "data_offset": 2048, 00:35:29.676 "data_size": 63488 00:35:29.676 } 00:35:29.676 ] 00:35:29.676 }' 00:35:29.676 19:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:29.676 19:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:29.676 19:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:29.676 19:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:29.676 19:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:35:29.935 [2024-07-25 19:02:30.402292] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:29.935 [2024-07-25 19:02:30.433040] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:29.935 [2024-07-25 19:02:30.433133] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:29.935 [2024-07-25 19:02:30.433152] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:29.935 [2024-07-25 19:02:30.433160] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:29.935 19:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:29.935 19:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:29.935 19:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:29.935 19:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:29.935 19:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:29.935 19:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:29.935 19:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:29.935 19:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:29.935 19:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:29.935 19:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:29.935 19:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:29.935 19:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:30.194 19:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:30.194 "name": "raid_bdev1", 00:35:30.194 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:30.194 "strip_size_kb": 64, 00:35:30.194 "state": "online", 00:35:30.194 "raid_level": "raid5f", 00:35:30.194 "superblock": true, 00:35:30.194 "num_base_bdevs": 4, 00:35:30.194 "num_base_bdevs_discovered": 3, 00:35:30.194 "num_base_bdevs_operational": 3, 00:35:30.194 "base_bdevs_list": [ 00:35:30.194 { 00:35:30.194 "name": null, 00:35:30.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:30.194 "is_configured": false, 00:35:30.194 "data_offset": 2048, 00:35:30.194 "data_size": 63488 00:35:30.194 }, 00:35:30.194 { 00:35:30.194 "name": "BaseBdev2", 00:35:30.194 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:30.194 "is_configured": true, 00:35:30.194 "data_offset": 2048, 00:35:30.194 "data_size": 63488 00:35:30.194 }, 00:35:30.194 { 00:35:30.194 "name": "BaseBdev3", 00:35:30.194 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:30.194 "is_configured": true, 00:35:30.194 "data_offset": 2048, 00:35:30.194 "data_size": 63488 00:35:30.194 }, 00:35:30.194 { 00:35:30.194 "name": "BaseBdev4", 00:35:30.194 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:30.194 "is_configured": true, 00:35:30.194 "data_offset": 2048, 00:35:30.194 "data_size": 63488 
00:35:30.194 } 00:35:30.194 ] 00:35:30.194 }' 00:35:30.194 19:02:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:30.194 19:02:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:30.761 19:02:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:31.019 [2024-07-25 19:02:31.419900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:31.019 [2024-07-25 19:02:31.420000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:31.019 [2024-07-25 19:02:31.420049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:35:31.019 [2024-07-25 19:02:31.420074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:31.019 [2024-07-25 19:02:31.420661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:31.019 [2024-07-25 19:02:31.420690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:31.019 [2024-07-25 19:02:31.420815] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:35:31.019 [2024-07-25 19:02:31.420828] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:35:31.019 [2024-07-25 19:02:31.420838] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:35:31.019 [2024-07-25 19:02:31.420878] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:31.019 [2024-07-25 19:02:31.437117] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049920 00:35:31.019 spare 00:35:31.020 [2024-07-25 19:02:31.446364] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:31.020 19:02:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:35:31.955 19:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:31.955 19:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:31.955 19:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:31.955 19:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:31.955 19:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:31.955 19:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:31.955 19:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:32.213 19:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:32.213 "name": "raid_bdev1", 00:35:32.213 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:32.213 "strip_size_kb": 64, 00:35:32.213 "state": "online", 00:35:32.213 "raid_level": "raid5f", 00:35:32.213 "superblock": true, 00:35:32.213 "num_base_bdevs": 4, 00:35:32.213 "num_base_bdevs_discovered": 4, 00:35:32.213 "num_base_bdevs_operational": 4, 00:35:32.213 "process": { 00:35:32.213 "type": "rebuild", 00:35:32.213 "target": "spare", 
00:35:32.213 "progress": { 00:35:32.213 "blocks": 23040, 00:35:32.213 "percent": 12 00:35:32.213 } 00:35:32.213 }, 00:35:32.213 "base_bdevs_list": [ 00:35:32.213 { 00:35:32.213 "name": "spare", 00:35:32.213 "uuid": "0b97d7fc-ec52-5378-9869-6c4a812985ee", 00:35:32.213 "is_configured": true, 00:35:32.213 "data_offset": 2048, 00:35:32.213 "data_size": 63488 00:35:32.213 }, 00:35:32.213 { 00:35:32.213 "name": "BaseBdev2", 00:35:32.213 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:32.213 "is_configured": true, 00:35:32.213 "data_offset": 2048, 00:35:32.213 "data_size": 63488 00:35:32.213 }, 00:35:32.213 { 00:35:32.213 "name": "BaseBdev3", 00:35:32.213 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:32.213 "is_configured": true, 00:35:32.213 "data_offset": 2048, 00:35:32.213 "data_size": 63488 00:35:32.213 }, 00:35:32.213 { 00:35:32.213 "name": "BaseBdev4", 00:35:32.213 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:32.213 "is_configured": true, 00:35:32.213 "data_offset": 2048, 00:35:32.213 "data_size": 63488 00:35:32.213 } 00:35:32.213 ] 00:35:32.213 }' 00:35:32.213 19:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:32.213 19:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:32.213 19:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:32.213 19:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:32.213 19:02:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:35:32.472 [2024-07-25 19:02:33.019358] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:32.731 [2024-07-25 19:02:33.057975] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:32.731 [2024-07-25 19:02:33.058040] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:32.731 [2024-07-25 19:02:33.058057] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:32.731 [2024-07-25 19:02:33.058074] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:32.731 19:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:32.731 19:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:32.731 19:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:32.731 19:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:32.731 19:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:32.731 19:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:32.731 19:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:32.731 19:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:32.731 19:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:32.731 19:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:32.731 19:02:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:32.731 19:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:32.989 19:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:32.989 "name": "raid_bdev1", 00:35:32.989 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:32.989 "strip_size_kb": 64, 00:35:32.989 "state": "online", 00:35:32.989 "raid_level": "raid5f", 00:35:32.989 "superblock": true, 00:35:32.989 "num_base_bdevs": 4, 00:35:32.989 "num_base_bdevs_discovered": 3, 00:35:32.989 "num_base_bdevs_operational": 3, 00:35:32.989 "base_bdevs_list": [ 00:35:32.989 { 00:35:32.989 "name": null, 00:35:32.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:32.989 "is_configured": false, 00:35:32.989 "data_offset": 2048, 00:35:32.989 "data_size": 63488 00:35:32.989 }, 00:35:32.989 { 00:35:32.989 "name": "BaseBdev2", 00:35:32.989 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:32.989 "is_configured": true, 00:35:32.989 "data_offset": 2048, 00:35:32.989 "data_size": 63488 00:35:32.989 }, 00:35:32.989 { 00:35:32.989 "name": "BaseBdev3", 00:35:32.989 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:32.989 "is_configured": true, 00:35:32.989 "data_offset": 2048, 00:35:32.989 "data_size": 63488 00:35:32.989 }, 00:35:32.989 { 00:35:32.989 "name": "BaseBdev4", 00:35:32.989 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:32.989 "is_configured": true, 00:35:32.989 "data_offset": 2048, 00:35:32.990 "data_size": 63488 00:35:32.990 } 00:35:32.990 ] 00:35:32.990 }' 00:35:32.990 19:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:32.990 19:02:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:33.556 19:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:33.556 19:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:33.556 19:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:33.556 19:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:33.556 19:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:33.556 19:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:33.556 19:02:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:33.815 19:02:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:33.815 "name": "raid_bdev1", 00:35:33.815 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:33.815 "strip_size_kb": 64, 00:35:33.815 "state": "online", 00:35:33.815 "raid_level": "raid5f", 00:35:33.815 "superblock": true, 00:35:33.815 "num_base_bdevs": 4, 00:35:33.815 "num_base_bdevs_discovered": 3, 00:35:33.815 "num_base_bdevs_operational": 3, 00:35:33.815 "base_bdevs_list": [ 00:35:33.815 { 00:35:33.815 "name": null, 00:35:33.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:33.815 "is_configured": false, 00:35:33.815 "data_offset": 2048, 00:35:33.815 "data_size": 63488 00:35:33.815 }, 00:35:33.815 { 00:35:33.815 "name": "BaseBdev2", 00:35:33.815 "uuid": 
"6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:33.815 "is_configured": true, 00:35:33.815 "data_offset": 2048, 00:35:33.815 "data_size": 63488 00:35:33.815 }, 00:35:33.815 { 00:35:33.815 "name": "BaseBdev3", 00:35:33.815 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:33.815 "is_configured": true, 00:35:33.815 "data_offset": 2048, 00:35:33.815 "data_size": 63488 00:35:33.815 }, 00:35:33.815 { 00:35:33.815 "name": "BaseBdev4", 00:35:33.815 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:33.815 "is_configured": true, 00:35:33.815 "data_offset": 2048, 00:35:33.815 "data_size": 63488 00:35:33.815 } 00:35:33.815 ] 00:35:33.815 }' 00:35:33.815 19:02:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:33.815 19:02:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:33.815 19:02:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:33.815 19:02:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:33.815 19:02:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:35:34.074 19:02:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:34.332 [2024-07-25 19:02:34.673703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:34.332 [2024-07-25 19:02:34.673823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:34.332 [2024-07-25 19:02:34.673876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:35:34.332 [2024-07-25 19:02:34.673898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:34.332 [2024-07-25 19:02:34.674493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:34.332 [2024-07-25 19:02:34.674535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:34.332 [2024-07-25 19:02:34.674680] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:35:34.332 [2024-07-25 19:02:34.674695] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:35:34.332 [2024-07-25 19:02:34.674703] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:35:34.332 BaseBdev1 00:35:34.332 19:02:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:35:35.266 19:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:35.266 19:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:35.266 19:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:35.266 19:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:35.266 19:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:35.266 19:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:35.266 19:02:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:35.266 19:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:35.266 19:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:35.266 19:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:35.266 19:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:35.266 19:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:35.524 19:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:35.524 "name": "raid_bdev1", 00:35:35.524 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:35.524 "strip_size_kb": 64, 00:35:35.524 "state": "online", 00:35:35.524 "raid_level": "raid5f", 00:35:35.524 "superblock": true, 00:35:35.524 "num_base_bdevs": 4, 00:35:35.524 "num_base_bdevs_discovered": 3, 00:35:35.524 "num_base_bdevs_operational": 3, 00:35:35.524 "base_bdevs_list": [ 00:35:35.524 { 00:35:35.524 "name": null, 00:35:35.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:35.524 "is_configured": false, 00:35:35.524 "data_offset": 2048, 00:35:35.524 "data_size": 63488 00:35:35.524 }, 00:35:35.524 { 00:35:35.524 "name": "BaseBdev2", 00:35:35.524 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:35.524 "is_configured": true, 00:35:35.524 "data_offset": 2048, 00:35:35.524 "data_size": 63488 00:35:35.524 }, 00:35:35.524 { 00:35:35.524 "name": "BaseBdev3", 00:35:35.524 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:35.524 "is_configured": true, 00:35:35.524 "data_offset": 2048, 00:35:35.524 "data_size": 63488 00:35:35.524 }, 00:35:35.524 { 00:35:35.524 "name": "BaseBdev4", 00:35:35.524 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:35.524 "is_configured": true, 00:35:35.524 "data_offset": 2048, 00:35:35.524 "data_size": 63488 00:35:35.524 } 00:35:35.524 ] 00:35:35.524 }' 00:35:35.524 19:02:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:35.524 19:02:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:36.092 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:36.092 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:36.092 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:36.092 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:36.092 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:36.092 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:36.092 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:36.351 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:36.351 "name": "raid_bdev1", 00:35:36.351 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:36.351 "strip_size_kb": 64, 00:35:36.351 "state": "online", 00:35:36.351 "raid_level": "raid5f", 00:35:36.351 "superblock": true, 
00:35:36.351 "num_base_bdevs": 4, 00:35:36.351 "num_base_bdevs_discovered": 3, 00:35:36.351 "num_base_bdevs_operational": 3, 00:35:36.351 "base_bdevs_list": [ 00:35:36.351 { 00:35:36.351 "name": null, 00:35:36.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:36.351 "is_configured": false, 00:35:36.351 "data_offset": 2048, 00:35:36.351 "data_size": 63488 00:35:36.351 }, 00:35:36.351 { 00:35:36.351 "name": "BaseBdev2", 00:35:36.351 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:36.351 "is_configured": true, 00:35:36.351 "data_offset": 2048, 00:35:36.351 "data_size": 63488 00:35:36.351 }, 00:35:36.351 { 00:35:36.351 "name": "BaseBdev3", 00:35:36.351 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:36.351 "is_configured": true, 00:35:36.351 "data_offset": 2048, 00:35:36.351 "data_size": 63488 00:35:36.351 }, 00:35:36.351 { 00:35:36.351 "name": "BaseBdev4", 00:35:36.351 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:36.351 "is_configured": true, 00:35:36.351 "data_offset": 2048, 00:35:36.351 "data_size": 63488 00:35:36.351 } 00:35:36.351 ] 00:35:36.351 }' 00:35:36.351 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:36.351 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:36.351 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:36.351 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:36.351 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:36.351 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:35:36.351 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:36.351 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:36.351 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:36.351 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:36.351 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:36.351 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:36.351 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:36.351 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:36.351 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:35:36.351 19:02:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:36.610 [2024-07-25 19:02:37.012171] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:36.610 [2024-07-25 19:02:37.012372] 
bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:35:36.610 [2024-07-25 19:02:37.012386] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:35:36.610 request: 00:35:36.610 { 00:35:36.610 "base_bdev": "BaseBdev1", 00:35:36.610 "raid_bdev": "raid_bdev1", 00:35:36.610 "method": "bdev_raid_add_base_bdev", 00:35:36.610 "req_id": 1 00:35:36.610 } 00:35:36.610 Got JSON-RPC error response 00:35:36.610 response: 00:35:36.610 { 00:35:36.610 "code": -22, 00:35:36.610 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:35:36.610 } 00:35:36.610 19:02:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:35:36.610 19:02:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:36.610 19:02:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:36.610 19:02:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:36.610 19:02:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:35:37.546 19:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:37.546 19:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:37.546 19:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:37.546 19:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:37.546 19:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:37.546 19:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:37.546 19:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:37.546 19:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:37.546 19:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:37.546 19:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:37.546 19:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:37.546 19:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:37.805 19:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:37.805 "name": "raid_bdev1", 00:35:37.805 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:37.805 "strip_size_kb": 64, 00:35:37.805 "state": "online", 00:35:37.805 "raid_level": "raid5f", 00:35:37.805 "superblock": true, 00:35:37.805 "num_base_bdevs": 4, 00:35:37.805 "num_base_bdevs_discovered": 3, 00:35:37.805 "num_base_bdevs_operational": 3, 00:35:37.805 "base_bdevs_list": [ 00:35:37.805 { 00:35:37.805 "name": null, 00:35:37.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:37.805 "is_configured": false, 00:35:37.805 "data_offset": 2048, 00:35:37.805 "data_size": 63488 00:35:37.805 }, 00:35:37.805 { 00:35:37.805 "name": "BaseBdev2", 00:35:37.805 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:37.805 "is_configured": true, 00:35:37.805 "data_offset": 2048, 00:35:37.805 
"data_size": 63488 00:35:37.805 }, 00:35:37.805 { 00:35:37.805 "name": "BaseBdev3", 00:35:37.805 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:37.805 "is_configured": true, 00:35:37.805 "data_offset": 2048, 00:35:37.805 "data_size": 63488 00:35:37.805 }, 00:35:37.805 { 00:35:37.805 "name": "BaseBdev4", 00:35:37.805 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:37.805 "is_configured": true, 00:35:37.805 "data_offset": 2048, 00:35:37.805 "data_size": 63488 00:35:37.805 } 00:35:37.805 ] 00:35:37.805 }' 00:35:37.805 19:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:37.805 19:02:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:38.372 19:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:38.372 19:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:38.372 19:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:38.372 19:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:38.372 19:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:38.372 19:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:38.372 19:02:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:38.631 19:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:38.631 "name": "raid_bdev1", 00:35:38.631 "uuid": "e22b3af3-5cd7-43e3-87e9-5f2b127bfcc1", 00:35:38.631 "strip_size_kb": 64, 00:35:38.631 "state": "online", 00:35:38.631 "raid_level": "raid5f", 00:35:38.631 "superblock": true, 00:35:38.631 "num_base_bdevs": 4, 00:35:38.631 "num_base_bdevs_discovered": 3, 00:35:38.631 "num_base_bdevs_operational": 3, 00:35:38.631 "base_bdevs_list": [ 00:35:38.631 { 00:35:38.631 "name": null, 00:35:38.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:38.631 "is_configured": false, 00:35:38.631 "data_offset": 2048, 00:35:38.631 "data_size": 63488 00:35:38.631 }, 00:35:38.631 { 00:35:38.631 "name": "BaseBdev2", 00:35:38.631 "uuid": "6dee923b-d548-5e83-8988-46b35ea60f79", 00:35:38.631 "is_configured": true, 00:35:38.631 "data_offset": 2048, 00:35:38.631 "data_size": 63488 00:35:38.631 }, 00:35:38.631 { 00:35:38.631 "name": "BaseBdev3", 00:35:38.631 "uuid": "aa8c1acd-ba04-5501-b185-8de71d8d01d2", 00:35:38.631 "is_configured": true, 00:35:38.631 "data_offset": 2048, 00:35:38.631 "data_size": 63488 00:35:38.631 }, 00:35:38.631 { 00:35:38.631 "name": "BaseBdev4", 00:35:38.631 "uuid": "6ec3fc2d-111b-5405-87a9-be64ac149234", 00:35:38.631 "is_configured": true, 00:35:38.631 "data_offset": 2048, 00:35:38.631 "data_size": 63488 00:35:38.631 } 00:35:38.631 ] 00:35:38.631 }' 00:35:38.631 19:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:38.631 19:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:38.631 19:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:38.631 19:02:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:38.631 19:02:39 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@798 -- # killprocess 156836 00:35:38.631 19:02:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 156836 ']' 00:35:38.631 19:02:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 156836 00:35:38.631 19:02:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:35:38.631 19:02:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:38.631 19:02:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 156836 00:35:38.631 19:02:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:38.631 19:02:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:38.631 19:02:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 156836' 00:35:38.631 killing process with pid 156836 00:35:38.631 Received shutdown signal, test time was about 60.000000 seconds 00:35:38.631 00:35:38.631 Latency(us) 00:35:38.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:38.631 =================================================================================================================== 00:35:38.631 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:38.631 19:02:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 156836 00:35:38.631 [2024-07-25 19:02:39.161491] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:38.631 19:02:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 156836 00:35:38.631 [2024-07-25 19:02:39.161622] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:38.631 [2024-07-25 19:02:39.161717] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:38.631 [2024-07-25 19:02:39.161729] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state offline 00:35:39.197 [2024-07-25 19:02:39.696684] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:40.573 ************************************ 00:35:40.573 END TEST raid5f_rebuild_test_sb 00:35:40.573 ************************************ 00:35:40.573 19:02:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:35:40.573 00:35:40.573 real 0m39.080s 00:35:40.573 user 0m57.440s 00:35:40.573 sys 0m5.335s 00:35:40.573 19:02:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:40.573 19:02:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:40.832 19:02:41 bdev_raid -- bdev/bdev_raid.sh@976 -- # base_blocklen=4096 00:35:40.832 19:02:41 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:35:40.832 19:02:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:35:40.832 19:02:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:40.832 19:02:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:40.832 ************************************ 00:35:40.832 START TEST raid_state_function_test_sb_4k 00:35:40.832 ************************************ 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test 
raid1 2 true 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=157842 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 157842' 00:35:40.832 Process raid pid: 157842 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 157842 /var/tmp/spdk-raid.sock 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 157842 ']' 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:40.832 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:40.832 19:02:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:40.832 [2024-07-25 19:02:41.312037] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:35:40.832 [2024-07-25 19:02:41.312207] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:41.091 [2024-07-25 19:02:41.474166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:41.349 [2024-07-25 19:02:41.683289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:41.349 [2024-07-25 19:02:41.878470] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:41.916 19:02:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:41.916 19:02:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:35:41.916 19:02:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:41.916 [2024-07-25 19:02:42.464443] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:41.916 [2024-07-25 19:02:42.464528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:41.916 [2024-07-25 19:02:42.464539] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:41.916 [2024-07-25 19:02:42.464567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:41.916 19:02:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:41.916 19:02:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:41.916 19:02:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:41.916 19:02:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:41.916 19:02:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:41.916 19:02:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:41.916 19:02:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:41.916 19:02:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:41.916 19:02:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:41.916 19:02:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:41.916 19:02:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:35:41.916 19:02:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:42.175 19:02:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:42.175 "name": "Existed_Raid", 00:35:42.175 "uuid": "f1478bbd-1166-4cb0-9fe6-2f1e606688ee", 00:35:42.175 "strip_size_kb": 0, 00:35:42.175 "state": "configuring", 00:35:42.175 "raid_level": "raid1", 00:35:42.175 "superblock": true, 00:35:42.175 "num_base_bdevs": 2, 00:35:42.175 "num_base_bdevs_discovered": 0, 00:35:42.175 "num_base_bdevs_operational": 2, 00:35:42.175 "base_bdevs_list": [ 00:35:42.175 { 00:35:42.175 "name": "BaseBdev1", 00:35:42.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:42.175 "is_configured": false, 00:35:42.175 "data_offset": 0, 00:35:42.175 "data_size": 0 00:35:42.175 }, 00:35:42.175 { 00:35:42.175 "name": "BaseBdev2", 00:35:42.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:42.175 "is_configured": false, 00:35:42.175 "data_offset": 0, 00:35:42.175 "data_size": 0 00:35:42.175 } 00:35:42.175 ] 00:35:42.175 }' 00:35:42.175 19:02:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:42.175 19:02:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:42.743 19:02:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:43.005 [2024-07-25 19:02:43.452483] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:43.005 [2024-07-25 19:02:43.452526] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:35:43.005 19:02:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:43.263 [2024-07-25 19:02:43.692540] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:43.264 [2024-07-25 19:02:43.692592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:43.264 [2024-07-25 19:02:43.692600] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:43.264 [2024-07-25 19:02:43.692622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:43.264 19:02:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:35:43.521 [2024-07-25 19:02:43.899589] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:43.521 BaseBdev1 00:35:43.521 19:02:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:35:43.522 19:02:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:35:43.522 19:02:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:43.522 19:02:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:35:43.522 19:02:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # 
[[ -z '' ]] 00:35:43.522 19:02:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:43.522 19:02:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:43.780 19:02:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:44.039 [ 00:35:44.039 { 00:35:44.039 "name": "BaseBdev1", 00:35:44.039 "aliases": [ 00:35:44.039 "6182b81b-ccd1-4a83-a589-de894c09b038" 00:35:44.039 ], 00:35:44.039 "product_name": "Malloc disk", 00:35:44.039 "block_size": 4096, 00:35:44.039 "num_blocks": 8192, 00:35:44.039 "uuid": "6182b81b-ccd1-4a83-a589-de894c09b038", 00:35:44.039 "assigned_rate_limits": { 00:35:44.039 "rw_ios_per_sec": 0, 00:35:44.039 "rw_mbytes_per_sec": 0, 00:35:44.039 "r_mbytes_per_sec": 0, 00:35:44.039 "w_mbytes_per_sec": 0 00:35:44.039 }, 00:35:44.039 "claimed": true, 00:35:44.039 "claim_type": "exclusive_write", 00:35:44.039 "zoned": false, 00:35:44.039 "supported_io_types": { 00:35:44.039 "read": true, 00:35:44.039 "write": true, 00:35:44.039 "unmap": true, 00:35:44.039 "flush": true, 00:35:44.039 "reset": true, 00:35:44.039 "nvme_admin": false, 00:35:44.039 "nvme_io": false, 00:35:44.039 "nvme_io_md": false, 00:35:44.039 "write_zeroes": true, 00:35:44.039 "zcopy": true, 00:35:44.039 "get_zone_info": false, 00:35:44.039 "zone_management": false, 00:35:44.039 "zone_append": false, 00:35:44.039 "compare": false, 00:35:44.039 "compare_and_write": false, 00:35:44.039 "abort": true, 00:35:44.039 "seek_hole": false, 00:35:44.039 "seek_data": false, 00:35:44.039 "copy": true, 00:35:44.039 "nvme_iov_md": false 00:35:44.039 }, 00:35:44.039 "memory_domains": [ 00:35:44.039 { 00:35:44.039 "dma_device_id": "system", 00:35:44.039 "dma_device_type": 1 00:35:44.039 }, 00:35:44.039 { 00:35:44.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:44.039 "dma_device_type": 2 00:35:44.039 } 00:35:44.039 ], 00:35:44.039 "driver_specific": {} 00:35:44.039 } 00:35:44.039 ] 00:35:44.039 19:02:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:35:44.039 19:02:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:44.039 19:02:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:44.039 19:02:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:44.039 19:02:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:44.039 19:02:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:44.039 19:02:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:44.039 19:02:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:44.039 19:02:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:44.039 19:02:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:44.039 19:02:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:44.039 19:02:44 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:44.039 19:02:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:44.039 19:02:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:44.039 "name": "Existed_Raid", 00:35:44.039 "uuid": "d2bdd4f2-e3e2-47b4-94d0-6d5d0a1ff11d", 00:35:44.039 "strip_size_kb": 0, 00:35:44.039 "state": "configuring", 00:35:44.039 "raid_level": "raid1", 00:35:44.039 "superblock": true, 00:35:44.039 "num_base_bdevs": 2, 00:35:44.039 "num_base_bdevs_discovered": 1, 00:35:44.039 "num_base_bdevs_operational": 2, 00:35:44.039 "base_bdevs_list": [ 00:35:44.039 { 00:35:44.039 "name": "BaseBdev1", 00:35:44.039 "uuid": "6182b81b-ccd1-4a83-a589-de894c09b038", 00:35:44.039 "is_configured": true, 00:35:44.039 "data_offset": 256, 00:35:44.039 "data_size": 7936 00:35:44.039 }, 00:35:44.039 { 00:35:44.039 "name": "BaseBdev2", 00:35:44.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:44.039 "is_configured": false, 00:35:44.039 "data_offset": 0, 00:35:44.039 "data_size": 0 00:35:44.039 } 00:35:44.039 ] 00:35:44.039 }' 00:35:44.039 19:02:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:44.039 19:02:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:44.606 19:02:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:44.864 [2024-07-25 19:02:45.379847] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:44.864 [2024-07-25 19:02:45.379916] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:35:44.864 19:02:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:45.123 [2024-07-25 19:02:45.631915] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:45.123 [2024-07-25 19:02:45.634144] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:45.123 [2024-07-25 19:02:45.634195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:45.123 19:02:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:35:45.123 19:02:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:35:45.123 19:02:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:45.123 19:02:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:45.123 19:02:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:45.123 19:02:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:45.123 19:02:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:45.123 19:02:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:35:45.123 19:02:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:45.123 19:02:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:45.123 19:02:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:45.123 19:02:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:45.123 19:02:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:45.123 19:02:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:45.382 19:02:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:45.382 "name": "Existed_Raid", 00:35:45.382 "uuid": "5780e6d3-93e6-4dc1-b644-6049af9c97f8", 00:35:45.382 "strip_size_kb": 0, 00:35:45.382 "state": "configuring", 00:35:45.382 "raid_level": "raid1", 00:35:45.382 "superblock": true, 00:35:45.382 "num_base_bdevs": 2, 00:35:45.382 "num_base_bdevs_discovered": 1, 00:35:45.382 "num_base_bdevs_operational": 2, 00:35:45.382 "base_bdevs_list": [ 00:35:45.382 { 00:35:45.382 "name": "BaseBdev1", 00:35:45.382 "uuid": "6182b81b-ccd1-4a83-a589-de894c09b038", 00:35:45.382 "is_configured": true, 00:35:45.382 "data_offset": 256, 00:35:45.382 "data_size": 7936 00:35:45.382 }, 00:35:45.382 { 00:35:45.382 "name": "BaseBdev2", 00:35:45.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:45.382 "is_configured": false, 00:35:45.382 "data_offset": 0, 00:35:45.382 "data_size": 0 00:35:45.382 } 00:35:45.382 ] 00:35:45.382 }' 00:35:45.382 19:02:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:45.383 19:02:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:45.951 19:02:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:35:46.210 [2024-07-25 19:02:46.620323] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:46.210 [2024-07-25 19:02:46.620589] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:35:46.210 [2024-07-25 19:02:46.620602] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:46.210 [2024-07-25 19:02:46.620742] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:35:46.210 [2024-07-25 19:02:46.621070] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:35:46.210 [2024-07-25 19:02:46.621089] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:35:46.210 [2024-07-25 19:02:46.621221] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:46.210 BaseBdev2 00:35:46.210 19:02:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:35:46.210 19:02:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:35:46.210 19:02:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:46.210 19:02:46 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:35:46.210 19:02:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:46.210 19:02:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:46.210 19:02:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:46.469 19:02:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:46.469 [ 00:35:46.469 { 00:35:46.469 "name": "BaseBdev2", 00:35:46.469 "aliases": [ 00:35:46.469 "79845221-1574-4b29-b986-cf20328cb722" 00:35:46.469 ], 00:35:46.469 "product_name": "Malloc disk", 00:35:46.469 "block_size": 4096, 00:35:46.469 "num_blocks": 8192, 00:35:46.469 "uuid": "79845221-1574-4b29-b986-cf20328cb722", 00:35:46.469 "assigned_rate_limits": { 00:35:46.469 "rw_ios_per_sec": 0, 00:35:46.469 "rw_mbytes_per_sec": 0, 00:35:46.469 "r_mbytes_per_sec": 0, 00:35:46.469 "w_mbytes_per_sec": 0 00:35:46.469 }, 00:35:46.469 "claimed": true, 00:35:46.469 "claim_type": "exclusive_write", 00:35:46.469 "zoned": false, 00:35:46.469 "supported_io_types": { 00:35:46.469 "read": true, 00:35:46.469 "write": true, 00:35:46.469 "unmap": true, 00:35:46.469 "flush": true, 00:35:46.469 "reset": true, 00:35:46.469 "nvme_admin": false, 00:35:46.469 "nvme_io": false, 00:35:46.469 "nvme_io_md": false, 00:35:46.469 "write_zeroes": true, 00:35:46.469 "zcopy": true, 00:35:46.469 "get_zone_info": false, 00:35:46.469 "zone_management": false, 00:35:46.469 "zone_append": false, 00:35:46.469 "compare": false, 00:35:46.469 "compare_and_write": false, 00:35:46.469 "abort": true, 00:35:46.469 "seek_hole": false, 00:35:46.469 "seek_data": false, 00:35:46.469 "copy": true, 00:35:46.469 "nvme_iov_md": false 00:35:46.469 }, 00:35:46.469 "memory_domains": [ 00:35:46.469 { 00:35:46.469 "dma_device_id": "system", 00:35:46.469 "dma_device_type": 1 00:35:46.469 }, 00:35:46.469 { 00:35:46.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:46.469 "dma_device_type": 2 00:35:46.469 } 00:35:46.469 ], 00:35:46.469 "driver_specific": {} 00:35:46.469 } 00:35:46.469 ] 00:35:46.469 19:02:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:35:46.469 19:02:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:35:46.469 19:02:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:35:46.469 19:02:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:35:46.469 19:02:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:46.469 19:02:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:46.469 19:02:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:46.469 19:02:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:46.469 19:02:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:46.469 19:02:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- 
# local raid_bdev_info 00:35:46.469 19:02:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:46.469 19:02:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:46.469 19:02:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:46.469 19:02:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:46.469 19:02:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:46.728 19:02:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:46.728 "name": "Existed_Raid", 00:35:46.728 "uuid": "5780e6d3-93e6-4dc1-b644-6049af9c97f8", 00:35:46.728 "strip_size_kb": 0, 00:35:46.728 "state": "online", 00:35:46.728 "raid_level": "raid1", 00:35:46.728 "superblock": true, 00:35:46.728 "num_base_bdevs": 2, 00:35:46.728 "num_base_bdevs_discovered": 2, 00:35:46.728 "num_base_bdevs_operational": 2, 00:35:46.728 "base_bdevs_list": [ 00:35:46.728 { 00:35:46.728 "name": "BaseBdev1", 00:35:46.728 "uuid": "6182b81b-ccd1-4a83-a589-de894c09b038", 00:35:46.728 "is_configured": true, 00:35:46.728 "data_offset": 256, 00:35:46.728 "data_size": 7936 00:35:46.728 }, 00:35:46.728 { 00:35:46.728 "name": "BaseBdev2", 00:35:46.728 "uuid": "79845221-1574-4b29-b986-cf20328cb722", 00:35:46.728 "is_configured": true, 00:35:46.728 "data_offset": 256, 00:35:46.728 "data_size": 7936 00:35:46.728 } 00:35:46.728 ] 00:35:46.728 }' 00:35:46.728 19:02:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:46.728 19:02:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:47.297 19:02:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:35:47.297 19:02:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:35:47.297 19:02:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:35:47.297 19:02:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:35:47.297 19:02:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:35:47.297 19:02:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:35:47.297 19:02:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:35:47.297 19:02:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:35:47.556 [2024-07-25 19:02:47.992760] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:47.556 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:35:47.556 "name": "Existed_Raid", 00:35:47.556 "aliases": [ 00:35:47.556 "5780e6d3-93e6-4dc1-b644-6049af9c97f8" 00:35:47.556 ], 00:35:47.556 "product_name": "Raid Volume", 00:35:47.556 "block_size": 4096, 00:35:47.556 "num_blocks": 7936, 00:35:47.556 "uuid": "5780e6d3-93e6-4dc1-b644-6049af9c97f8", 00:35:47.556 "assigned_rate_limits": { 00:35:47.556 "rw_ios_per_sec": 0, 00:35:47.556 "rw_mbytes_per_sec": 0, 00:35:47.556 
"r_mbytes_per_sec": 0, 00:35:47.556 "w_mbytes_per_sec": 0 00:35:47.556 }, 00:35:47.556 "claimed": false, 00:35:47.556 "zoned": false, 00:35:47.556 "supported_io_types": { 00:35:47.556 "read": true, 00:35:47.556 "write": true, 00:35:47.556 "unmap": false, 00:35:47.556 "flush": false, 00:35:47.557 "reset": true, 00:35:47.557 "nvme_admin": false, 00:35:47.557 "nvme_io": false, 00:35:47.557 "nvme_io_md": false, 00:35:47.557 "write_zeroes": true, 00:35:47.557 "zcopy": false, 00:35:47.557 "get_zone_info": false, 00:35:47.557 "zone_management": false, 00:35:47.557 "zone_append": false, 00:35:47.557 "compare": false, 00:35:47.557 "compare_and_write": false, 00:35:47.557 "abort": false, 00:35:47.557 "seek_hole": false, 00:35:47.557 "seek_data": false, 00:35:47.557 "copy": false, 00:35:47.557 "nvme_iov_md": false 00:35:47.557 }, 00:35:47.557 "memory_domains": [ 00:35:47.557 { 00:35:47.557 "dma_device_id": "system", 00:35:47.557 "dma_device_type": 1 00:35:47.557 }, 00:35:47.557 { 00:35:47.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:47.557 "dma_device_type": 2 00:35:47.557 }, 00:35:47.557 { 00:35:47.557 "dma_device_id": "system", 00:35:47.557 "dma_device_type": 1 00:35:47.557 }, 00:35:47.557 { 00:35:47.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:47.557 "dma_device_type": 2 00:35:47.557 } 00:35:47.557 ], 00:35:47.557 "driver_specific": { 00:35:47.557 "raid": { 00:35:47.557 "uuid": "5780e6d3-93e6-4dc1-b644-6049af9c97f8", 00:35:47.557 "strip_size_kb": 0, 00:35:47.557 "state": "online", 00:35:47.557 "raid_level": "raid1", 00:35:47.557 "superblock": true, 00:35:47.557 "num_base_bdevs": 2, 00:35:47.557 "num_base_bdevs_discovered": 2, 00:35:47.557 "num_base_bdevs_operational": 2, 00:35:47.557 "base_bdevs_list": [ 00:35:47.557 { 00:35:47.557 "name": "BaseBdev1", 00:35:47.557 "uuid": "6182b81b-ccd1-4a83-a589-de894c09b038", 00:35:47.557 "is_configured": true, 00:35:47.557 "data_offset": 256, 00:35:47.557 "data_size": 7936 00:35:47.557 }, 00:35:47.557 { 00:35:47.557 "name": "BaseBdev2", 00:35:47.557 "uuid": "79845221-1574-4b29-b986-cf20328cb722", 00:35:47.557 "is_configured": true, 00:35:47.557 "data_offset": 256, 00:35:47.557 "data_size": 7936 00:35:47.557 } 00:35:47.557 ] 00:35:47.557 } 00:35:47.557 } 00:35:47.557 }' 00:35:47.557 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:47.557 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:35:47.557 BaseBdev2' 00:35:47.557 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:47.557 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:35:47.557 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:47.816 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:47.816 "name": "BaseBdev1", 00:35:47.816 "aliases": [ 00:35:47.816 "6182b81b-ccd1-4a83-a589-de894c09b038" 00:35:47.816 ], 00:35:47.816 "product_name": "Malloc disk", 00:35:47.816 "block_size": 4096, 00:35:47.816 "num_blocks": 8192, 00:35:47.816 "uuid": "6182b81b-ccd1-4a83-a589-de894c09b038", 00:35:47.816 "assigned_rate_limits": { 00:35:47.816 "rw_ios_per_sec": 0, 00:35:47.816 "rw_mbytes_per_sec": 0, 00:35:47.816 "r_mbytes_per_sec": 0, 
00:35:47.816 "w_mbytes_per_sec": 0 00:35:47.816 }, 00:35:47.816 "claimed": true, 00:35:47.816 "claim_type": "exclusive_write", 00:35:47.816 "zoned": false, 00:35:47.816 "supported_io_types": { 00:35:47.816 "read": true, 00:35:47.816 "write": true, 00:35:47.816 "unmap": true, 00:35:47.816 "flush": true, 00:35:47.816 "reset": true, 00:35:47.816 "nvme_admin": false, 00:35:47.816 "nvme_io": false, 00:35:47.816 "nvme_io_md": false, 00:35:47.816 "write_zeroes": true, 00:35:47.816 "zcopy": true, 00:35:47.816 "get_zone_info": false, 00:35:47.816 "zone_management": false, 00:35:47.816 "zone_append": false, 00:35:47.816 "compare": false, 00:35:47.816 "compare_and_write": false, 00:35:47.816 "abort": true, 00:35:47.816 "seek_hole": false, 00:35:47.816 "seek_data": false, 00:35:47.816 "copy": true, 00:35:47.816 "nvme_iov_md": false 00:35:47.816 }, 00:35:47.816 "memory_domains": [ 00:35:47.816 { 00:35:47.816 "dma_device_id": "system", 00:35:47.816 "dma_device_type": 1 00:35:47.816 }, 00:35:47.816 { 00:35:47.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:47.816 "dma_device_type": 2 00:35:47.816 } 00:35:47.816 ], 00:35:47.816 "driver_specific": {} 00:35:47.816 }' 00:35:47.816 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:47.816 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:47.816 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:35:47.816 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:47.816 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:48.075 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:48.075 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:48.075 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:48.075 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:48.075 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:48.075 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:48.075 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:48.075 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:48.075 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:48.075 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:35:48.334 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:48.334 "name": "BaseBdev2", 00:35:48.334 "aliases": [ 00:35:48.334 "79845221-1574-4b29-b986-cf20328cb722" 00:35:48.334 ], 00:35:48.334 "product_name": "Malloc disk", 00:35:48.334 "block_size": 4096, 00:35:48.334 "num_blocks": 8192, 00:35:48.334 "uuid": "79845221-1574-4b29-b986-cf20328cb722", 00:35:48.334 "assigned_rate_limits": { 00:35:48.334 "rw_ios_per_sec": 0, 00:35:48.334 "rw_mbytes_per_sec": 0, 00:35:48.334 "r_mbytes_per_sec": 0, 00:35:48.334 "w_mbytes_per_sec": 0 00:35:48.334 }, 00:35:48.334 "claimed": true, 00:35:48.334 
"claim_type": "exclusive_write", 00:35:48.334 "zoned": false, 00:35:48.334 "supported_io_types": { 00:35:48.334 "read": true, 00:35:48.334 "write": true, 00:35:48.334 "unmap": true, 00:35:48.334 "flush": true, 00:35:48.334 "reset": true, 00:35:48.334 "nvme_admin": false, 00:35:48.334 "nvme_io": false, 00:35:48.334 "nvme_io_md": false, 00:35:48.334 "write_zeroes": true, 00:35:48.334 "zcopy": true, 00:35:48.334 "get_zone_info": false, 00:35:48.334 "zone_management": false, 00:35:48.334 "zone_append": false, 00:35:48.334 "compare": false, 00:35:48.334 "compare_and_write": false, 00:35:48.334 "abort": true, 00:35:48.334 "seek_hole": false, 00:35:48.334 "seek_data": false, 00:35:48.334 "copy": true, 00:35:48.334 "nvme_iov_md": false 00:35:48.334 }, 00:35:48.334 "memory_domains": [ 00:35:48.334 { 00:35:48.334 "dma_device_id": "system", 00:35:48.334 "dma_device_type": 1 00:35:48.334 }, 00:35:48.334 { 00:35:48.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:48.334 "dma_device_type": 2 00:35:48.334 } 00:35:48.334 ], 00:35:48.334 "driver_specific": {} 00:35:48.334 }' 00:35:48.334 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:48.334 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:48.334 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:35:48.334 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:48.593 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:48.593 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:48.593 19:02:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:48.593 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:48.593 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:48.593 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:48.593 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:48.593 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:48.593 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:35:48.851 [2024-07-25 19:02:49.396822] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:49.111 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:35:49.111 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:35:49.111 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:35:49.111 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:35:49.111 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:35:49.111 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:35:49.111 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:49.111 19:02:49 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:49.111 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:49.111 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:49.111 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:49.111 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:49.111 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:49.111 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:49.111 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:49.111 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:49.111 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:49.370 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:49.370 "name": "Existed_Raid", 00:35:49.370 "uuid": "5780e6d3-93e6-4dc1-b644-6049af9c97f8", 00:35:49.370 "strip_size_kb": 0, 00:35:49.370 "state": "online", 00:35:49.370 "raid_level": "raid1", 00:35:49.370 "superblock": true, 00:35:49.370 "num_base_bdevs": 2, 00:35:49.370 "num_base_bdevs_discovered": 1, 00:35:49.370 "num_base_bdevs_operational": 1, 00:35:49.370 "base_bdevs_list": [ 00:35:49.370 { 00:35:49.370 "name": null, 00:35:49.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:49.370 "is_configured": false, 00:35:49.370 "data_offset": 256, 00:35:49.370 "data_size": 7936 00:35:49.370 }, 00:35:49.370 { 00:35:49.370 "name": "BaseBdev2", 00:35:49.370 "uuid": "79845221-1574-4b29-b986-cf20328cb722", 00:35:49.370 "is_configured": true, 00:35:49.370 "data_offset": 256, 00:35:49.370 "data_size": 7936 00:35:49.370 } 00:35:49.370 ] 00:35:49.370 }' 00:35:49.370 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:49.371 19:02:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:49.939 19:02:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:35:49.939 19:02:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:35:49.939 19:02:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:49.939 19:02:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:35:50.201 19:02:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:35:50.201 19:02:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:50.201 19:02:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:35:50.503 [2024-07-25 19:02:50.821572] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:50.503 [2024-07-25 19:02:50.821693] 
bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:50.503 [2024-07-25 19:02:50.908091] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:50.503 [2024-07-25 19:02:50.908305] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:50.503 [2024-07-25 19:02:50.908427] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:35:50.503 19:02:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:35:50.503 19:02:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:35:50.503 19:02:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:50.503 19:02:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:35:50.776 19:02:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:35:50.776 19:02:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:35:50.776 19:02:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:35:50.776 19:02:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 157842 00:35:50.776 19:02:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 157842 ']' 00:35:50.776 19:02:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 157842 00:35:50.776 19:02:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:35:50.776 19:02:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:50.777 19:02:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 157842 00:35:50.777 19:02:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:50.777 19:02:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:50.777 19:02:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 157842' 00:35:50.777 killing process with pid 157842 00:35:50.777 19:02:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 157842 00:35:50.777 [2024-07-25 19:02:51.227646] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:50.777 19:02:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 157842 00:35:50.777 [2024-07-25 19:02:51.227919] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:52.154 19:02:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:35:52.154 00:35:52.154 real 0m11.183s 00:35:52.154 user 0m18.868s 00:35:52.155 sys 0m1.955s 00:35:52.155 19:02:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:52.155 19:02:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:52.155 ************************************ 00:35:52.155 END TEST raid_state_function_test_sb_4k 00:35:52.155 ************************************ 00:35:52.155 19:02:52 
bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:35:52.155 19:02:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:35:52.155 19:02:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:52.155 19:02:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:52.155 ************************************ 00:35:52.155 START TEST raid_superblock_test_4k 00:35:52.155 ************************************ 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@414 -- # local strip_size 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@427 -- # raid_pid=158211 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@428 -- # waitforlisten 158211 /var/tmp/spdk-raid.sock 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 158211 ']' 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:52.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
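[editor's note] For anyone replaying these raid1 cases by hand, the core RPC sequence the tests drive is condensed below. This is a sketch, not part of the captured trace: every command is copied from the xtrace above, and it assumes the bdev_svc app started by the harness is already listening on /var/tmp/spdk-raid.sock. The 32 MiB / 4096-byte-block malloc arguments match the "num_blocks": 8192 bdevs reported in the trace.

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # two 4k-block malloc base bdevs (32 MiB each => 8192 blocks)
    $RPC bdev_malloc_create 32 4096 -b BaseBdev1
    $RPC bdev_malloc_create 32 4096 -b BaseBdev2

    # raid1 volume with an on-disk superblock (-s), as in the tests above
    $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

    # inspect state / num_base_bdevs_discovered the way verify_raid_bdev_state does
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

    # removing one mirror leg leaves the raid1 bdev online with a single base bdev
    $RPC bdev_malloc_delete BaseBdev1
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

    # cleanup
    $RPC bdev_raid_delete Existed_Raid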
00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:52.155 19:02:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:35:52.155 [2024-07-25 19:02:52.592882] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:35:52.155 [2024-07-25 19:02:52.593350] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158211 ] 00:35:52.413 [2024-07-25 19:02:52.780438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:52.413 [2024-07-25 19:02:52.987074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:52.673 [2024-07-25 19:02:53.176697] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:52.939 19:02:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:52.939 19:02:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:35:52.939 19:02:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:35:52.939 19:02:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:35:52.939 19:02:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:35:52.939 19:02:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:35:52.939 19:02:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:35:52.939 19:02:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:52.939 19:02:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:35:52.939 19:02:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:52.939 19:02:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:35:53.199 malloc1 00:35:53.199 19:02:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:53.457 [2024-07-25 19:02:53.912835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:53.457 [2024-07-25 19:02:53.913533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:53.457 [2024-07-25 19:02:53.913846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:35:53.457 [2024-07-25 19:02:53.914072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:53.457 [2024-07-25 19:02:53.916899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:53.457 [2024-07-25 19:02:53.917142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:53.457 pt1 00:35:53.457 19:02:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:35:53.457 19:02:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:35:53.457 19:02:53 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:35:53.457 19:02:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:35:53.457 19:02:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:35:53.457 19:02:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:53.457 19:02:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:35:53.457 19:02:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:53.457 19:02:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:35:53.715 malloc2 00:35:53.715 19:02:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:53.972 [2024-07-25 19:02:54.335741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:53.972 [2024-07-25 19:02:54.336121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:53.972 [2024-07-25 19:02:54.336358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:35:53.972 [2024-07-25 19:02:54.336590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:53.972 [2024-07-25 19:02:54.339188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:53.972 [2024-07-25 19:02:54.339435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:53.972 pt2 00:35:53.972 19:02:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:35:53.972 19:02:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:35:53.972 19:02:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:35:54.231 [2024-07-25 19:02:54.579837] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:54.231 [2024-07-25 19:02:54.582163] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:54.231 [2024-07-25 19:02:54.582458] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:35:54.231 [2024-07-25 19:02:54.582586] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:54.231 [2024-07-25 19:02:54.582749] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:35:54.231 [2024-07-25 19:02:54.583275] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:35:54.231 [2024-07-25 19:02:54.583388] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:35:54.231 [2024-07-25 19:02:54.583624] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:54.231 19:02:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:54.231 19:02:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:54.231 
19:02:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:54.231 19:02:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:54.231 19:02:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:54.231 19:02:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:54.231 19:02:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:54.231 19:02:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:54.231 19:02:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:54.231 19:02:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:54.231 19:02:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:54.231 19:02:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:54.231 19:02:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:54.231 "name": "raid_bdev1", 00:35:54.231 "uuid": "c7ab71ce-184b-4972-b4e8-a383d9e5897e", 00:35:54.231 "strip_size_kb": 0, 00:35:54.231 "state": "online", 00:35:54.231 "raid_level": "raid1", 00:35:54.231 "superblock": true, 00:35:54.231 "num_base_bdevs": 2, 00:35:54.231 "num_base_bdevs_discovered": 2, 00:35:54.231 "num_base_bdevs_operational": 2, 00:35:54.231 "base_bdevs_list": [ 00:35:54.231 { 00:35:54.231 "name": "pt1", 00:35:54.231 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:54.231 "is_configured": true, 00:35:54.231 "data_offset": 256, 00:35:54.231 "data_size": 7936 00:35:54.231 }, 00:35:54.231 { 00:35:54.231 "name": "pt2", 00:35:54.231 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:54.231 "is_configured": true, 00:35:54.231 "data_offset": 256, 00:35:54.231 "data_size": 7936 00:35:54.231 } 00:35:54.231 ] 00:35:54.231 }' 00:35:54.231 19:02:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:54.231 19:02:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:35:54.798 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:35:54.798 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:35:54.799 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:35:54.799 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:35:54.799 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:35:54.799 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:35:54.799 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:54.799 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:35:55.058 [2024-07-25 19:02:55.464194] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:55.058 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:35:55.058 "name": "raid_bdev1", 00:35:55.058 
"aliases": [ 00:35:55.058 "c7ab71ce-184b-4972-b4e8-a383d9e5897e" 00:35:55.058 ], 00:35:55.058 "product_name": "Raid Volume", 00:35:55.058 "block_size": 4096, 00:35:55.058 "num_blocks": 7936, 00:35:55.058 "uuid": "c7ab71ce-184b-4972-b4e8-a383d9e5897e", 00:35:55.058 "assigned_rate_limits": { 00:35:55.058 "rw_ios_per_sec": 0, 00:35:55.058 "rw_mbytes_per_sec": 0, 00:35:55.058 "r_mbytes_per_sec": 0, 00:35:55.058 "w_mbytes_per_sec": 0 00:35:55.058 }, 00:35:55.058 "claimed": false, 00:35:55.058 "zoned": false, 00:35:55.058 "supported_io_types": { 00:35:55.058 "read": true, 00:35:55.058 "write": true, 00:35:55.058 "unmap": false, 00:35:55.058 "flush": false, 00:35:55.058 "reset": true, 00:35:55.058 "nvme_admin": false, 00:35:55.058 "nvme_io": false, 00:35:55.058 "nvme_io_md": false, 00:35:55.058 "write_zeroes": true, 00:35:55.058 "zcopy": false, 00:35:55.058 "get_zone_info": false, 00:35:55.058 "zone_management": false, 00:35:55.058 "zone_append": false, 00:35:55.058 "compare": false, 00:35:55.058 "compare_and_write": false, 00:35:55.058 "abort": false, 00:35:55.058 "seek_hole": false, 00:35:55.058 "seek_data": false, 00:35:55.058 "copy": false, 00:35:55.058 "nvme_iov_md": false 00:35:55.058 }, 00:35:55.058 "memory_domains": [ 00:35:55.058 { 00:35:55.058 "dma_device_id": "system", 00:35:55.058 "dma_device_type": 1 00:35:55.058 }, 00:35:55.058 { 00:35:55.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:55.058 "dma_device_type": 2 00:35:55.058 }, 00:35:55.058 { 00:35:55.058 "dma_device_id": "system", 00:35:55.058 "dma_device_type": 1 00:35:55.058 }, 00:35:55.058 { 00:35:55.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:55.058 "dma_device_type": 2 00:35:55.058 } 00:35:55.058 ], 00:35:55.058 "driver_specific": { 00:35:55.058 "raid": { 00:35:55.058 "uuid": "c7ab71ce-184b-4972-b4e8-a383d9e5897e", 00:35:55.058 "strip_size_kb": 0, 00:35:55.058 "state": "online", 00:35:55.058 "raid_level": "raid1", 00:35:55.058 "superblock": true, 00:35:55.058 "num_base_bdevs": 2, 00:35:55.058 "num_base_bdevs_discovered": 2, 00:35:55.058 "num_base_bdevs_operational": 2, 00:35:55.058 "base_bdevs_list": [ 00:35:55.058 { 00:35:55.058 "name": "pt1", 00:35:55.058 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:55.058 "is_configured": true, 00:35:55.058 "data_offset": 256, 00:35:55.058 "data_size": 7936 00:35:55.058 }, 00:35:55.058 { 00:35:55.058 "name": "pt2", 00:35:55.058 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:55.058 "is_configured": true, 00:35:55.058 "data_offset": 256, 00:35:55.058 "data_size": 7936 00:35:55.058 } 00:35:55.058 ] 00:35:55.058 } 00:35:55.058 } 00:35:55.058 }' 00:35:55.058 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:55.058 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:35:55.058 pt2' 00:35:55.058 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:55.058 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:55.058 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:35:55.318 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:55.318 "name": "pt1", 00:35:55.318 "aliases": [ 00:35:55.318 "00000000-0000-0000-0000-000000000001" 00:35:55.318 ], 00:35:55.318 
"product_name": "passthru", 00:35:55.318 "block_size": 4096, 00:35:55.318 "num_blocks": 8192, 00:35:55.318 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:55.318 "assigned_rate_limits": { 00:35:55.318 "rw_ios_per_sec": 0, 00:35:55.318 "rw_mbytes_per_sec": 0, 00:35:55.318 "r_mbytes_per_sec": 0, 00:35:55.318 "w_mbytes_per_sec": 0 00:35:55.318 }, 00:35:55.318 "claimed": true, 00:35:55.318 "claim_type": "exclusive_write", 00:35:55.318 "zoned": false, 00:35:55.318 "supported_io_types": { 00:35:55.318 "read": true, 00:35:55.318 "write": true, 00:35:55.318 "unmap": true, 00:35:55.318 "flush": true, 00:35:55.318 "reset": true, 00:35:55.318 "nvme_admin": false, 00:35:55.318 "nvme_io": false, 00:35:55.318 "nvme_io_md": false, 00:35:55.318 "write_zeroes": true, 00:35:55.318 "zcopy": true, 00:35:55.318 "get_zone_info": false, 00:35:55.318 "zone_management": false, 00:35:55.318 "zone_append": false, 00:35:55.318 "compare": false, 00:35:55.318 "compare_and_write": false, 00:35:55.318 "abort": true, 00:35:55.318 "seek_hole": false, 00:35:55.318 "seek_data": false, 00:35:55.318 "copy": true, 00:35:55.318 "nvme_iov_md": false 00:35:55.318 }, 00:35:55.318 "memory_domains": [ 00:35:55.318 { 00:35:55.318 "dma_device_id": "system", 00:35:55.318 "dma_device_type": 1 00:35:55.318 }, 00:35:55.318 { 00:35:55.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:55.318 "dma_device_type": 2 00:35:55.318 } 00:35:55.318 ], 00:35:55.318 "driver_specific": { 00:35:55.318 "passthru": { 00:35:55.318 "name": "pt1", 00:35:55.318 "base_bdev_name": "malloc1" 00:35:55.318 } 00:35:55.318 } 00:35:55.318 }' 00:35:55.318 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:55.318 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:55.318 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:35:55.318 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:55.318 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:55.318 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:55.318 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:55.577 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:55.577 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:55.577 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:55.577 19:02:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:55.577 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:55.577 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:55.577 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:35:55.577 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:55.836 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:55.836 "name": "pt2", 00:35:55.836 "aliases": [ 00:35:55.836 "00000000-0000-0000-0000-000000000002" 00:35:55.836 ], 00:35:55.836 "product_name": "passthru", 00:35:55.836 "block_size": 4096, 00:35:55.836 "num_blocks": 8192, 
00:35:55.836 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:55.836 "assigned_rate_limits": { 00:35:55.836 "rw_ios_per_sec": 0, 00:35:55.836 "rw_mbytes_per_sec": 0, 00:35:55.836 "r_mbytes_per_sec": 0, 00:35:55.836 "w_mbytes_per_sec": 0 00:35:55.836 }, 00:35:55.836 "claimed": true, 00:35:55.836 "claim_type": "exclusive_write", 00:35:55.836 "zoned": false, 00:35:55.836 "supported_io_types": { 00:35:55.836 "read": true, 00:35:55.836 "write": true, 00:35:55.836 "unmap": true, 00:35:55.836 "flush": true, 00:35:55.836 "reset": true, 00:35:55.836 "nvme_admin": false, 00:35:55.836 "nvme_io": false, 00:35:55.836 "nvme_io_md": false, 00:35:55.836 "write_zeroes": true, 00:35:55.836 "zcopy": true, 00:35:55.836 "get_zone_info": false, 00:35:55.836 "zone_management": false, 00:35:55.836 "zone_append": false, 00:35:55.836 "compare": false, 00:35:55.836 "compare_and_write": false, 00:35:55.836 "abort": true, 00:35:55.836 "seek_hole": false, 00:35:55.836 "seek_data": false, 00:35:55.836 "copy": true, 00:35:55.836 "nvme_iov_md": false 00:35:55.836 }, 00:35:55.836 "memory_domains": [ 00:35:55.836 { 00:35:55.836 "dma_device_id": "system", 00:35:55.836 "dma_device_type": 1 00:35:55.836 }, 00:35:55.836 { 00:35:55.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:55.836 "dma_device_type": 2 00:35:55.836 } 00:35:55.836 ], 00:35:55.836 "driver_specific": { 00:35:55.836 "passthru": { 00:35:55.836 "name": "pt2", 00:35:55.836 "base_bdev_name": "malloc2" 00:35:55.836 } 00:35:55.836 } 00:35:55.836 }' 00:35:55.836 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:55.836 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:55.836 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:35:55.836 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:55.836 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:55.836 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:55.836 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:55.836 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:56.095 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:56.095 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:56.095 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:56.095 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:56.095 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:56.095 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:35:56.354 [2024-07-25 19:02:56.708358] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:56.354 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=c7ab71ce-184b-4972-b4e8-a383d9e5897e 00:35:56.354 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' -z c7ab71ce-184b-4972-b4e8-a383d9e5897e ']' 00:35:56.355 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:56.614 [2024-07-25 19:02:56.956217] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:56.614 [2024-07-25 19:02:56.956404] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:56.614 [2024-07-25 19:02:56.956633] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:56.614 [2024-07-25 19:02:56.956788] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:56.614 [2024-07-25 19:02:56.956864] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:35:56.614 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:56.614 19:02:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:35:56.873 19:02:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:35:56.873 19:02:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:35:56.873 19:02:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:35:56.873 19:02:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:35:56.873 19:02:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:35:56.873 19:02:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:57.132 19:02:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:35:57.132 19:02:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:35:57.391 19:02:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:35:57.391 19:02:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:35:57.391 19:02:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:35:57.391 19:02:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:35:57.391 19:02:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:57.391 19:02:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:57.391 19:02:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:57.391 19:02:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:57.391 19:02:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:57.391 19:02:57 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:57.391 19:02:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:57.391 19:02:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:35:57.391 19:02:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:35:57.651 [2024-07-25 19:02:57.992350] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:35:57.651 [2024-07-25 19:02:57.994591] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:35:57.651 [2024-07-25 19:02:57.994786] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:35:57.651 [2024-07-25 19:02:57.995502] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:35:57.651 [2024-07-25 19:02:57.995762] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:57.651 [2024-07-25 19:02:57.995869] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state configuring 00:35:57.651 request: 00:35:57.651 { 00:35:57.651 "name": "raid_bdev1", 00:35:57.651 "raid_level": "raid1", 00:35:57.651 "base_bdevs": [ 00:35:57.651 "malloc1", 00:35:57.651 "malloc2" 00:35:57.651 ], 00:35:57.651 "superblock": false, 00:35:57.651 "method": "bdev_raid_create", 00:35:57.651 "req_id": 1 00:35:57.651 } 00:35:57.651 Got JSON-RPC error response 00:35:57.651 response: 00:35:57.651 { 00:35:57.651 "code": -17, 00:35:57.651 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:35:57.651 } 00:35:57.651 19:02:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:35:57.651 19:02:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:57.651 19:02:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:57.651 19:02:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:57.651 19:02:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:35:57.651 19:02:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:57.651 19:02:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:35:57.651 19:02:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:35:57.651 19:02:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:57.910 [2024-07-25 19:02:58.416433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:57.910 [2024-07-25 19:02:58.416840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:57.910 [2024-07-25 19:02:58.417108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:35:57.910 [2024-07-25 19:02:58.417334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:35:57.910 [2024-07-25 19:02:58.420107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:57.910 [2024-07-25 19:02:58.420390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:57.910 [2024-07-25 19:02:58.420711] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:57.910 [2024-07-25 19:02:58.420863] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:57.910 pt1 00:35:57.910 19:02:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:35:57.910 19:02:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:57.910 19:02:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:57.910 19:02:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:57.910 19:02:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:57.910 19:02:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:57.910 19:02:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:57.910 19:02:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:57.910 19:02:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:57.911 19:02:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:57.911 19:02:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:57.911 19:02:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:58.170 19:02:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:58.170 "name": "raid_bdev1", 00:35:58.170 "uuid": "c7ab71ce-184b-4972-b4e8-a383d9e5897e", 00:35:58.170 "strip_size_kb": 0, 00:35:58.170 "state": "configuring", 00:35:58.170 "raid_level": "raid1", 00:35:58.170 "superblock": true, 00:35:58.170 "num_base_bdevs": 2, 00:35:58.170 "num_base_bdevs_discovered": 1, 00:35:58.170 "num_base_bdevs_operational": 2, 00:35:58.170 "base_bdevs_list": [ 00:35:58.170 { 00:35:58.170 "name": "pt1", 00:35:58.170 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:58.170 "is_configured": true, 00:35:58.170 "data_offset": 256, 00:35:58.170 "data_size": 7936 00:35:58.170 }, 00:35:58.170 { 00:35:58.170 "name": null, 00:35:58.170 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:58.170 "is_configured": false, 00:35:58.170 "data_offset": 256, 00:35:58.170 "data_size": 7936 00:35:58.170 } 00:35:58.170 ] 00:35:58.170 }' 00:35:58.170 19:02:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:58.170 19:02:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:35:58.738 19:02:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:35:58.738 19:02:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:35:58.738 19:02:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:35:58.738 19:02:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@494 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:58.738 [2024-07-25 19:02:59.280883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:58.738 [2024-07-25 19:02:59.281579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:58.738 [2024-07-25 19:02:59.281867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:35:58.738 [2024-07-25 19:02:59.282104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:58.738 [2024-07-25 19:02:59.282828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:58.738 [2024-07-25 19:02:59.283082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:58.738 [2024-07-25 19:02:59.283401] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:58.738 [2024-07-25 19:02:59.283530] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:58.738 [2024-07-25 19:02:59.283717] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:35:58.738 [2024-07-25 19:02:59.283858] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:58.738 [2024-07-25 19:02:59.283993] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:35:58.738 [2024-07-25 19:02:59.284478] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:35:58.738 [2024-07-25 19:02:59.284573] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:35:58.738 [2024-07-25 19:02:59.284783] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:58.738 pt2 00:35:58.738 19:02:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:35:58.739 19:02:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:35:58.739 19:02:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:58.739 19:02:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:58.739 19:02:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:58.739 19:02:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:58.739 19:02:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:58.739 19:02:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:58.739 19:02:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:58.739 19:02:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:58.739 19:02:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:58.739 19:02:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:58.739 19:02:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:58.739 19:02:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:35:58.997 19:02:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:58.997 "name": "raid_bdev1", 00:35:58.997 "uuid": "c7ab71ce-184b-4972-b4e8-a383d9e5897e", 00:35:58.997 "strip_size_kb": 0, 00:35:58.997 "state": "online", 00:35:58.997 "raid_level": "raid1", 00:35:58.997 "superblock": true, 00:35:58.997 "num_base_bdevs": 2, 00:35:58.997 "num_base_bdevs_discovered": 2, 00:35:58.997 "num_base_bdevs_operational": 2, 00:35:58.997 "base_bdevs_list": [ 00:35:58.997 { 00:35:58.997 "name": "pt1", 00:35:58.997 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:58.997 "is_configured": true, 00:35:58.997 "data_offset": 256, 00:35:58.997 "data_size": 7936 00:35:58.997 }, 00:35:58.997 { 00:35:58.997 "name": "pt2", 00:35:58.997 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:58.997 "is_configured": true, 00:35:58.997 "data_offset": 256, 00:35:58.997 "data_size": 7936 00:35:58.998 } 00:35:58.998 ] 00:35:58.998 }' 00:35:58.998 19:02:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:58.998 19:02:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:35:59.565 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:35:59.566 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:35:59.566 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:35:59.566 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:35:59.566 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:35:59.566 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:35:59.566 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:59.566 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:35:59.825 [2024-07-25 19:03:00.257240] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:59.825 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:35:59.825 "name": "raid_bdev1", 00:35:59.825 "aliases": [ 00:35:59.825 "c7ab71ce-184b-4972-b4e8-a383d9e5897e" 00:35:59.825 ], 00:35:59.825 "product_name": "Raid Volume", 00:35:59.825 "block_size": 4096, 00:35:59.825 "num_blocks": 7936, 00:35:59.825 "uuid": "c7ab71ce-184b-4972-b4e8-a383d9e5897e", 00:35:59.825 "assigned_rate_limits": { 00:35:59.825 "rw_ios_per_sec": 0, 00:35:59.825 "rw_mbytes_per_sec": 0, 00:35:59.825 "r_mbytes_per_sec": 0, 00:35:59.825 "w_mbytes_per_sec": 0 00:35:59.825 }, 00:35:59.825 "claimed": false, 00:35:59.825 "zoned": false, 00:35:59.825 "supported_io_types": { 00:35:59.825 "read": true, 00:35:59.825 "write": true, 00:35:59.825 "unmap": false, 00:35:59.825 "flush": false, 00:35:59.825 "reset": true, 00:35:59.825 "nvme_admin": false, 00:35:59.825 "nvme_io": false, 00:35:59.825 "nvme_io_md": false, 00:35:59.825 "write_zeroes": true, 00:35:59.825 "zcopy": false, 00:35:59.825 "get_zone_info": false, 00:35:59.825 "zone_management": false, 00:35:59.825 "zone_append": false, 00:35:59.825 "compare": false, 00:35:59.825 "compare_and_write": false, 00:35:59.825 "abort": false, 00:35:59.825 "seek_hole": false, 00:35:59.825 "seek_data": false, 00:35:59.825 "copy": false, 
00:35:59.825 "nvme_iov_md": false 00:35:59.825 }, 00:35:59.825 "memory_domains": [ 00:35:59.825 { 00:35:59.825 "dma_device_id": "system", 00:35:59.825 "dma_device_type": 1 00:35:59.825 }, 00:35:59.825 { 00:35:59.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:59.825 "dma_device_type": 2 00:35:59.825 }, 00:35:59.825 { 00:35:59.825 "dma_device_id": "system", 00:35:59.825 "dma_device_type": 1 00:35:59.825 }, 00:35:59.825 { 00:35:59.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:59.825 "dma_device_type": 2 00:35:59.825 } 00:35:59.825 ], 00:35:59.825 "driver_specific": { 00:35:59.825 "raid": { 00:35:59.825 "uuid": "c7ab71ce-184b-4972-b4e8-a383d9e5897e", 00:35:59.825 "strip_size_kb": 0, 00:35:59.825 "state": "online", 00:35:59.825 "raid_level": "raid1", 00:35:59.825 "superblock": true, 00:35:59.825 "num_base_bdevs": 2, 00:35:59.825 "num_base_bdevs_discovered": 2, 00:35:59.825 "num_base_bdevs_operational": 2, 00:35:59.825 "base_bdevs_list": [ 00:35:59.825 { 00:35:59.825 "name": "pt1", 00:35:59.825 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:59.825 "is_configured": true, 00:35:59.825 "data_offset": 256, 00:35:59.825 "data_size": 7936 00:35:59.825 }, 00:35:59.825 { 00:35:59.825 "name": "pt2", 00:35:59.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:59.825 "is_configured": true, 00:35:59.825 "data_offset": 256, 00:35:59.825 "data_size": 7936 00:35:59.825 } 00:35:59.825 ] 00:35:59.825 } 00:35:59.825 } 00:35:59.825 }' 00:35:59.825 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:59.825 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:35:59.825 pt2' 00:35:59.825 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:59.825 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:35:59.825 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:00.084 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:00.084 "name": "pt1", 00:36:00.084 "aliases": [ 00:36:00.084 "00000000-0000-0000-0000-000000000001" 00:36:00.084 ], 00:36:00.084 "product_name": "passthru", 00:36:00.084 "block_size": 4096, 00:36:00.084 "num_blocks": 8192, 00:36:00.084 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:00.084 "assigned_rate_limits": { 00:36:00.084 "rw_ios_per_sec": 0, 00:36:00.084 "rw_mbytes_per_sec": 0, 00:36:00.084 "r_mbytes_per_sec": 0, 00:36:00.084 "w_mbytes_per_sec": 0 00:36:00.084 }, 00:36:00.084 "claimed": true, 00:36:00.084 "claim_type": "exclusive_write", 00:36:00.084 "zoned": false, 00:36:00.084 "supported_io_types": { 00:36:00.084 "read": true, 00:36:00.084 "write": true, 00:36:00.084 "unmap": true, 00:36:00.084 "flush": true, 00:36:00.084 "reset": true, 00:36:00.084 "nvme_admin": false, 00:36:00.084 "nvme_io": false, 00:36:00.084 "nvme_io_md": false, 00:36:00.084 "write_zeroes": true, 00:36:00.084 "zcopy": true, 00:36:00.084 "get_zone_info": false, 00:36:00.084 "zone_management": false, 00:36:00.084 "zone_append": false, 00:36:00.084 "compare": false, 00:36:00.084 "compare_and_write": false, 00:36:00.084 "abort": true, 00:36:00.084 "seek_hole": false, 00:36:00.084 "seek_data": false, 00:36:00.084 "copy": true, 00:36:00.084 "nvme_iov_md": false 00:36:00.084 }, 00:36:00.084 
"memory_domains": [ 00:36:00.084 { 00:36:00.084 "dma_device_id": "system", 00:36:00.084 "dma_device_type": 1 00:36:00.084 }, 00:36:00.084 { 00:36:00.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:00.084 "dma_device_type": 2 00:36:00.084 } 00:36:00.084 ], 00:36:00.084 "driver_specific": { 00:36:00.084 "passthru": { 00:36:00.084 "name": "pt1", 00:36:00.084 "base_bdev_name": "malloc1" 00:36:00.084 } 00:36:00.084 } 00:36:00.084 }' 00:36:00.084 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:00.084 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:00.084 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:00.084 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:00.084 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:00.343 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:00.343 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:00.343 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:00.343 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:00.343 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:00.343 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:00.343 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:00.343 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:00.343 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:00.343 19:03:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:36:00.602 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:00.602 "name": "pt2", 00:36:00.602 "aliases": [ 00:36:00.602 "00000000-0000-0000-0000-000000000002" 00:36:00.602 ], 00:36:00.602 "product_name": "passthru", 00:36:00.602 "block_size": 4096, 00:36:00.602 "num_blocks": 8192, 00:36:00.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:00.602 "assigned_rate_limits": { 00:36:00.602 "rw_ios_per_sec": 0, 00:36:00.602 "rw_mbytes_per_sec": 0, 00:36:00.602 "r_mbytes_per_sec": 0, 00:36:00.602 "w_mbytes_per_sec": 0 00:36:00.602 }, 00:36:00.602 "claimed": true, 00:36:00.602 "claim_type": "exclusive_write", 00:36:00.602 "zoned": false, 00:36:00.602 "supported_io_types": { 00:36:00.602 "read": true, 00:36:00.602 "write": true, 00:36:00.602 "unmap": true, 00:36:00.602 "flush": true, 00:36:00.602 "reset": true, 00:36:00.602 "nvme_admin": false, 00:36:00.602 "nvme_io": false, 00:36:00.602 "nvme_io_md": false, 00:36:00.602 "write_zeroes": true, 00:36:00.602 "zcopy": true, 00:36:00.602 "get_zone_info": false, 00:36:00.602 "zone_management": false, 00:36:00.602 "zone_append": false, 00:36:00.602 "compare": false, 00:36:00.602 "compare_and_write": false, 00:36:00.602 "abort": true, 00:36:00.602 "seek_hole": false, 00:36:00.602 "seek_data": false, 00:36:00.602 "copy": true, 00:36:00.602 "nvme_iov_md": false 00:36:00.602 }, 00:36:00.602 "memory_domains": [ 00:36:00.602 { 00:36:00.602 "dma_device_id": "system", 00:36:00.602 
"dma_device_type": 1 00:36:00.602 }, 00:36:00.602 { 00:36:00.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:00.602 "dma_device_type": 2 00:36:00.602 } 00:36:00.602 ], 00:36:00.602 "driver_specific": { 00:36:00.602 "passthru": { 00:36:00.602 "name": "pt2", 00:36:00.602 "base_bdev_name": "malloc2" 00:36:00.602 } 00:36:00.602 } 00:36:00.602 }' 00:36:00.602 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:00.602 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:00.861 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:00.861 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:00.861 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:00.861 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:00.861 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:00.861 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:00.861 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:00.861 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:00.861 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:01.120 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:01.120 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:01.120 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:36:01.380 [2024-07-25 19:03:01.705506] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:01.380 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # '[' c7ab71ce-184b-4972-b4e8-a383d9e5897e '!=' c7ab71ce-184b-4972-b4e8-a383d9e5897e ']' 00:36:01.380 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:36:01.380 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:36:01.380 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:36:01.380 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:36:01.380 [2024-07-25 19:03:01.881361] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:36:01.380 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:01.380 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:01.380 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:01.380 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:01.380 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:01.380 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:01.380 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:36:01.380 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:01.380 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:01.380 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:01.380 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:01.380 19:03:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:01.639 19:03:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:01.639 "name": "raid_bdev1", 00:36:01.639 "uuid": "c7ab71ce-184b-4972-b4e8-a383d9e5897e", 00:36:01.639 "strip_size_kb": 0, 00:36:01.639 "state": "online", 00:36:01.639 "raid_level": "raid1", 00:36:01.639 "superblock": true, 00:36:01.639 "num_base_bdevs": 2, 00:36:01.639 "num_base_bdevs_discovered": 1, 00:36:01.639 "num_base_bdevs_operational": 1, 00:36:01.639 "base_bdevs_list": [ 00:36:01.639 { 00:36:01.639 "name": null, 00:36:01.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:01.639 "is_configured": false, 00:36:01.639 "data_offset": 256, 00:36:01.639 "data_size": 7936 00:36:01.639 }, 00:36:01.639 { 00:36:01.639 "name": "pt2", 00:36:01.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:01.639 "is_configured": true, 00:36:01.639 "data_offset": 256, 00:36:01.639 "data_size": 7936 00:36:01.639 } 00:36:01.639 ] 00:36:01.639 }' 00:36:01.639 19:03:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:01.639 19:03:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:02.208 19:03:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:02.208 [2024-07-25 19:03:02.781421] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:02.208 [2024-07-25 19:03:02.781540] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:02.208 [2024-07-25 19:03:02.781795] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:02.208 [2024-07-25 19:03:02.781924] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:02.208 [2024-07-25 19:03:02.781997] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:36:02.468 19:03:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:02.468 19:03:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:36:02.468 19:03:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:36:02.468 19:03:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:36:02.468 19:03:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:36:02.468 19:03:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:36:02.468 19:03:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:36:02.727 19:03:03 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:36:02.727 19:03:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:36:02.727 19:03:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:36:02.727 19:03:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:36:02.727 19:03:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@534 -- # i=1 00:36:02.727 19:03:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:02.986 [2024-07-25 19:03:03.369526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:02.986 [2024-07-25 19:03:03.370333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:02.986 [2024-07-25 19:03:03.370594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:36:02.986 [2024-07-25 19:03:03.370816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:02.986 [2024-07-25 19:03:03.373592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:02.986 [2024-07-25 19:03:03.373892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:02.986 [2024-07-25 19:03:03.374233] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:02.986 [2024-07-25 19:03:03.374416] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:02.986 [2024-07-25 19:03:03.374681] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:36:02.986 [2024-07-25 19:03:03.374785] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:02.986 [2024-07-25 19:03:03.374905] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:36:02.986 pt2 00:36:02.986 [2024-07-25 19:03:03.375346] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:36:02.986 [2024-07-25 19:03:03.375442] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013480 00:36:02.986 [2024-07-25 19:03:03.375648] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:02.986 19:03:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:02.986 19:03:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:02.986 19:03:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:02.986 19:03:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:02.986 19:03:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:02.986 19:03:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:02.986 19:03:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:02.986 19:03:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:02.986 19:03:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:02.986 19:03:03 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:02.986 19:03:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:02.986 19:03:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:03.244 19:03:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:03.244 "name": "raid_bdev1", 00:36:03.244 "uuid": "c7ab71ce-184b-4972-b4e8-a383d9e5897e", 00:36:03.244 "strip_size_kb": 0, 00:36:03.244 "state": "online", 00:36:03.244 "raid_level": "raid1", 00:36:03.244 "superblock": true, 00:36:03.244 "num_base_bdevs": 2, 00:36:03.244 "num_base_bdevs_discovered": 1, 00:36:03.244 "num_base_bdevs_operational": 1, 00:36:03.244 "base_bdevs_list": [ 00:36:03.244 { 00:36:03.244 "name": null, 00:36:03.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:03.244 "is_configured": false, 00:36:03.244 "data_offset": 256, 00:36:03.244 "data_size": 7936 00:36:03.244 }, 00:36:03.244 { 00:36:03.244 "name": "pt2", 00:36:03.244 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:03.244 "is_configured": true, 00:36:03.244 "data_offset": 256, 00:36:03.244 "data_size": 7936 00:36:03.244 } 00:36:03.244 ] 00:36:03.244 }' 00:36:03.244 19:03:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:03.244 19:03:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:03.810 19:03:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:03.810 [2024-07-25 19:03:04.366301] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:03.810 [2024-07-25 19:03:04.366450] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:03.810 [2024-07-25 19:03:04.366604] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:03.810 [2024-07-25 19:03:04.366741] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:03.810 [2024-07-25 19:03:04.366818] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name raid_bdev1, state offline 00:36:03.810 19:03:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:36:03.810 19:03:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:04.068 19:03:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:36:04.068 19:03:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:36:04.068 19:03:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:36:04.068 19:03:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:04.326 [2024-07-25 19:03:04.790351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:04.326 [2024-07-25 19:03:04.790532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:04.326 [2024-07-25 19:03:04.790605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000009380 00:36:04.326 [2024-07-25 19:03:04.790692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:04.326 [2024-07-25 19:03:04.792993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:04.326 [2024-07-25 19:03:04.793166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:04.326 [2024-07-25 19:03:04.793353] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:36:04.326 [2024-07-25 19:03:04.793468] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:04.326 [2024-07-25 19:03:04.793639] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:36:04.326 [2024-07-25 19:03:04.793730] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:04.326 [2024-07-25 19:03:04.793786] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013800 name raid_bdev1, state configuring 00:36:04.327 [2024-07-25 19:03:04.793964] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:04.327 [2024-07-25 19:03:04.794068] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013b80 00:36:04.327 [2024-07-25 19:03:04.794157] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:04.327 [2024-07-25 19:03:04.794271] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:36:04.327 [2024-07-25 19:03:04.794671] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013b80 00:36:04.327 [2024-07-25 19:03:04.794778] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013b80 00:36:04.327 [2024-07-25 19:03:04.795020] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:04.327 pt1 00:36:04.327 19:03:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:36:04.327 19:03:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:04.327 19:03:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:04.327 19:03:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:04.327 19:03:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:04.327 19:03:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:04.327 19:03:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:04.327 19:03:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:04.327 19:03:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:04.327 19:03:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:04.327 19:03:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:04.327 19:03:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:04.327 19:03:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
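The sequence above exercises re-assembly from the on-disk superblock: raid_bdev1 is deleted, the passthru bdevs are recreated on top of their malloc bdevs, and the examine path finds the raid superblock and brings raid_bdev1 back online in a degraded state with a single discovered base bdev, preferring the superblock with the higher sequence number when the two disagree (pt2's seq_number 4 over the existing 2, as logged above). The state verification is again a jq query over bdev_raid_get_bdevs; a minimal sketch of the same check, assuming the RPC socket used in this run:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'

For the degraded assembly verified here this prints "online 1/1" (num_base_bdevs itself stays 2), matching the raid_bdev_info dump that follows.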
00:36:04.585 19:03:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:04.585 "name": "raid_bdev1", 00:36:04.585 "uuid": "c7ab71ce-184b-4972-b4e8-a383d9e5897e", 00:36:04.585 "strip_size_kb": 0, 00:36:04.585 "state": "online", 00:36:04.585 "raid_level": "raid1", 00:36:04.585 "superblock": true, 00:36:04.585 "num_base_bdevs": 2, 00:36:04.585 "num_base_bdevs_discovered": 1, 00:36:04.585 "num_base_bdevs_operational": 1, 00:36:04.585 "base_bdevs_list": [ 00:36:04.585 { 00:36:04.585 "name": null, 00:36:04.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:04.585 "is_configured": false, 00:36:04.585 "data_offset": 256, 00:36:04.585 "data_size": 7936 00:36:04.585 }, 00:36:04.585 { 00:36:04.585 "name": "pt2", 00:36:04.585 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:04.585 "is_configured": true, 00:36:04.585 "data_offset": 256, 00:36:04.585 "data_size": 7936 00:36:04.585 } 00:36:04.585 ] 00:36:04.585 }' 00:36:04.585 19:03:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:04.585 19:03:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:05.151 19:03:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:36:05.151 19:03:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:36:05.409 19:03:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:36:05.409 19:03:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:36:05.409 19:03:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:05.668 [2024-07-25 19:03:06.079349] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:05.668 19:03:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # '[' c7ab71ce-184b-4972-b4e8-a383d9e5897e '!=' c7ab71ce-184b-4972-b4e8-a383d9e5897e ']' 00:36:05.668 19:03:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@578 -- # killprocess 158211 00:36:05.668 19:03:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 158211 ']' 00:36:05.668 19:03:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 158211 00:36:05.668 19:03:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:36:05.668 19:03:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:05.668 19:03:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 158211 00:36:05.668 19:03:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:05.668 19:03:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:05.668 19:03:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 158211' 00:36:05.668 killing process with pid 158211 00:36:05.668 19:03:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 158211 00:36:05.668 [2024-07-25 19:03:06.134526] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:05.668 19:03:06 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@974 -- # wait 158211 00:36:05.668 [2024-07-25 19:03:06.134697] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:05.668 [2024-07-25 19:03:06.134744] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:05.668 [2024-07-25 19:03:06.134752] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013b80 name raid_bdev1, state offline 00:36:05.927 [2024-07-25 19:03:06.287519] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:06.863 ************************************ 00:36:06.863 END TEST raid_superblock_test_4k 00:36:06.863 ************************************ 00:36:06.863 19:03:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@580 -- # return 0 00:36:06.863 00:36:06.863 real 0m14.801s 00:36:06.863 user 0m25.736s 00:36:06.863 sys 0m2.768s 00:36:06.863 19:03:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:06.863 19:03:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:06.863 19:03:07 bdev_raid -- bdev/bdev_raid.sh@980 -- # '[' true = true ']' 00:36:06.863 19:03:07 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:36:06.863 19:03:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:36:06.863 19:03:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:06.863 19:03:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:06.863 ************************************ 00:36:06.863 START TEST raid_rebuild_test_sb_4k 00:36:06.863 ************************************ 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # local verify=true 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # echo BaseBdev1 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # echo BaseBdev2 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@590 -- # local 
raid_bdev_name=raid_bdev1 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # local strip_size 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # local create_arg 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@594 -- # local data_offset 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # raid_pid=158722 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # waitforlisten 158722 /var/tmp/spdk-raid.sock 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 158722 ']' 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:06.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:06.863 19:03:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:07.121 [2024-07-25 19:03:07.480667] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:36:07.121 [2024-07-25 19:03:07.481096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158722 ] 00:36:07.121 I/O size of 3145728 is greater than zero copy threshold (65536). 00:36:07.121 Zero copy mechanism will not be used. 
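The rebuild test drives everything through a long-lived bdevperf process acting as the RPC target: the raid bdev is assembled, degraded, and rebuilt purely via rpc.py calls against /var/tmp/spdk-raid.sock. Reduced to its essentials, the launch recorded above amounts to the following sketch (the backgrounding and pid capture are implied by the raid_pid handling in the trace rather than shown verbatim):

  # Start bdevperf in wait mode (-z) so bdevs can be configured over RPC first;
  # -L bdev_raid enables debug logging for the raid module, -T names the target bdev.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
      -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!

  # waitforlisten (helper from the suite's autotest_common.sh) polls until the
  # application is up and listening on the UNIX-domain RPC socket.
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock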
00:36:07.121 [2024-07-25 19:03:07.673294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:07.687 [2024-07-25 19:03:07.996173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:07.945 [2024-07-25 19:03:08.269734] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:07.945 19:03:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:07.945 19:03:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:36:07.945 19:03:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:36:07.945 19:03:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:36:08.202 BaseBdev1_malloc 00:36:08.202 19:03:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:08.459 [2024-07-25 19:03:08.814864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:36:08.459 [2024-07-25 19:03:08.815773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:08.459 [2024-07-25 19:03:08.816031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:36:08.459 [2024-07-25 19:03:08.816253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:08.459 [2024-07-25 19:03:08.819139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:08.459 [2024-07-25 19:03:08.819390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:08.459 BaseBdev1 00:36:08.459 19:03:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:36:08.459 19:03:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:36:08.718 BaseBdev2_malloc 00:36:08.718 19:03:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:36:08.718 [2024-07-25 19:03:09.233026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:36:08.718 [2024-07-25 19:03:09.233540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:08.718 [2024-07-25 19:03:09.233792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:36:08.718 [2024-07-25 19:03:09.234020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:08.718 [2024-07-25 19:03:09.236730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:08.718 [2024-07-25 19:03:09.236963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:08.718 BaseBdev2 00:36:08.718 19:03:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:36:08.975 spare_malloc 00:36:08.975 19:03:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:36:09.232 spare_delay 00:36:09.232 19:03:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:09.232 [2024-07-25 19:03:09.810880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:09.232 [2024-07-25 19:03:09.811420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:09.232 [2024-07-25 19:03:09.811677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:36:09.232 [2024-07-25 19:03:09.811897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:09.489 [2024-07-25 19:03:09.814682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:09.489 [2024-07-25 19:03:09.814939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:09.489 spare 00:36:09.489 19:03:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:36:09.489 [2024-07-25 19:03:09.987352] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:09.489 [2024-07-25 19:03:09.989601] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:09.489 [2024-07-25 19:03:09.989905] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:36:09.489 [2024-07-25 19:03:09.990004] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:09.490 [2024-07-25 19:03:09.990172] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:36:09.490 [2024-07-25 19:03:09.990660] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:36:09.490 [2024-07-25 19:03:09.990756] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:36:09.490 [2024-07-25 19:03:09.990995] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:09.490 19:03:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:09.490 19:03:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:09.490 19:03:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:09.490 19:03:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:09.490 19:03:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:09.490 19:03:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:09.490 19:03:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:09.490 19:03:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:09.490 19:03:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:09.490 19:03:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:09.490 19:03:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:09.490 19:03:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:09.747 19:03:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:09.747 "name": "raid_bdev1", 00:36:09.747 "uuid": "1f5349b6-6e46-4174-96d3-5a8a5673735d", 00:36:09.747 "strip_size_kb": 0, 00:36:09.747 "state": "online", 00:36:09.747 "raid_level": "raid1", 00:36:09.747 "superblock": true, 00:36:09.747 "num_base_bdevs": 2, 00:36:09.747 "num_base_bdevs_discovered": 2, 00:36:09.747 "num_base_bdevs_operational": 2, 00:36:09.747 "base_bdevs_list": [ 00:36:09.747 { 00:36:09.747 "name": "BaseBdev1", 00:36:09.747 "uuid": "7ea8b458-fa1c-5bed-a269-4c385c20342f", 00:36:09.747 "is_configured": true, 00:36:09.747 "data_offset": 256, 00:36:09.747 "data_size": 7936 00:36:09.747 }, 00:36:09.747 { 00:36:09.747 "name": "BaseBdev2", 00:36:09.747 "uuid": "893022c0-42d9-5cf4-a67d-5c28d55598fd", 00:36:09.747 "is_configured": true, 00:36:09.747 "data_offset": 256, 00:36:09.747 "data_size": 7936 00:36:09.747 } 00:36:09.747 ] 00:36:09.747 }' 00:36:09.747 19:03:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:09.747 19:03:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:10.312 19:03:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:10.312 19:03:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:36:10.312 [2024-07-25 19:03:10.827627] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:10.312 19:03:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:36:10.312 19:03:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:10.312 19:03:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:36:10.571 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:36:10.571 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:36:10.571 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:36:10.571 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:36:10.571 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:36:10.571 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:10.571 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:36:10.571 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:10.571 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:36:10.572 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:10.572 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:36:10.572 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 
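At this point the whole stack under test has been assembled over that socket: two malloc bdevs wrapped in passthru bdevs serve as the raid1 members, a third malloc sits behind a delay bdev plus a passthru named "spare" and is held back for the rebuild, and the array is created with an on-disk superblock (-s), which is why 256 blocks of each member are reserved as the data offset. Condensed from the RPC calls recorded above (the RPC= shorthand exists only in this sketch):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # raid1 members: 32 MB malloc bdevs with 4096-byte blocks behind passthru bdevs
  $RPC bdev_malloc_create 32 4096 -b BaseBdev1_malloc
  $RPC bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
  $RPC bdev_malloc_create 32 4096 -b BaseBdev2_malloc
  $RPC bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2

  # spare for the rebuild: malloc behind a delay bdev, exposed as passthru "spare"
  $RPC bdev_malloc_create 32 4096 -b spare_malloc
  $RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  $RPC bdev_passthru_create -b spare_delay -p spare

  # raid1 with superblock; reported size is 7936 blocks with a 256-block data offset
  $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'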
00:36:10.572 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:10.572 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:36:10.830 [2024-07-25 19:03:11.339653] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:36:10.830 /dev/nbd0 00:36:10.830 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:10.830 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:10.830 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:36:10.830 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:36:10.830 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:36:10.830 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:36:10.830 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:36:10.830 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:36:10.830 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:36:10.830 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:36:10.830 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:10.830 1+0 records in 00:36:10.830 1+0 records out 00:36:10.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488014 s, 8.4 MB/s 00:36:10.830 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:10.830 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:36:10.830 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:10.830 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:36:10.830 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:36:10.830 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:10.830 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:10.830 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:36:10.830 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:36:10.830 19:03:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:36:11.824 7936+0 records in 00:36:11.824 7936+0 records out 00:36:11.824 32505856 bytes (33 MB, 31 MiB) copied, 0.668499 s, 48.6 MB/s 00:36:11.824 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:36:11.824 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:11.824 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:36:11.824 19:03:12 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:11.824 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:36:11.824 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:11.824 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:36:11.824 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:11.824 [2024-07-25 19:03:12.320425] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:11.824 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:11.824 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:11.824 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:11.824 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:11.824 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:11.824 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:36:11.824 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:36:11.824 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:36:12.082 [2024-07-25 19:03:12.532114] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:12.082 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:12.082 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:12.082 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:12.082 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:12.082 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:12.082 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:12.082 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:12.082 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:12.082 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:12.082 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:12.082 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:12.082 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:12.340 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:12.340 "name": "raid_bdev1", 00:36:12.340 "uuid": "1f5349b6-6e46-4174-96d3-5a8a5673735d", 00:36:12.340 "strip_size_kb": 0, 00:36:12.340 "state": "online", 00:36:12.340 "raid_level": "raid1", 00:36:12.340 "superblock": true, 00:36:12.340 "num_base_bdevs": 2, 00:36:12.340 "num_base_bdevs_discovered": 
1, 00:36:12.340 "num_base_bdevs_operational": 1, 00:36:12.340 "base_bdevs_list": [ 00:36:12.340 { 00:36:12.340 "name": null, 00:36:12.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:12.340 "is_configured": false, 00:36:12.340 "data_offset": 256, 00:36:12.340 "data_size": 7936 00:36:12.340 }, 00:36:12.340 { 00:36:12.340 "name": "BaseBdev2", 00:36:12.340 "uuid": "893022c0-42d9-5cf4-a67d-5c28d55598fd", 00:36:12.340 "is_configured": true, 00:36:12.340 "data_offset": 256, 00:36:12.340 "data_size": 7936 00:36:12.340 } 00:36:12.340 ] 00:36:12.340 }' 00:36:12.340 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:12.340 19:03:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:12.906 19:03:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:12.906 [2024-07-25 19:03:13.436244] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:12.906 [2024-07-25 19:03:13.456559] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018cff0 00:36:12.906 [2024-07-25 19:03:13.458983] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:12.906 19:03:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:36:14.279 19:03:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:14.279 19:03:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:14.279 19:03:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:14.279 19:03:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:14.279 19:03:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:14.279 19:03:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:14.279 19:03:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:14.279 19:03:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:14.279 "name": "raid_bdev1", 00:36:14.279 "uuid": "1f5349b6-6e46-4174-96d3-5a8a5673735d", 00:36:14.279 "strip_size_kb": 0, 00:36:14.279 "state": "online", 00:36:14.279 "raid_level": "raid1", 00:36:14.279 "superblock": true, 00:36:14.279 "num_base_bdevs": 2, 00:36:14.279 "num_base_bdevs_discovered": 2, 00:36:14.279 "num_base_bdevs_operational": 2, 00:36:14.279 "process": { 00:36:14.279 "type": "rebuild", 00:36:14.279 "target": "spare", 00:36:14.279 "progress": { 00:36:14.279 "blocks": 3072, 00:36:14.279 "percent": 38 00:36:14.279 } 00:36:14.279 }, 00:36:14.279 "base_bdevs_list": [ 00:36:14.279 { 00:36:14.279 "name": "spare", 00:36:14.279 "uuid": "c92c64e1-6e56-5e1d-a111-58533f6febd6", 00:36:14.279 "is_configured": true, 00:36:14.279 "data_offset": 256, 00:36:14.279 "data_size": 7936 00:36:14.279 }, 00:36:14.279 { 00:36:14.279 "name": "BaseBdev2", 00:36:14.279 "uuid": "893022c0-42d9-5cf4-a67d-5c28d55598fd", 00:36:14.279 "is_configured": true, 00:36:14.280 "data_offset": 256, 00:36:14.280 "data_size": 7936 00:36:14.280 } 00:36:14.280 ] 00:36:14.280 }' 00:36:14.280 19:03:14 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:14.280 19:03:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:14.280 19:03:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:14.280 19:03:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:14.280 19:03:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:36:14.538 [2024-07-25 19:03:15.064636] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:14.538 [2024-07-25 19:03:15.071366] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:14.538 [2024-07-25 19:03:15.071915] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:14.538 [2024-07-25 19:03:15.071947] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:14.538 [2024-07-25 19:03:15.071958] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:14.795 19:03:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:14.795 19:03:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:14.796 19:03:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:14.796 19:03:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:14.796 19:03:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:14.796 19:03:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:14.796 19:03:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:14.796 19:03:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:14.796 19:03:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:14.796 19:03:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:14.796 19:03:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:14.796 19:03:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:14.796 19:03:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:14.796 "name": "raid_bdev1", 00:36:14.796 "uuid": "1f5349b6-6e46-4174-96d3-5a8a5673735d", 00:36:14.796 "strip_size_kb": 0, 00:36:14.796 "state": "online", 00:36:14.796 "raid_level": "raid1", 00:36:14.796 "superblock": true, 00:36:14.796 "num_base_bdevs": 2, 00:36:14.796 "num_base_bdevs_discovered": 1, 00:36:14.796 "num_base_bdevs_operational": 1, 00:36:14.796 "base_bdevs_list": [ 00:36:14.796 { 00:36:14.796 "name": null, 00:36:14.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:14.796 "is_configured": false, 00:36:14.796 "data_offset": 256, 00:36:14.796 "data_size": 7936 00:36:14.796 }, 00:36:14.796 { 00:36:14.796 "name": "BaseBdev2", 00:36:14.796 "uuid": "893022c0-42d9-5cf4-a67d-5c28d55598fd", 00:36:14.796 
"is_configured": true, 00:36:14.796 "data_offset": 256, 00:36:14.796 "data_size": 7936 00:36:14.796 } 00:36:14.796 ] 00:36:14.796 }' 00:36:14.796 19:03:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:14.796 19:03:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:15.363 19:03:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:15.363 19:03:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:15.363 19:03:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:15.363 19:03:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:15.363 19:03:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:15.363 19:03:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:15.363 19:03:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:15.623 19:03:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:15.623 "name": "raid_bdev1", 00:36:15.623 "uuid": "1f5349b6-6e46-4174-96d3-5a8a5673735d", 00:36:15.623 "strip_size_kb": 0, 00:36:15.623 "state": "online", 00:36:15.623 "raid_level": "raid1", 00:36:15.623 "superblock": true, 00:36:15.623 "num_base_bdevs": 2, 00:36:15.623 "num_base_bdevs_discovered": 1, 00:36:15.623 "num_base_bdevs_operational": 1, 00:36:15.623 "base_bdevs_list": [ 00:36:15.623 { 00:36:15.623 "name": null, 00:36:15.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:15.623 "is_configured": false, 00:36:15.623 "data_offset": 256, 00:36:15.623 "data_size": 7936 00:36:15.623 }, 00:36:15.623 { 00:36:15.623 "name": "BaseBdev2", 00:36:15.623 "uuid": "893022c0-42d9-5cf4-a67d-5c28d55598fd", 00:36:15.623 "is_configured": true, 00:36:15.623 "data_offset": 256, 00:36:15.623 "data_size": 7936 00:36:15.623 } 00:36:15.623 ] 00:36:15.623 }' 00:36:15.623 19:03:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:15.623 19:03:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:15.623 19:03:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:15.623 19:03:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:15.623 19:03:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:15.881 [2024-07-25 19:03:16.355333] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:15.881 [2024-07-25 19:03:16.373938] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:36:15.881 [2024-07-25 19:03:16.376139] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:15.881 19:03:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@678 -- # sleep 1 00:36:16.816 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:16.816 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:36:16.816 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:16.816 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:16.816 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:17.074 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:17.075 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:17.075 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:17.075 "name": "raid_bdev1", 00:36:17.075 "uuid": "1f5349b6-6e46-4174-96d3-5a8a5673735d", 00:36:17.075 "strip_size_kb": 0, 00:36:17.075 "state": "online", 00:36:17.075 "raid_level": "raid1", 00:36:17.075 "superblock": true, 00:36:17.075 "num_base_bdevs": 2, 00:36:17.075 "num_base_bdevs_discovered": 2, 00:36:17.075 "num_base_bdevs_operational": 2, 00:36:17.075 "process": { 00:36:17.075 "type": "rebuild", 00:36:17.075 "target": "spare", 00:36:17.075 "progress": { 00:36:17.075 "blocks": 3072, 00:36:17.075 "percent": 38 00:36:17.075 } 00:36:17.075 }, 00:36:17.075 "base_bdevs_list": [ 00:36:17.075 { 00:36:17.075 "name": "spare", 00:36:17.075 "uuid": "c92c64e1-6e56-5e1d-a111-58533f6febd6", 00:36:17.075 "is_configured": true, 00:36:17.075 "data_offset": 256, 00:36:17.075 "data_size": 7936 00:36:17.075 }, 00:36:17.075 { 00:36:17.075 "name": "BaseBdev2", 00:36:17.075 "uuid": "893022c0-42d9-5cf4-a67d-5c28d55598fd", 00:36:17.075 "is_configured": true, 00:36:17.075 "data_offset": 256, 00:36:17.075 "data_size": 7936 00:36:17.075 } 00:36:17.075 ] 00:36:17.075 }' 00:36:17.075 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:17.333 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:17.333 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:17.333 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:17.333 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:36:17.333 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:36:17.333 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:36:17.333 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:36:17.333 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:36:17.333 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:36:17.333 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # local timeout=1334 00:36:17.333 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:36:17.333 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:17.333 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:17.333 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:17.333 19:03:17 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:17.333 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:17.333 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:17.333 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:17.333 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:17.333 "name": "raid_bdev1", 00:36:17.333 "uuid": "1f5349b6-6e46-4174-96d3-5a8a5673735d", 00:36:17.333 "strip_size_kb": 0, 00:36:17.333 "state": "online", 00:36:17.333 "raid_level": "raid1", 00:36:17.333 "superblock": true, 00:36:17.333 "num_base_bdevs": 2, 00:36:17.333 "num_base_bdevs_discovered": 2, 00:36:17.333 "num_base_bdevs_operational": 2, 00:36:17.333 "process": { 00:36:17.333 "type": "rebuild", 00:36:17.333 "target": "spare", 00:36:17.333 "progress": { 00:36:17.333 "blocks": 3584, 00:36:17.333 "percent": 45 00:36:17.333 } 00:36:17.333 }, 00:36:17.333 "base_bdevs_list": [ 00:36:17.333 { 00:36:17.333 "name": "spare", 00:36:17.333 "uuid": "c92c64e1-6e56-5e1d-a111-58533f6febd6", 00:36:17.333 "is_configured": true, 00:36:17.333 "data_offset": 256, 00:36:17.333 "data_size": 7936 00:36:17.333 }, 00:36:17.333 { 00:36:17.333 "name": "BaseBdev2", 00:36:17.333 "uuid": "893022c0-42d9-5cf4-a67d-5c28d55598fd", 00:36:17.333 "is_configured": true, 00:36:17.333 "data_offset": 256, 00:36:17.333 "data_size": 7936 00:36:17.333 } 00:36:17.333 ] 00:36:17.333 }' 00:36:17.333 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:17.591 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:17.591 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:17.591 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:17.591 19:03:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@726 -- # sleep 1 00:36:18.526 19:03:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:36:18.526 19:03:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:18.526 19:03:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:18.526 19:03:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:18.526 19:03:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:18.526 19:03:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:18.526 19:03:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:18.526 19:03:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:18.785 19:03:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:18.785 "name": "raid_bdev1", 00:36:18.785 "uuid": "1f5349b6-6e46-4174-96d3-5a8a5673735d", 00:36:18.785 "strip_size_kb": 0, 00:36:18.785 "state": "online", 00:36:18.785 "raid_level": "raid1", 00:36:18.785 
"superblock": true, 00:36:18.785 "num_base_bdevs": 2, 00:36:18.785 "num_base_bdevs_discovered": 2, 00:36:18.785 "num_base_bdevs_operational": 2, 00:36:18.785 "process": { 00:36:18.785 "type": "rebuild", 00:36:18.785 "target": "spare", 00:36:18.785 "progress": { 00:36:18.785 "blocks": 7168, 00:36:18.785 "percent": 90 00:36:18.785 } 00:36:18.785 }, 00:36:18.785 "base_bdevs_list": [ 00:36:18.785 { 00:36:18.785 "name": "spare", 00:36:18.785 "uuid": "c92c64e1-6e56-5e1d-a111-58533f6febd6", 00:36:18.785 "is_configured": true, 00:36:18.785 "data_offset": 256, 00:36:18.785 "data_size": 7936 00:36:18.785 }, 00:36:18.785 { 00:36:18.785 "name": "BaseBdev2", 00:36:18.785 "uuid": "893022c0-42d9-5cf4-a67d-5c28d55598fd", 00:36:18.785 "is_configured": true, 00:36:18.785 "data_offset": 256, 00:36:18.785 "data_size": 7936 00:36:18.785 } 00:36:18.785 ] 00:36:18.785 }' 00:36:18.785 19:03:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:18.785 19:03:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:18.785 19:03:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:18.785 19:03:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:18.785 19:03:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@726 -- # sleep 1 00:36:19.044 [2024-07-25 19:03:19.499318] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:36:19.044 [2024-07-25 19:03:19.499385] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:36:19.044 [2024-07-25 19:03:19.500121] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:19.982 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:36:19.982 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:19.982 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:19.982 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:19.982 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:19.982 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:19.982 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:19.982 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:20.242 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:20.242 "name": "raid_bdev1", 00:36:20.242 "uuid": "1f5349b6-6e46-4174-96d3-5a8a5673735d", 00:36:20.242 "strip_size_kb": 0, 00:36:20.242 "state": "online", 00:36:20.242 "raid_level": "raid1", 00:36:20.242 "superblock": true, 00:36:20.242 "num_base_bdevs": 2, 00:36:20.242 "num_base_bdevs_discovered": 2, 00:36:20.242 "num_base_bdevs_operational": 2, 00:36:20.242 "base_bdevs_list": [ 00:36:20.242 { 00:36:20.242 "name": "spare", 00:36:20.242 "uuid": "c92c64e1-6e56-5e1d-a111-58533f6febd6", 00:36:20.242 "is_configured": true, 00:36:20.242 "data_offset": 256, 00:36:20.242 "data_size": 7936 00:36:20.242 }, 00:36:20.242 { 00:36:20.242 
"name": "BaseBdev2", 00:36:20.242 "uuid": "893022c0-42d9-5cf4-a67d-5c28d55598fd", 00:36:20.242 "is_configured": true, 00:36:20.242 "data_offset": 256, 00:36:20.242 "data_size": 7936 00:36:20.242 } 00:36:20.242 ] 00:36:20.242 }' 00:36:20.242 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:20.242 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:36:20.242 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:20.242 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:36:20.242 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@724 -- # break 00:36:20.242 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:20.242 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:20.242 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:20.242 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:20.242 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:20.242 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:20.242 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:20.500 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:20.500 "name": "raid_bdev1", 00:36:20.500 "uuid": "1f5349b6-6e46-4174-96d3-5a8a5673735d", 00:36:20.500 "strip_size_kb": 0, 00:36:20.500 "state": "online", 00:36:20.500 "raid_level": "raid1", 00:36:20.500 "superblock": true, 00:36:20.500 "num_base_bdevs": 2, 00:36:20.500 "num_base_bdevs_discovered": 2, 00:36:20.500 "num_base_bdevs_operational": 2, 00:36:20.500 "base_bdevs_list": [ 00:36:20.500 { 00:36:20.500 "name": "spare", 00:36:20.500 "uuid": "c92c64e1-6e56-5e1d-a111-58533f6febd6", 00:36:20.500 "is_configured": true, 00:36:20.500 "data_offset": 256, 00:36:20.500 "data_size": 7936 00:36:20.500 }, 00:36:20.500 { 00:36:20.500 "name": "BaseBdev2", 00:36:20.500 "uuid": "893022c0-42d9-5cf4-a67d-5c28d55598fd", 00:36:20.500 "is_configured": true, 00:36:20.500 "data_offset": 256, 00:36:20.500 "data_size": 7936 00:36:20.500 } 00:36:20.500 ] 00:36:20.500 }' 00:36:20.500 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:20.500 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:20.500 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:20.500 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:20.500 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:20.500 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:20.500 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:20.500 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:36:20.500 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:20.500 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:20.500 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:20.500 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:20.500 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:20.500 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:20.500 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:20.500 19:03:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:20.758 19:03:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:20.758 "name": "raid_bdev1", 00:36:20.758 "uuid": "1f5349b6-6e46-4174-96d3-5a8a5673735d", 00:36:20.758 "strip_size_kb": 0, 00:36:20.758 "state": "online", 00:36:20.758 "raid_level": "raid1", 00:36:20.758 "superblock": true, 00:36:20.758 "num_base_bdevs": 2, 00:36:20.758 "num_base_bdevs_discovered": 2, 00:36:20.758 "num_base_bdevs_operational": 2, 00:36:20.758 "base_bdevs_list": [ 00:36:20.758 { 00:36:20.758 "name": "spare", 00:36:20.758 "uuid": "c92c64e1-6e56-5e1d-a111-58533f6febd6", 00:36:20.758 "is_configured": true, 00:36:20.758 "data_offset": 256, 00:36:20.758 "data_size": 7936 00:36:20.758 }, 00:36:20.758 { 00:36:20.758 "name": "BaseBdev2", 00:36:20.758 "uuid": "893022c0-42d9-5cf4-a67d-5c28d55598fd", 00:36:20.758 "is_configured": true, 00:36:20.758 "data_offset": 256, 00:36:20.758 "data_size": 7936 00:36:20.758 } 00:36:20.758 ] 00:36:20.758 }' 00:36:20.758 19:03:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:20.758 19:03:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:21.324 19:03:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:21.583 [2024-07-25 19:03:22.030752] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:21.583 [2024-07-25 19:03:22.030786] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:21.583 [2024-07-25 19:03:22.030911] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:21.583 [2024-07-25 19:03:22.031009] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:21.583 [2024-07-25 19:03:22.031018] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:36:21.583 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:21.583 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # jq length 00:36:21.842 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:36:21.842 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:36:21.842 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:36:21.842 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:36:21.842 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:21.842 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:36:21.842 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:21.842 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:21.842 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:21.842 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:36:21.842 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:21.842 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:21.842 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:36:22.103 /dev/nbd0 00:36:22.103 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:22.103 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:22.103 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:36:22.103 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:36:22.103 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:36:22.103 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:36:22.103 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:36:22.103 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:36:22.103 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:36:22.103 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:36:22.103 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:22.103 1+0 records in 00:36:22.103 1+0 records out 00:36:22.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493199 s, 8.3 MB/s 00:36:22.103 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:22.103 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:36:22.103 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:22.103 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:36:22.103 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:36:22.103 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:22.103 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:22.103 19:03:22 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:36:22.362 /dev/nbd1 00:36:22.362 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:36:22.362 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:36:22.362 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:36:22.363 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:36:22.363 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:36:22.363 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:36:22.363 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:36:22.363 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:36:22.363 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:36:22.363 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:36:22.363 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:22.363 1+0 records in 00:36:22.363 1+0 records out 00:36:22.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384448 s, 10.7 MB/s 00:36:22.363 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:22.363 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:36:22.363 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:22.363 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:36:22.363 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:36:22.363 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:22.363 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:22.363 19:03:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:36:22.622 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:36:22.622 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:22.622 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:22.622 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:22.622 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:36:22.622 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:22.622 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:36:22.882 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:22.882 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:22.882 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:22.882 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:22.882 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:22.882 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:22.882 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:36:22.882 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:36:22.882 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:22.882 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:36:23.141 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:36:23.141 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:36:23.141 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:36:23.141 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:23.141 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:23.141 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:23.141 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:36:23.141 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:36:23.141 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:36:23.141 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:23.399 19:03:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:23.657 [2024-07-25 19:03:24.000934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:23.657 [2024-07-25 19:03:24.001567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:23.657 [2024-07-25 19:03:24.001724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:36:23.657 [2024-07-25 19:03:24.001850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:23.657 [2024-07-25 19:03:24.004615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:23.657 [2024-07-25 19:03:24.004776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:23.657 [2024-07-25 19:03:24.004991] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:23.657 [2024-07-25 19:03:24.005066] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:23.657 [2024-07-25 19:03:24.005251] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:23.657 spare 00:36:23.657 19:03:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:23.658 19:03:24 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:23.658 19:03:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:23.658 19:03:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:23.658 19:03:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:23.658 19:03:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:23.658 19:03:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:23.658 19:03:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:23.658 19:03:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:23.658 19:03:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:23.658 19:03:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:23.658 19:03:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:23.658 [2024-07-25 19:03:24.105353] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012d80 00:36:23.658 [2024-07-25 19:03:24.105372] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:23.658 [2024-07-25 19:03:24.105515] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:36:23.658 [2024-07-25 19:03:24.105867] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012d80 00:36:23.658 [2024-07-25 19:03:24.105884] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012d80 00:36:23.658 [2024-07-25 19:03:24.106035] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:23.658 19:03:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:23.658 "name": "raid_bdev1", 00:36:23.658 "uuid": "1f5349b6-6e46-4174-96d3-5a8a5673735d", 00:36:23.658 "strip_size_kb": 0, 00:36:23.658 "state": "online", 00:36:23.658 "raid_level": "raid1", 00:36:23.658 "superblock": true, 00:36:23.658 "num_base_bdevs": 2, 00:36:23.658 "num_base_bdevs_discovered": 2, 00:36:23.658 "num_base_bdevs_operational": 2, 00:36:23.658 "base_bdevs_list": [ 00:36:23.658 { 00:36:23.658 "name": "spare", 00:36:23.658 "uuid": "c92c64e1-6e56-5e1d-a111-58533f6febd6", 00:36:23.658 "is_configured": true, 00:36:23.658 "data_offset": 256, 00:36:23.658 "data_size": 7936 00:36:23.658 }, 00:36:23.658 { 00:36:23.658 "name": "BaseBdev2", 00:36:23.658 "uuid": "893022c0-42d9-5cf4-a67d-5c28d55598fd", 00:36:23.658 "is_configured": true, 00:36:23.658 "data_offset": 256, 00:36:23.658 "data_size": 7936 00:36:23.658 } 00:36:23.658 ] 00:36:23.658 }' 00:36:23.658 19:03:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:23.658 19:03:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:24.224 19:03:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:24.224 19:03:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:24.224 19:03:24 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:24.224 19:03:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:24.224 19:03:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:24.224 19:03:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:24.224 19:03:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:24.482 19:03:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:24.482 "name": "raid_bdev1", 00:36:24.482 "uuid": "1f5349b6-6e46-4174-96d3-5a8a5673735d", 00:36:24.482 "strip_size_kb": 0, 00:36:24.482 "state": "online", 00:36:24.482 "raid_level": "raid1", 00:36:24.482 "superblock": true, 00:36:24.482 "num_base_bdevs": 2, 00:36:24.482 "num_base_bdevs_discovered": 2, 00:36:24.482 "num_base_bdevs_operational": 2, 00:36:24.482 "base_bdevs_list": [ 00:36:24.482 { 00:36:24.482 "name": "spare", 00:36:24.482 "uuid": "c92c64e1-6e56-5e1d-a111-58533f6febd6", 00:36:24.482 "is_configured": true, 00:36:24.482 "data_offset": 256, 00:36:24.482 "data_size": 7936 00:36:24.482 }, 00:36:24.482 { 00:36:24.482 "name": "BaseBdev2", 00:36:24.482 "uuid": "893022c0-42d9-5cf4-a67d-5c28d55598fd", 00:36:24.482 "is_configured": true, 00:36:24.482 "data_offset": 256, 00:36:24.482 "data_size": 7936 00:36:24.482 } 00:36:24.482 ] 00:36:24.482 }' 00:36:24.482 19:03:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:24.482 19:03:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:24.482 19:03:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:24.741 19:03:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:24.741 19:03:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:24.741 19:03:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:36:25.000 19:03:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:36:25.000 19:03:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:36:25.000 [2024-07-25 19:03:25.505296] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:25.000 19:03:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:25.000 19:03:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:25.000 19:03:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:25.000 19:03:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:25.000 19:03:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:25.000 19:03:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:25.000 19:03:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:25.000 19:03:25 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:25.000 19:03:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:25.000 19:03:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:25.000 19:03:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:25.000 19:03:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:25.259 19:03:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:25.259 "name": "raid_bdev1", 00:36:25.259 "uuid": "1f5349b6-6e46-4174-96d3-5a8a5673735d", 00:36:25.259 "strip_size_kb": 0, 00:36:25.259 "state": "online", 00:36:25.259 "raid_level": "raid1", 00:36:25.259 "superblock": true, 00:36:25.259 "num_base_bdevs": 2, 00:36:25.259 "num_base_bdevs_discovered": 1, 00:36:25.259 "num_base_bdevs_operational": 1, 00:36:25.259 "base_bdevs_list": [ 00:36:25.259 { 00:36:25.259 "name": null, 00:36:25.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:25.259 "is_configured": false, 00:36:25.259 "data_offset": 256, 00:36:25.259 "data_size": 7936 00:36:25.259 }, 00:36:25.259 { 00:36:25.259 "name": "BaseBdev2", 00:36:25.259 "uuid": "893022c0-42d9-5cf4-a67d-5c28d55598fd", 00:36:25.259 "is_configured": true, 00:36:25.259 "data_offset": 256, 00:36:25.259 "data_size": 7936 00:36:25.259 } 00:36:25.259 ] 00:36:25.259 }' 00:36:25.259 19:03:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:25.259 19:03:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:25.826 19:03:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:26.086 [2024-07-25 19:03:26.513513] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:26.086 [2024-07-25 19:03:26.513745] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:36:26.086 [2024-07-25 19:03:26.513758] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:36:26.086 [2024-07-25 19:03:26.513821] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:26.086 [2024-07-25 19:03:26.531650] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1dc0 00:36:26.086 [2024-07-25 19:03:26.534148] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:26.086 19:03:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # sleep 1 00:36:27.024 19:03:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:27.024 19:03:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:27.024 19:03:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:27.024 19:03:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:27.024 19:03:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:27.024 19:03:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:27.024 19:03:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:27.284 19:03:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:27.284 "name": "raid_bdev1", 00:36:27.284 "uuid": "1f5349b6-6e46-4174-96d3-5a8a5673735d", 00:36:27.284 "strip_size_kb": 0, 00:36:27.284 "state": "online", 00:36:27.284 "raid_level": "raid1", 00:36:27.284 "superblock": true, 00:36:27.284 "num_base_bdevs": 2, 00:36:27.284 "num_base_bdevs_discovered": 2, 00:36:27.284 "num_base_bdevs_operational": 2, 00:36:27.284 "process": { 00:36:27.284 "type": "rebuild", 00:36:27.284 "target": "spare", 00:36:27.284 "progress": { 00:36:27.284 "blocks": 3072, 00:36:27.284 "percent": 38 00:36:27.284 } 00:36:27.284 }, 00:36:27.284 "base_bdevs_list": [ 00:36:27.284 { 00:36:27.284 "name": "spare", 00:36:27.284 "uuid": "c92c64e1-6e56-5e1d-a111-58533f6febd6", 00:36:27.284 "is_configured": true, 00:36:27.284 "data_offset": 256, 00:36:27.284 "data_size": 7936 00:36:27.284 }, 00:36:27.284 { 00:36:27.284 "name": "BaseBdev2", 00:36:27.284 "uuid": "893022c0-42d9-5cf4-a67d-5c28d55598fd", 00:36:27.284 "is_configured": true, 00:36:27.284 "data_offset": 256, 00:36:27.284 "data_size": 7936 00:36:27.284 } 00:36:27.284 ] 00:36:27.284 }' 00:36:27.284 19:03:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:27.284 19:03:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:27.284 19:03:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:27.543 19:03:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:27.543 19:03:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:27.543 [2024-07-25 19:03:28.107510] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:27.803 [2024-07-25 19:03:28.146157] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:27.803 [2024-07-25 19:03:28.146366] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:36:27.803 [2024-07-25 19:03:28.146416] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:27.803 [2024-07-25 19:03:28.146487] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:27.803 19:03:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:27.803 19:03:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:27.803 19:03:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:27.803 19:03:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:27.803 19:03:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:27.803 19:03:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:27.803 19:03:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:27.803 19:03:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:27.803 19:03:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:27.803 19:03:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:27.803 19:03:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:27.803 19:03:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:28.063 19:03:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:28.063 "name": "raid_bdev1", 00:36:28.063 "uuid": "1f5349b6-6e46-4174-96d3-5a8a5673735d", 00:36:28.063 "strip_size_kb": 0, 00:36:28.063 "state": "online", 00:36:28.063 "raid_level": "raid1", 00:36:28.063 "superblock": true, 00:36:28.063 "num_base_bdevs": 2, 00:36:28.063 "num_base_bdevs_discovered": 1, 00:36:28.063 "num_base_bdevs_operational": 1, 00:36:28.063 "base_bdevs_list": [ 00:36:28.063 { 00:36:28.063 "name": null, 00:36:28.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:28.063 "is_configured": false, 00:36:28.063 "data_offset": 256, 00:36:28.063 "data_size": 7936 00:36:28.063 }, 00:36:28.063 { 00:36:28.063 "name": "BaseBdev2", 00:36:28.063 "uuid": "893022c0-42d9-5cf4-a67d-5c28d55598fd", 00:36:28.063 "is_configured": true, 00:36:28.063 "data_offset": 256, 00:36:28.063 "data_size": 7936 00:36:28.063 } 00:36:28.063 ] 00:36:28.063 }' 00:36:28.063 19:03:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:28.063 19:03:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:28.631 19:03:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:28.891 [2024-07-25 19:03:29.216583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:28.891 [2024-07-25 19:03:29.216815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:28.891 [2024-07-25 19:03:29.216883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:36:28.891 [2024-07-25 19:03:29.217113] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:28.891 [2024-07-25 19:03:29.217723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:28.891 [2024-07-25 19:03:29.217877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:28.891 [2024-07-25 19:03:29.218085] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:28.891 [2024-07-25 19:03:29.218177] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:36:28.891 [2024-07-25 19:03:29.218248] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:36:28.891 [2024-07-25 19:03:29.218320] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:28.891 [2024-07-25 19:03:29.235882] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2100 00:36:28.891 spare 00:36:28.891 [2024-07-25 19:03:29.238045] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:28.891 19:03:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # sleep 1 00:36:29.828 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:29.828 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:29.828 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:29.828 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:29.828 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:29.828 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:29.828 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:30.088 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:30.088 "name": "raid_bdev1", 00:36:30.088 "uuid": "1f5349b6-6e46-4174-96d3-5a8a5673735d", 00:36:30.088 "strip_size_kb": 0, 00:36:30.088 "state": "online", 00:36:30.088 "raid_level": "raid1", 00:36:30.088 "superblock": true, 00:36:30.088 "num_base_bdevs": 2, 00:36:30.088 "num_base_bdevs_discovered": 2, 00:36:30.088 "num_base_bdevs_operational": 2, 00:36:30.088 "process": { 00:36:30.088 "type": "rebuild", 00:36:30.088 "target": "spare", 00:36:30.088 "progress": { 00:36:30.088 "blocks": 3072, 00:36:30.088 "percent": 38 00:36:30.088 } 00:36:30.088 }, 00:36:30.088 "base_bdevs_list": [ 00:36:30.088 { 00:36:30.088 "name": "spare", 00:36:30.088 "uuid": "c92c64e1-6e56-5e1d-a111-58533f6febd6", 00:36:30.088 "is_configured": true, 00:36:30.088 "data_offset": 256, 00:36:30.088 "data_size": 7936 00:36:30.088 }, 00:36:30.088 { 00:36:30.088 "name": "BaseBdev2", 00:36:30.088 "uuid": "893022c0-42d9-5cf4-a67d-5c28d55598fd", 00:36:30.088 "is_configured": true, 00:36:30.088 "data_offset": 256, 00:36:30.088 "data_size": 7936 00:36:30.088 } 00:36:30.088 ] 00:36:30.088 }' 00:36:30.088 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:30.088 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:36:30.088 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:30.088 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:30.088 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:30.347 [2024-07-25 19:03:30.788265] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:30.347 [2024-07-25 19:03:30.849620] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:30.347 [2024-07-25 19:03:30.849830] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:30.347 [2024-07-25 19:03:30.849878] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:30.347 [2024-07-25 19:03:30.849993] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:30.347 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:30.347 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:30.347 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:30.347 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:30.347 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:30.347 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:30.347 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:30.347 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:30.347 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:30.347 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:30.347 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:30.347 19:03:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:30.606 19:03:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:30.606 "name": "raid_bdev1", 00:36:30.606 "uuid": "1f5349b6-6e46-4174-96d3-5a8a5673735d", 00:36:30.606 "strip_size_kb": 0, 00:36:30.606 "state": "online", 00:36:30.606 "raid_level": "raid1", 00:36:30.606 "superblock": true, 00:36:30.606 "num_base_bdevs": 2, 00:36:30.606 "num_base_bdevs_discovered": 1, 00:36:30.606 "num_base_bdevs_operational": 1, 00:36:30.606 "base_bdevs_list": [ 00:36:30.606 { 00:36:30.606 "name": null, 00:36:30.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:30.606 "is_configured": false, 00:36:30.606 "data_offset": 256, 00:36:30.606 "data_size": 7936 00:36:30.606 }, 00:36:30.606 { 00:36:30.606 "name": "BaseBdev2", 00:36:30.606 "uuid": "893022c0-42d9-5cf4-a67d-5c28d55598fd", 00:36:30.606 "is_configured": true, 00:36:30.606 "data_offset": 256, 00:36:30.606 "data_size": 7936 00:36:30.606 } 00:36:30.606 ] 00:36:30.606 }' 00:36:30.606 19:03:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:36:30.606 19:03:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:31.174 19:03:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:31.174 19:03:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:31.174 19:03:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:31.174 19:03:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:31.174 19:03:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:31.174 19:03:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:31.174 19:03:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:31.433 19:03:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:31.433 "name": "raid_bdev1", 00:36:31.433 "uuid": "1f5349b6-6e46-4174-96d3-5a8a5673735d", 00:36:31.433 "strip_size_kb": 0, 00:36:31.433 "state": "online", 00:36:31.433 "raid_level": "raid1", 00:36:31.433 "superblock": true, 00:36:31.433 "num_base_bdevs": 2, 00:36:31.433 "num_base_bdevs_discovered": 1, 00:36:31.433 "num_base_bdevs_operational": 1, 00:36:31.433 "base_bdevs_list": [ 00:36:31.433 { 00:36:31.433 "name": null, 00:36:31.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:31.433 "is_configured": false, 00:36:31.433 "data_offset": 256, 00:36:31.433 "data_size": 7936 00:36:31.433 }, 00:36:31.433 { 00:36:31.433 "name": "BaseBdev2", 00:36:31.433 "uuid": "893022c0-42d9-5cf4-a67d-5c28d55598fd", 00:36:31.433 "is_configured": true, 00:36:31.433 "data_offset": 256, 00:36:31.433 "data_size": 7936 00:36:31.433 } 00:36:31.433 ] 00:36:31.433 }' 00:36:31.433 19:03:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:31.433 19:03:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:31.433 19:03:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:31.691 19:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:31.691 19:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:36:31.949 19:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:31.949 [2024-07-25 19:03:32.451724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:36:31.949 [2024-07-25 19:03:32.451979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:31.949 [2024-07-25 19:03:32.452096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:36:31.949 [2024-07-25 19:03:32.452194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:31.949 [2024-07-25 19:03:32.452742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:31.949 [2024-07-25 19:03:32.452881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:36:31.949 [2024-07-25 19:03:32.453113] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:36:31.949 [2024-07-25 19:03:32.453219] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:31.949 [2024-07-25 19:03:32.453287] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:31.949 BaseBdev1 00:36:31.949 19:03:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@789 -- # sleep 1 00:36:33.326 19:03:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:33.326 19:03:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:33.326 19:03:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:33.326 19:03:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:33.327 19:03:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:33.327 19:03:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:33.327 19:03:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:33.327 19:03:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:33.327 19:03:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:33.327 19:03:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:33.327 19:03:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:33.327 19:03:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:33.327 19:03:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:33.327 "name": "raid_bdev1", 00:36:33.327 "uuid": "1f5349b6-6e46-4174-96d3-5a8a5673735d", 00:36:33.327 "strip_size_kb": 0, 00:36:33.327 "state": "online", 00:36:33.327 "raid_level": "raid1", 00:36:33.327 "superblock": true, 00:36:33.327 "num_base_bdevs": 2, 00:36:33.327 "num_base_bdevs_discovered": 1, 00:36:33.327 "num_base_bdevs_operational": 1, 00:36:33.327 "base_bdevs_list": [ 00:36:33.327 { 00:36:33.327 "name": null, 00:36:33.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:33.327 "is_configured": false, 00:36:33.327 "data_offset": 256, 00:36:33.327 "data_size": 7936 00:36:33.327 }, 00:36:33.327 { 00:36:33.327 "name": "BaseBdev2", 00:36:33.327 "uuid": "893022c0-42d9-5cf4-a67d-5c28d55598fd", 00:36:33.327 "is_configured": true, 00:36:33.327 "data_offset": 256, 00:36:33.327 "data_size": 7936 00:36:33.327 } 00:36:33.327 ] 00:36:33.327 }' 00:36:33.327 19:03:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:33.327 19:03:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:33.894 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:33.894 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:33.894 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:36:33.894 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:33.894 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:33.894 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:33.894 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:34.153 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:34.153 "name": "raid_bdev1", 00:36:34.153 "uuid": "1f5349b6-6e46-4174-96d3-5a8a5673735d", 00:36:34.153 "strip_size_kb": 0, 00:36:34.153 "state": "online", 00:36:34.153 "raid_level": "raid1", 00:36:34.153 "superblock": true, 00:36:34.153 "num_base_bdevs": 2, 00:36:34.153 "num_base_bdevs_discovered": 1, 00:36:34.153 "num_base_bdevs_operational": 1, 00:36:34.154 "base_bdevs_list": [ 00:36:34.154 { 00:36:34.154 "name": null, 00:36:34.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:34.154 "is_configured": false, 00:36:34.154 "data_offset": 256, 00:36:34.154 "data_size": 7936 00:36:34.154 }, 00:36:34.154 { 00:36:34.154 "name": "BaseBdev2", 00:36:34.154 "uuid": "893022c0-42d9-5cf4-a67d-5c28d55598fd", 00:36:34.154 "is_configured": true, 00:36:34.154 "data_offset": 256, 00:36:34.154 "data_size": 7936 00:36:34.154 } 00:36:34.154 ] 00:36:34.154 }' 00:36:34.154 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:34.154 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:34.154 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:34.154 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:34.154 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:34.154 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:36:34.154 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:34.154 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:34.154 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:34.154 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:34.154 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:34.154 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:34.154 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:34.154 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:34.154 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:36:34.154 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:34.452 [2024-07-25 19:03:34.838557] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:34.452 [2024-07-25 19:03:34.838873] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:34.452 [2024-07-25 19:03:34.838972] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:34.452 request: 00:36:34.452 { 00:36:34.452 "base_bdev": "BaseBdev1", 00:36:34.452 "raid_bdev": "raid_bdev1", 00:36:34.452 "method": "bdev_raid_add_base_bdev", 00:36:34.452 "req_id": 1 00:36:34.452 } 00:36:34.452 Got JSON-RPC error response 00:36:34.452 response: 00:36:34.452 { 00:36:34.452 "code": -22, 00:36:34.452 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:36:34.452 } 00:36:34.452 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:36:34.452 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:34.452 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:34.452 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:34.452 19:03:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@793 -- # sleep 1 00:36:35.414 19:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:35.414 19:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:35.414 19:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:35.414 19:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:35.414 19:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:35.414 19:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:35.414 19:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:35.414 19:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:35.414 19:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:35.414 19:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:35.414 19:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:35.414 19:03:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:35.672 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:35.672 "name": "raid_bdev1", 00:36:35.672 "uuid": "1f5349b6-6e46-4174-96d3-5a8a5673735d", 00:36:35.672 "strip_size_kb": 0, 00:36:35.672 "state": "online", 00:36:35.672 "raid_level": "raid1", 00:36:35.672 "superblock": true, 00:36:35.672 "num_base_bdevs": 2, 00:36:35.672 "num_base_bdevs_discovered": 1, 00:36:35.672 "num_base_bdevs_operational": 1, 00:36:35.672 
"base_bdevs_list": [ 00:36:35.672 { 00:36:35.672 "name": null, 00:36:35.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:35.672 "is_configured": false, 00:36:35.672 "data_offset": 256, 00:36:35.672 "data_size": 7936 00:36:35.672 }, 00:36:35.672 { 00:36:35.672 "name": "BaseBdev2", 00:36:35.672 "uuid": "893022c0-42d9-5cf4-a67d-5c28d55598fd", 00:36:35.672 "is_configured": true, 00:36:35.672 "data_offset": 256, 00:36:35.672 "data_size": 7936 00:36:35.672 } 00:36:35.672 ] 00:36:35.672 }' 00:36:35.672 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:35.672 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:36.240 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:36.240 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:36.240 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:36.240 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:36.240 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:36.240 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:36.240 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:36.499 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:36.499 "name": "raid_bdev1", 00:36:36.499 "uuid": "1f5349b6-6e46-4174-96d3-5a8a5673735d", 00:36:36.499 "strip_size_kb": 0, 00:36:36.499 "state": "online", 00:36:36.499 "raid_level": "raid1", 00:36:36.499 "superblock": true, 00:36:36.499 "num_base_bdevs": 2, 00:36:36.499 "num_base_bdevs_discovered": 1, 00:36:36.499 "num_base_bdevs_operational": 1, 00:36:36.499 "base_bdevs_list": [ 00:36:36.499 { 00:36:36.499 "name": null, 00:36:36.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:36.499 "is_configured": false, 00:36:36.499 "data_offset": 256, 00:36:36.499 "data_size": 7936 00:36:36.499 }, 00:36:36.499 { 00:36:36.499 "name": "BaseBdev2", 00:36:36.499 "uuid": "893022c0-42d9-5cf4-a67d-5c28d55598fd", 00:36:36.499 "is_configured": true, 00:36:36.499 "data_offset": 256, 00:36:36.499 "data_size": 7936 00:36:36.499 } 00:36:36.499 ] 00:36:36.499 }' 00:36:36.499 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:36.499 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:36.499 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:36.499 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:36.499 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@798 -- # killprocess 158722 00:36:36.499 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 158722 ']' 00:36:36.499 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 158722 00:36:36.499 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:36:36.499 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:36:36.499 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 158722 00:36:36.499 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:36.499 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:36.499 killing process with pid 158722 00:36:36.499 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 158722' 00:36:36.499 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 158722 00:36:36.499 Received shutdown signal, test time was about 60.000000 seconds 00:36:36.499 00:36:36.499 Latency(us) 00:36:36.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:36.499 =================================================================================================================== 00:36:36.499 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:36.499 19:03:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 158722 00:36:36.499 [2024-07-25 19:03:36.973546] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:36.500 [2024-07-25 19:03:36.973700] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:36.500 [2024-07-25 19:03:36.973790] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:36.500 [2024-07-25 19:03:36.973941] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state offline 00:36:36.759 [2024-07-25 19:03:37.298316] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:38.664 ************************************ 00:36:38.664 END TEST raid_rebuild_test_sb_4k 00:36:38.664 ************************************ 00:36:38.664 19:03:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@800 -- # return 0 00:36:38.664 00:36:38.664 real 0m31.349s 00:36:38.664 user 0m47.388s 00:36:38.664 sys 0m4.737s 00:36:38.664 19:03:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:38.664 19:03:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:38.664 19:03:38 bdev_raid -- bdev/bdev_raid.sh@984 -- # base_malloc_params='-m 32' 00:36:38.664 19:03:38 bdev_raid -- bdev/bdev_raid.sh@985 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:36:38.664 19:03:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:36:38.664 19:03:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:38.664 19:03:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:38.664 ************************************ 00:36:38.664 START TEST raid_state_function_test_sb_md_separate 00:36:38.664 ************************************ 00:36:38.664 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:36:38.664 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:36:38.664 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:36:38.664 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:36:38.664 19:03:38 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:36:38.664 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:36:38.664 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:38.664 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:36:38.664 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:38.664 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:38.664 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:36:38.664 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:38.664 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:38.664 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:36:38.664 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:36:38.664 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:36:38.665 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:36:38.665 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:36:38.665 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:36:38.665 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:36:38.665 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:36:38.665 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:36:38.665 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:36:38.665 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=159598 00:36:38.665 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 159598' 00:36:38.665 Process raid pid: 159598 00:36:38.665 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:36:38.665 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 159598 /var/tmp/spdk-raid.sock 00:36:38.665 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 159598 ']' 00:36:38.665 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:38.665 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:38.665 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock...' 00:36:38.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:38.665 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:38.665 19:03:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:38.665 [2024-07-25 19:03:38.912332] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:36:38.665 [2024-07-25 19:03:38.912796] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:38.665 [2024-07-25 19:03:39.096438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:38.924 [2024-07-25 19:03:39.305457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:38.924 [2024-07-25 19:03:39.498445] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:39.493 19:03:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:39.493 19:03:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:36:39.493 19:03:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:36:39.493 [2024-07-25 19:03:39.985354] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:39.493 [2024-07-25 19:03:39.985574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:39.493 [2024-07-25 19:03:39.985661] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:39.493 [2024-07-25 19:03:39.985723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:39.493 19:03:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:39.493 19:03:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:39.493 19:03:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:39.493 19:03:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:39.493 19:03:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:39.493 19:03:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:39.493 19:03:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:39.493 19:03:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:39.493 19:03:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:39.493 19:03:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:39.493 19:03:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:39.493 19:03:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:39.752 19:03:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:39.752 "name": "Existed_Raid", 00:36:39.752 "uuid": "93b8817c-b2ff-4f60-a9c4-f127278b7fcc", 00:36:39.752 "strip_size_kb": 0, 00:36:39.753 "state": "configuring", 00:36:39.753 "raid_level": "raid1", 00:36:39.753 "superblock": true, 00:36:39.753 "num_base_bdevs": 2, 00:36:39.753 "num_base_bdevs_discovered": 0, 00:36:39.753 "num_base_bdevs_operational": 2, 00:36:39.753 "base_bdevs_list": [ 00:36:39.753 { 00:36:39.753 "name": "BaseBdev1", 00:36:39.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:39.753 "is_configured": false, 00:36:39.753 "data_offset": 0, 00:36:39.753 "data_size": 0 00:36:39.753 }, 00:36:39.753 { 00:36:39.753 "name": "BaseBdev2", 00:36:39.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:39.753 "is_configured": false, 00:36:39.753 "data_offset": 0, 00:36:39.753 "data_size": 0 00:36:39.753 } 00:36:39.753 ] 00:36:39.753 }' 00:36:39.753 19:03:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:39.753 19:03:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:40.321 19:03:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:40.580 [2024-07-25 19:03:40.933363] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:40.580 [2024-07-25 19:03:40.933517] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:36:40.580 19:03:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:36:40.580 [2024-07-25 19:03:41.101430] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:40.580 [2024-07-25 19:03:41.101578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:40.580 [2024-07-25 19:03:41.101697] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:40.580 [2024-07-25 19:03:41.101813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:40.580 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:36:40.838 [2024-07-25 19:03:41.310090] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:40.838 BaseBdev1 00:36:40.838 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:36:40.838 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:36:40.838 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:40.838 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@901 -- # local i 00:36:40.838 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:40.838 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:40.838 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:41.097 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:41.097 [ 00:36:41.097 { 00:36:41.097 "name": "BaseBdev1", 00:36:41.097 "aliases": [ 00:36:41.097 "22f32567-8295-4170-8423-3009a7c1c31b" 00:36:41.097 ], 00:36:41.097 "product_name": "Malloc disk", 00:36:41.097 "block_size": 4096, 00:36:41.097 "num_blocks": 8192, 00:36:41.097 "uuid": "22f32567-8295-4170-8423-3009a7c1c31b", 00:36:41.097 "md_size": 32, 00:36:41.097 "md_interleave": false, 00:36:41.097 "dif_type": 0, 00:36:41.097 "assigned_rate_limits": { 00:36:41.097 "rw_ios_per_sec": 0, 00:36:41.097 "rw_mbytes_per_sec": 0, 00:36:41.097 "r_mbytes_per_sec": 0, 00:36:41.097 "w_mbytes_per_sec": 0 00:36:41.097 }, 00:36:41.097 "claimed": true, 00:36:41.097 "claim_type": "exclusive_write", 00:36:41.097 "zoned": false, 00:36:41.097 "supported_io_types": { 00:36:41.097 "read": true, 00:36:41.097 "write": true, 00:36:41.097 "unmap": true, 00:36:41.097 "flush": true, 00:36:41.097 "reset": true, 00:36:41.097 "nvme_admin": false, 00:36:41.097 "nvme_io": false, 00:36:41.097 "nvme_io_md": false, 00:36:41.097 "write_zeroes": true, 00:36:41.097 "zcopy": true, 00:36:41.097 "get_zone_info": false, 00:36:41.097 "zone_management": false, 00:36:41.097 "zone_append": false, 00:36:41.097 "compare": false, 00:36:41.097 "compare_and_write": false, 00:36:41.097 "abort": true, 00:36:41.097 "seek_hole": false, 00:36:41.097 "seek_data": false, 00:36:41.097 "copy": true, 00:36:41.097 "nvme_iov_md": false 00:36:41.097 }, 00:36:41.097 "memory_domains": [ 00:36:41.097 { 00:36:41.097 "dma_device_id": "system", 00:36:41.097 "dma_device_type": 1 00:36:41.097 }, 00:36:41.097 { 00:36:41.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:41.097 "dma_device_type": 2 00:36:41.097 } 00:36:41.097 ], 00:36:41.097 "driver_specific": {} 00:36:41.097 } 00:36:41.097 ] 00:36:41.097 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:36:41.097 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:41.097 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:41.097 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:41.097 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:41.097 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:41.097 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:41.098 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:41.098 19:03:41 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:41.098 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:41.098 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:41.098 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:41.098 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:41.356 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:41.356 "name": "Existed_Raid", 00:36:41.356 "uuid": "90fd6741-2351-493d-9fa7-f6bfeb5d01f9", 00:36:41.356 "strip_size_kb": 0, 00:36:41.356 "state": "configuring", 00:36:41.356 "raid_level": "raid1", 00:36:41.356 "superblock": true, 00:36:41.356 "num_base_bdevs": 2, 00:36:41.356 "num_base_bdevs_discovered": 1, 00:36:41.356 "num_base_bdevs_operational": 2, 00:36:41.356 "base_bdevs_list": [ 00:36:41.356 { 00:36:41.356 "name": "BaseBdev1", 00:36:41.356 "uuid": "22f32567-8295-4170-8423-3009a7c1c31b", 00:36:41.356 "is_configured": true, 00:36:41.356 "data_offset": 256, 00:36:41.356 "data_size": 7936 00:36:41.356 }, 00:36:41.356 { 00:36:41.356 "name": "BaseBdev2", 00:36:41.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:41.356 "is_configured": false, 00:36:41.356 "data_offset": 0, 00:36:41.356 "data_size": 0 00:36:41.356 } 00:36:41.356 ] 00:36:41.356 }' 00:36:41.356 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:41.356 19:03:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:41.923 19:03:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:42.182 [2024-07-25 19:03:42.582310] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:42.182 [2024-07-25 19:03:42.582474] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:36:42.182 19:03:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:36:42.182 [2024-07-25 19:03:42.754411] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:42.182 [2024-07-25 19:03:42.756849] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:42.182 [2024-07-25 19:03:42.757021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:42.441 19:03:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:36:42.441 19:03:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:36:42.441 19:03:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:42.442 19:03:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:36:42.442 19:03:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:42.442 19:03:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:42.442 19:03:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:42.442 19:03:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:42.442 19:03:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:42.442 19:03:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:42.442 19:03:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:42.442 19:03:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:42.442 19:03:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:42.442 19:03:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:42.706 19:03:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:42.706 "name": "Existed_Raid", 00:36:42.706 "uuid": "b946c800-2e8d-4d80-8c92-06d5d7ed2558", 00:36:42.706 "strip_size_kb": 0, 00:36:42.706 "state": "configuring", 00:36:42.706 "raid_level": "raid1", 00:36:42.706 "superblock": true, 00:36:42.706 "num_base_bdevs": 2, 00:36:42.706 "num_base_bdevs_discovered": 1, 00:36:42.706 "num_base_bdevs_operational": 2, 00:36:42.706 "base_bdevs_list": [ 00:36:42.706 { 00:36:42.706 "name": "BaseBdev1", 00:36:42.706 "uuid": "22f32567-8295-4170-8423-3009a7c1c31b", 00:36:42.706 "is_configured": true, 00:36:42.706 "data_offset": 256, 00:36:42.706 "data_size": 7936 00:36:42.706 }, 00:36:42.706 { 00:36:42.706 "name": "BaseBdev2", 00:36:42.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:42.706 "is_configured": false, 00:36:42.706 "data_offset": 0, 00:36:42.706 "data_size": 0 00:36:42.706 } 00:36:42.706 ] 00:36:42.706 }' 00:36:42.706 19:03:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:42.706 19:03:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:43.273 19:03:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:36:43.532 [2024-07-25 19:03:43.927096] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:43.532 [2024-07-25 19:03:43.927287] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:36:43.532 [2024-07-25 19:03:43.927298] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:43.532 [2024-07-25 19:03:43.927397] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:36:43.532 [2024-07-25 19:03:43.927512] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:36:43.532 [2024-07-25 19:03:43.927521] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:36:43.532 [2024-07-25 19:03:43.927617] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:43.532 BaseBdev2 00:36:43.532 19:03:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:36:43.532 19:03:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:36:43.532 19:03:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:43.532 19:03:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:36:43.532 19:03:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:43.532 19:03:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:43.532 19:03:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:43.790 19:03:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:44.049 [ 00:36:44.049 { 00:36:44.049 "name": "BaseBdev2", 00:36:44.049 "aliases": [ 00:36:44.049 "51912406-3284-4639-a67f-fc76a0337d8e" 00:36:44.049 ], 00:36:44.049 "product_name": "Malloc disk", 00:36:44.049 "block_size": 4096, 00:36:44.049 "num_blocks": 8192, 00:36:44.049 "uuid": "51912406-3284-4639-a67f-fc76a0337d8e", 00:36:44.049 "md_size": 32, 00:36:44.049 "md_interleave": false, 00:36:44.049 "dif_type": 0, 00:36:44.049 "assigned_rate_limits": { 00:36:44.049 "rw_ios_per_sec": 0, 00:36:44.049 "rw_mbytes_per_sec": 0, 00:36:44.049 "r_mbytes_per_sec": 0, 00:36:44.049 "w_mbytes_per_sec": 0 00:36:44.049 }, 00:36:44.049 "claimed": true, 00:36:44.049 "claim_type": "exclusive_write", 00:36:44.049 "zoned": false, 00:36:44.049 "supported_io_types": { 00:36:44.049 "read": true, 00:36:44.049 "write": true, 00:36:44.049 "unmap": true, 00:36:44.049 "flush": true, 00:36:44.049 "reset": true, 00:36:44.049 "nvme_admin": false, 00:36:44.049 "nvme_io": false, 00:36:44.049 "nvme_io_md": false, 00:36:44.049 "write_zeroes": true, 00:36:44.049 "zcopy": true, 00:36:44.049 "get_zone_info": false, 00:36:44.049 "zone_management": false, 00:36:44.049 "zone_append": false, 00:36:44.049 "compare": false, 00:36:44.049 "compare_and_write": false, 00:36:44.049 "abort": true, 00:36:44.049 "seek_hole": false, 00:36:44.049 "seek_data": false, 00:36:44.049 "copy": true, 00:36:44.049 "nvme_iov_md": false 00:36:44.049 }, 00:36:44.049 "memory_domains": [ 00:36:44.049 { 00:36:44.049 "dma_device_id": "system", 00:36:44.049 "dma_device_type": 1 00:36:44.049 }, 00:36:44.049 { 00:36:44.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:44.049 "dma_device_type": 2 00:36:44.049 } 00:36:44.049 ], 00:36:44.049 "driver_specific": {} 00:36:44.049 } 00:36:44.049 ] 00:36:44.049 19:03:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:36:44.049 19:03:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:36:44.049 19:03:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:36:44.049 19:03:44 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:36:44.049 19:03:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:44.049 19:03:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:44.049 19:03:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:44.050 19:03:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:44.050 19:03:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:44.050 19:03:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:44.050 19:03:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:44.050 19:03:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:44.050 19:03:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:44.050 19:03:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:44.050 19:03:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:44.050 19:03:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:44.050 "name": "Existed_Raid", 00:36:44.050 "uuid": "b946c800-2e8d-4d80-8c92-06d5d7ed2558", 00:36:44.050 "strip_size_kb": 0, 00:36:44.050 "state": "online", 00:36:44.050 "raid_level": "raid1", 00:36:44.050 "superblock": true, 00:36:44.050 "num_base_bdevs": 2, 00:36:44.050 "num_base_bdevs_discovered": 2, 00:36:44.050 "num_base_bdevs_operational": 2, 00:36:44.050 "base_bdevs_list": [ 00:36:44.050 { 00:36:44.050 "name": "BaseBdev1", 00:36:44.050 "uuid": "22f32567-8295-4170-8423-3009a7c1c31b", 00:36:44.050 "is_configured": true, 00:36:44.050 "data_offset": 256, 00:36:44.050 "data_size": 7936 00:36:44.050 }, 00:36:44.050 { 00:36:44.050 "name": "BaseBdev2", 00:36:44.050 "uuid": "51912406-3284-4639-a67f-fc76a0337d8e", 00:36:44.050 "is_configured": true, 00:36:44.050 "data_offset": 256, 00:36:44.050 "data_size": 7936 00:36:44.050 } 00:36:44.050 ] 00:36:44.050 }' 00:36:44.050 19:03:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:44.050 19:03:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:44.617 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:36:44.617 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:36:44.617 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:36:44.617 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:36:44.617 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:36:44.617 19:03:45 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:36:44.617 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:36:44.617 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:36:44.877 [2024-07-25 19:03:45.307494] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:44.877 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:36:44.877 "name": "Existed_Raid", 00:36:44.877 "aliases": [ 00:36:44.877 "b946c800-2e8d-4d80-8c92-06d5d7ed2558" 00:36:44.877 ], 00:36:44.877 "product_name": "Raid Volume", 00:36:44.877 "block_size": 4096, 00:36:44.877 "num_blocks": 7936, 00:36:44.877 "uuid": "b946c800-2e8d-4d80-8c92-06d5d7ed2558", 00:36:44.877 "md_size": 32, 00:36:44.877 "md_interleave": false, 00:36:44.877 "dif_type": 0, 00:36:44.877 "assigned_rate_limits": { 00:36:44.877 "rw_ios_per_sec": 0, 00:36:44.877 "rw_mbytes_per_sec": 0, 00:36:44.877 "r_mbytes_per_sec": 0, 00:36:44.877 "w_mbytes_per_sec": 0 00:36:44.877 }, 00:36:44.877 "claimed": false, 00:36:44.877 "zoned": false, 00:36:44.877 "supported_io_types": { 00:36:44.877 "read": true, 00:36:44.877 "write": true, 00:36:44.877 "unmap": false, 00:36:44.877 "flush": false, 00:36:44.877 "reset": true, 00:36:44.877 "nvme_admin": false, 00:36:44.877 "nvme_io": false, 00:36:44.877 "nvme_io_md": false, 00:36:44.877 "write_zeroes": true, 00:36:44.877 "zcopy": false, 00:36:44.877 "get_zone_info": false, 00:36:44.877 "zone_management": false, 00:36:44.877 "zone_append": false, 00:36:44.877 "compare": false, 00:36:44.877 "compare_and_write": false, 00:36:44.877 "abort": false, 00:36:44.877 "seek_hole": false, 00:36:44.877 "seek_data": false, 00:36:44.877 "copy": false, 00:36:44.877 "nvme_iov_md": false 00:36:44.877 }, 00:36:44.877 "memory_domains": [ 00:36:44.877 { 00:36:44.877 "dma_device_id": "system", 00:36:44.877 "dma_device_type": 1 00:36:44.877 }, 00:36:44.877 { 00:36:44.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:44.877 "dma_device_type": 2 00:36:44.877 }, 00:36:44.877 { 00:36:44.877 "dma_device_id": "system", 00:36:44.877 "dma_device_type": 1 00:36:44.877 }, 00:36:44.877 { 00:36:44.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:44.877 "dma_device_type": 2 00:36:44.877 } 00:36:44.877 ], 00:36:44.877 "driver_specific": { 00:36:44.877 "raid": { 00:36:44.877 "uuid": "b946c800-2e8d-4d80-8c92-06d5d7ed2558", 00:36:44.877 "strip_size_kb": 0, 00:36:44.877 "state": "online", 00:36:44.877 "raid_level": "raid1", 00:36:44.877 "superblock": true, 00:36:44.877 "num_base_bdevs": 2, 00:36:44.877 "num_base_bdevs_discovered": 2, 00:36:44.877 "num_base_bdevs_operational": 2, 00:36:44.877 "base_bdevs_list": [ 00:36:44.877 { 00:36:44.877 "name": "BaseBdev1", 00:36:44.877 "uuid": "22f32567-8295-4170-8423-3009a7c1c31b", 00:36:44.877 "is_configured": true, 00:36:44.877 "data_offset": 256, 00:36:44.877 "data_size": 7936 00:36:44.877 }, 00:36:44.877 { 00:36:44.877 "name": "BaseBdev2", 00:36:44.877 "uuid": "51912406-3284-4639-a67f-fc76a0337d8e", 00:36:44.877 "is_configured": true, 00:36:44.877 "data_offset": 256, 00:36:44.877 "data_size": 7936 00:36:44.877 } 00:36:44.877 ] 00:36:44.877 } 00:36:44.877 } 00:36:44.877 }' 00:36:44.877 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:44.877 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:36:44.877 BaseBdev2' 00:36:44.877 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:44.877 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:36:44.877 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:45.135 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:45.135 "name": "BaseBdev1", 00:36:45.135 "aliases": [ 00:36:45.135 "22f32567-8295-4170-8423-3009a7c1c31b" 00:36:45.135 ], 00:36:45.135 "product_name": "Malloc disk", 00:36:45.135 "block_size": 4096, 00:36:45.135 "num_blocks": 8192, 00:36:45.135 "uuid": "22f32567-8295-4170-8423-3009a7c1c31b", 00:36:45.135 "md_size": 32, 00:36:45.135 "md_interleave": false, 00:36:45.135 "dif_type": 0, 00:36:45.135 "assigned_rate_limits": { 00:36:45.135 "rw_ios_per_sec": 0, 00:36:45.135 "rw_mbytes_per_sec": 0, 00:36:45.135 "r_mbytes_per_sec": 0, 00:36:45.135 "w_mbytes_per_sec": 0 00:36:45.135 }, 00:36:45.135 "claimed": true, 00:36:45.135 "claim_type": "exclusive_write", 00:36:45.135 "zoned": false, 00:36:45.135 "supported_io_types": { 00:36:45.136 "read": true, 00:36:45.136 "write": true, 00:36:45.136 "unmap": true, 00:36:45.136 "flush": true, 00:36:45.136 "reset": true, 00:36:45.136 "nvme_admin": false, 00:36:45.136 "nvme_io": false, 00:36:45.136 "nvme_io_md": false, 00:36:45.136 "write_zeroes": true, 00:36:45.136 "zcopy": true, 00:36:45.136 "get_zone_info": false, 00:36:45.136 "zone_management": false, 00:36:45.136 "zone_append": false, 00:36:45.136 "compare": false, 00:36:45.136 "compare_and_write": false, 00:36:45.136 "abort": true, 00:36:45.136 "seek_hole": false, 00:36:45.136 "seek_data": false, 00:36:45.136 "copy": true, 00:36:45.136 "nvme_iov_md": false 00:36:45.136 }, 00:36:45.136 "memory_domains": [ 00:36:45.136 { 00:36:45.136 "dma_device_id": "system", 00:36:45.136 "dma_device_type": 1 00:36:45.136 }, 00:36:45.136 { 00:36:45.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:45.136 "dma_device_type": 2 00:36:45.136 } 00:36:45.136 ], 00:36:45.136 "driver_specific": {} 00:36:45.136 }' 00:36:45.136 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:45.136 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:45.136 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:45.136 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:45.136 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:45.136 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:45.136 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:45.394 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:45.394 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ 
false == false ]] 00:36:45.394 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:45.394 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:45.394 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:45.394 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:45.394 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:36:45.394 19:03:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:45.652 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:45.652 "name": "BaseBdev2", 00:36:45.652 "aliases": [ 00:36:45.652 "51912406-3284-4639-a67f-fc76a0337d8e" 00:36:45.652 ], 00:36:45.652 "product_name": "Malloc disk", 00:36:45.652 "block_size": 4096, 00:36:45.652 "num_blocks": 8192, 00:36:45.652 "uuid": "51912406-3284-4639-a67f-fc76a0337d8e", 00:36:45.652 "md_size": 32, 00:36:45.652 "md_interleave": false, 00:36:45.653 "dif_type": 0, 00:36:45.653 "assigned_rate_limits": { 00:36:45.653 "rw_ios_per_sec": 0, 00:36:45.653 "rw_mbytes_per_sec": 0, 00:36:45.653 "r_mbytes_per_sec": 0, 00:36:45.653 "w_mbytes_per_sec": 0 00:36:45.653 }, 00:36:45.653 "claimed": true, 00:36:45.653 "claim_type": "exclusive_write", 00:36:45.653 "zoned": false, 00:36:45.653 "supported_io_types": { 00:36:45.653 "read": true, 00:36:45.653 "write": true, 00:36:45.653 "unmap": true, 00:36:45.653 "flush": true, 00:36:45.653 "reset": true, 00:36:45.653 "nvme_admin": false, 00:36:45.653 "nvme_io": false, 00:36:45.653 "nvme_io_md": false, 00:36:45.653 "write_zeroes": true, 00:36:45.653 "zcopy": true, 00:36:45.653 "get_zone_info": false, 00:36:45.653 "zone_management": false, 00:36:45.653 "zone_append": false, 00:36:45.653 "compare": false, 00:36:45.653 "compare_and_write": false, 00:36:45.653 "abort": true, 00:36:45.653 "seek_hole": false, 00:36:45.653 "seek_data": false, 00:36:45.653 "copy": true, 00:36:45.653 "nvme_iov_md": false 00:36:45.653 }, 00:36:45.653 "memory_domains": [ 00:36:45.653 { 00:36:45.653 "dma_device_id": "system", 00:36:45.653 "dma_device_type": 1 00:36:45.653 }, 00:36:45.653 { 00:36:45.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:45.653 "dma_device_type": 2 00:36:45.653 } 00:36:45.653 ], 00:36:45.653 "driver_specific": {} 00:36:45.653 }' 00:36:45.653 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:45.653 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:45.653 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:45.653 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:45.912 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:45.912 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:45.912 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:45.912 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 
-- # jq .md_interleave 00:36:45.912 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:36:45.912 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:45.912 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:46.171 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:46.171 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:36:46.431 [2024-07-25 19:03:46.767655] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:46.431 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:36:46.431 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:36:46.431 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:36:46.431 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:36:46.431 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:36:46.431 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:36:46.431 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:46.431 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:46.431 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:46.431 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:46.431 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:46.431 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:46.431 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:46.431 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:46.431 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:46.431 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:46.431 19:03:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:46.690 19:03:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:46.690 "name": "Existed_Raid", 00:36:46.690 "uuid": "b946c800-2e8d-4d80-8c92-06d5d7ed2558", 00:36:46.690 "strip_size_kb": 0, 00:36:46.690 "state": "online", 00:36:46.690 "raid_level": "raid1", 00:36:46.690 "superblock": true, 00:36:46.690 "num_base_bdevs": 2, 00:36:46.690 "num_base_bdevs_discovered": 1, 00:36:46.690 "num_base_bdevs_operational": 1, 00:36:46.690 "base_bdevs_list": [ 
00:36:46.690 { 00:36:46.690 "name": null, 00:36:46.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:46.690 "is_configured": false, 00:36:46.690 "data_offset": 256, 00:36:46.690 "data_size": 7936 00:36:46.690 }, 00:36:46.690 { 00:36:46.690 "name": "BaseBdev2", 00:36:46.690 "uuid": "51912406-3284-4639-a67f-fc76a0337d8e", 00:36:46.690 "is_configured": true, 00:36:46.690 "data_offset": 256, 00:36:46.690 "data_size": 7936 00:36:46.690 } 00:36:46.690 ] 00:36:46.690 }' 00:36:46.690 19:03:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:46.690 19:03:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:47.261 19:03:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:36:47.261 19:03:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:47.261 19:03:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:47.261 19:03:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:36:47.520 19:03:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:36:47.520 19:03:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:47.520 19:03:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:36:47.780 [2024-07-25 19:03:48.150835] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:47.780 [2024-07-25 19:03:48.150961] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:47.780 [2024-07-25 19:03:48.237791] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:47.780 [2024-07-25 19:03:48.237843] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:47.780 [2024-07-25 19:03:48.237852] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:36:47.780 19:03:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:36:47.781 19:03:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:47.781 19:03:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:47.781 19:03:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:36:48.040 19:03:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:36:48.040 19:03:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:36:48.040 19:03:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:36:48.040 19:03:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 159598 00:36:48.040 19:03:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # 
'[' -z 159598 ']' 00:36:48.040 19:03:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 159598 00:36:48.040 19:03:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:36:48.040 19:03:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:48.040 19:03:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 159598 00:36:48.040 killing process with pid 159598 00:36:48.040 19:03:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:48.040 19:03:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:48.040 19:03:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 159598' 00:36:48.040 19:03:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 159598 00:36:48.040 19:03:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 159598 00:36:48.040 [2024-07-25 19:03:48.565211] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:48.040 [2024-07-25 19:03:48.565312] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:49.419 ************************************ 00:36:49.419 END TEST raid_state_function_test_sb_md_separate 00:36:49.419 ************************************ 00:36:49.419 19:03:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:36:49.419 00:36:49.419 real 0m10.933s 00:36:49.419 user 0m18.353s 00:36:49.419 sys 0m1.862s 00:36:49.419 19:03:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:49.419 19:03:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:49.419 19:03:49 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:36:49.419 19:03:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:49.419 19:03:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:49.419 19:03:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:49.419 ************************************ 00:36:49.419 START TEST raid_superblock_test_md_separate 00:36:49.419 ************************************ 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 
00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@414 -- # local strip_size 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@427 -- # raid_pid=159962 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@428 -- # waitforlisten 159962 /var/tmp/spdk-raid.sock 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 159962 ']' 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:49.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:49.419 19:03:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:49.419 [2024-07-25 19:03:49.916931] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:36:49.419 [2024-07-25 19:03:49.917152] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159962 ] 00:36:49.678 [2024-07-25 19:03:50.096000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:49.937 [2024-07-25 19:03:50.295393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:49.937 [2024-07-25 19:03:50.486766] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:50.196 19:03:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:50.196 19:03:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:36:50.196 19:03:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:36:50.196 19:03:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:36:50.196 19:03:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:36:50.196 19:03:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:36:50.196 19:03:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:36:50.196 19:03:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:50.196 19:03:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:36:50.196 19:03:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:50.196 19:03:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:36:50.765 malloc1 00:36:50.765 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:50.765 [2024-07-25 19:03:51.303499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:50.765 [2024-07-25 19:03:51.303623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:50.765 [2024-07-25 19:03:51.303667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:36:50.765 [2024-07-25 19:03:51.303702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:50.765 [2024-07-25 19:03:51.306187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:50.765 [2024-07-25 19:03:51.306250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:50.765 pt1 00:36:50.765 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:36:50.765 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:36:50.765 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:36:50.765 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:36:50.765 
19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:36:50.765 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:50.765 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:36:50.765 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:50.765 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:36:51.024 malloc2 00:36:51.024 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:51.283 [2024-07-25 19:03:51.716813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:51.283 [2024-07-25 19:03:51.716926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:51.283 [2024-07-25 19:03:51.716960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:36:51.283 [2024-07-25 19:03:51.716980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:51.283 [2024-07-25 19:03:51.719335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:51.283 [2024-07-25 19:03:51.719380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:51.283 pt2 00:36:51.283 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:36:51.283 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:36:51.283 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:36:51.543 [2024-07-25 19:03:51.888923] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:51.543 [2024-07-25 19:03:51.891288] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:51.543 [2024-07-25 19:03:51.891462] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:36:51.543 [2024-07-25 19:03:51.891472] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:51.543 [2024-07-25 19:03:51.891601] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:36:51.543 [2024-07-25 19:03:51.891734] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:36:51.543 [2024-07-25 19:03:51.891743] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:36:51.543 [2024-07-25 19:03:51.891840] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:51.543 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:51.543 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:51.543 19:03:51 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:51.543 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:51.543 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:51.543 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:51.543 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:51.543 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:51.543 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:51.543 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:51.543 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:51.543 19:03:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:51.802 19:03:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:51.802 "name": "raid_bdev1", 00:36:51.802 "uuid": "b01b3805-63bb-49f8-81df-b4cccd4aa8c1", 00:36:51.802 "strip_size_kb": 0, 00:36:51.802 "state": "online", 00:36:51.802 "raid_level": "raid1", 00:36:51.802 "superblock": true, 00:36:51.802 "num_base_bdevs": 2, 00:36:51.802 "num_base_bdevs_discovered": 2, 00:36:51.802 "num_base_bdevs_operational": 2, 00:36:51.802 "base_bdevs_list": [ 00:36:51.802 { 00:36:51.802 "name": "pt1", 00:36:51.802 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:51.802 "is_configured": true, 00:36:51.802 "data_offset": 256, 00:36:51.802 "data_size": 7936 00:36:51.802 }, 00:36:51.802 { 00:36:51.802 "name": "pt2", 00:36:51.802 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:51.802 "is_configured": true, 00:36:51.802 "data_offset": 256, 00:36:51.802 "data_size": 7936 00:36:51.802 } 00:36:51.802 ] 00:36:51.802 }' 00:36:51.802 19:03:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:51.802 19:03:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:52.061 19:03:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:36:52.061 19:03:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:36:52.061 19:03:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:36:52.061 19:03:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:36:52.061 19:03:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:36:52.061 19:03:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:36:52.061 19:03:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:36:52.061 19:03:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:52.320 [2024-07-25 19:03:52.797250] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:52.320 
19:03:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:36:52.320 "name": "raid_bdev1", 00:36:52.320 "aliases": [ 00:36:52.320 "b01b3805-63bb-49f8-81df-b4cccd4aa8c1" 00:36:52.320 ], 00:36:52.320 "product_name": "Raid Volume", 00:36:52.320 "block_size": 4096, 00:36:52.320 "num_blocks": 7936, 00:36:52.320 "uuid": "b01b3805-63bb-49f8-81df-b4cccd4aa8c1", 00:36:52.320 "md_size": 32, 00:36:52.320 "md_interleave": false, 00:36:52.320 "dif_type": 0, 00:36:52.320 "assigned_rate_limits": { 00:36:52.320 "rw_ios_per_sec": 0, 00:36:52.320 "rw_mbytes_per_sec": 0, 00:36:52.320 "r_mbytes_per_sec": 0, 00:36:52.320 "w_mbytes_per_sec": 0 00:36:52.320 }, 00:36:52.320 "claimed": false, 00:36:52.320 "zoned": false, 00:36:52.320 "supported_io_types": { 00:36:52.320 "read": true, 00:36:52.320 "write": true, 00:36:52.320 "unmap": false, 00:36:52.320 "flush": false, 00:36:52.320 "reset": true, 00:36:52.320 "nvme_admin": false, 00:36:52.320 "nvme_io": false, 00:36:52.320 "nvme_io_md": false, 00:36:52.320 "write_zeroes": true, 00:36:52.320 "zcopy": false, 00:36:52.320 "get_zone_info": false, 00:36:52.320 "zone_management": false, 00:36:52.320 "zone_append": false, 00:36:52.320 "compare": false, 00:36:52.320 "compare_and_write": false, 00:36:52.320 "abort": false, 00:36:52.320 "seek_hole": false, 00:36:52.320 "seek_data": false, 00:36:52.320 "copy": false, 00:36:52.320 "nvme_iov_md": false 00:36:52.320 }, 00:36:52.320 "memory_domains": [ 00:36:52.320 { 00:36:52.320 "dma_device_id": "system", 00:36:52.320 "dma_device_type": 1 00:36:52.320 }, 00:36:52.320 { 00:36:52.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:52.320 "dma_device_type": 2 00:36:52.320 }, 00:36:52.320 { 00:36:52.320 "dma_device_id": "system", 00:36:52.320 "dma_device_type": 1 00:36:52.320 }, 00:36:52.320 { 00:36:52.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:52.320 "dma_device_type": 2 00:36:52.320 } 00:36:52.320 ], 00:36:52.320 "driver_specific": { 00:36:52.320 "raid": { 00:36:52.320 "uuid": "b01b3805-63bb-49f8-81df-b4cccd4aa8c1", 00:36:52.320 "strip_size_kb": 0, 00:36:52.320 "state": "online", 00:36:52.320 "raid_level": "raid1", 00:36:52.320 "superblock": true, 00:36:52.320 "num_base_bdevs": 2, 00:36:52.320 "num_base_bdevs_discovered": 2, 00:36:52.320 "num_base_bdevs_operational": 2, 00:36:52.320 "base_bdevs_list": [ 00:36:52.320 { 00:36:52.320 "name": "pt1", 00:36:52.320 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:52.320 "is_configured": true, 00:36:52.320 "data_offset": 256, 00:36:52.320 "data_size": 7936 00:36:52.320 }, 00:36:52.320 { 00:36:52.320 "name": "pt2", 00:36:52.320 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:52.320 "is_configured": true, 00:36:52.320 "data_offset": 256, 00:36:52.320 "data_size": 7936 00:36:52.320 } 00:36:52.320 ] 00:36:52.320 } 00:36:52.320 } 00:36:52.320 }' 00:36:52.320 19:03:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:52.320 19:03:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:36:52.320 pt2' 00:36:52.320 19:03:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:52.320 19:03:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:36:52.320 19:03:52 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:52.579 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:52.580 "name": "pt1", 00:36:52.580 "aliases": [ 00:36:52.580 "00000000-0000-0000-0000-000000000001" 00:36:52.580 ], 00:36:52.580 "product_name": "passthru", 00:36:52.580 "block_size": 4096, 00:36:52.580 "num_blocks": 8192, 00:36:52.580 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:52.580 "md_size": 32, 00:36:52.580 "md_interleave": false, 00:36:52.580 "dif_type": 0, 00:36:52.580 "assigned_rate_limits": { 00:36:52.580 "rw_ios_per_sec": 0, 00:36:52.580 "rw_mbytes_per_sec": 0, 00:36:52.580 "r_mbytes_per_sec": 0, 00:36:52.580 "w_mbytes_per_sec": 0 00:36:52.580 }, 00:36:52.580 "claimed": true, 00:36:52.580 "claim_type": "exclusive_write", 00:36:52.580 "zoned": false, 00:36:52.580 "supported_io_types": { 00:36:52.580 "read": true, 00:36:52.580 "write": true, 00:36:52.580 "unmap": true, 00:36:52.580 "flush": true, 00:36:52.580 "reset": true, 00:36:52.580 "nvme_admin": false, 00:36:52.580 "nvme_io": false, 00:36:52.580 "nvme_io_md": false, 00:36:52.580 "write_zeroes": true, 00:36:52.580 "zcopy": true, 00:36:52.580 "get_zone_info": false, 00:36:52.580 "zone_management": false, 00:36:52.580 "zone_append": false, 00:36:52.580 "compare": false, 00:36:52.580 "compare_and_write": false, 00:36:52.580 "abort": true, 00:36:52.580 "seek_hole": false, 00:36:52.580 "seek_data": false, 00:36:52.580 "copy": true, 00:36:52.580 "nvme_iov_md": false 00:36:52.580 }, 00:36:52.580 "memory_domains": [ 00:36:52.580 { 00:36:52.580 "dma_device_id": "system", 00:36:52.580 "dma_device_type": 1 00:36:52.580 }, 00:36:52.580 { 00:36:52.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:52.580 "dma_device_type": 2 00:36:52.580 } 00:36:52.580 ], 00:36:52.580 "driver_specific": { 00:36:52.580 "passthru": { 00:36:52.580 "name": "pt1", 00:36:52.580 "base_bdev_name": "malloc1" 00:36:52.580 } 00:36:52.580 } 00:36:52.580 }' 00:36:52.580 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:52.580 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:52.580 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:52.580 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:52.580 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:52.839 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:52.839 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:52.839 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:52.839 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:36:52.839 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:52.839 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:52.839 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:52.839 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:52.839 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:36:52.839 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:53.098 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:53.098 "name": "pt2", 00:36:53.098 "aliases": [ 00:36:53.098 "00000000-0000-0000-0000-000000000002" 00:36:53.098 ], 00:36:53.098 "product_name": "passthru", 00:36:53.098 "block_size": 4096, 00:36:53.098 "num_blocks": 8192, 00:36:53.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:53.098 "md_size": 32, 00:36:53.098 "md_interleave": false, 00:36:53.098 "dif_type": 0, 00:36:53.098 "assigned_rate_limits": { 00:36:53.098 "rw_ios_per_sec": 0, 00:36:53.098 "rw_mbytes_per_sec": 0, 00:36:53.098 "r_mbytes_per_sec": 0, 00:36:53.098 "w_mbytes_per_sec": 0 00:36:53.098 }, 00:36:53.098 "claimed": true, 00:36:53.098 "claim_type": "exclusive_write", 00:36:53.098 "zoned": false, 00:36:53.098 "supported_io_types": { 00:36:53.098 "read": true, 00:36:53.098 "write": true, 00:36:53.098 "unmap": true, 00:36:53.098 "flush": true, 00:36:53.098 "reset": true, 00:36:53.098 "nvme_admin": false, 00:36:53.098 "nvme_io": false, 00:36:53.098 "nvme_io_md": false, 00:36:53.098 "write_zeroes": true, 00:36:53.098 "zcopy": true, 00:36:53.098 "get_zone_info": false, 00:36:53.098 "zone_management": false, 00:36:53.098 "zone_append": false, 00:36:53.098 "compare": false, 00:36:53.098 "compare_and_write": false, 00:36:53.098 "abort": true, 00:36:53.098 "seek_hole": false, 00:36:53.098 "seek_data": false, 00:36:53.098 "copy": true, 00:36:53.098 "nvme_iov_md": false 00:36:53.098 }, 00:36:53.098 "memory_domains": [ 00:36:53.098 { 00:36:53.098 "dma_device_id": "system", 00:36:53.098 "dma_device_type": 1 00:36:53.098 }, 00:36:53.098 { 00:36:53.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:53.098 "dma_device_type": 2 00:36:53.098 } 00:36:53.098 ], 00:36:53.098 "driver_specific": { 00:36:53.098 "passthru": { 00:36:53.098 "name": "pt2", 00:36:53.098 "base_bdev_name": "malloc2" 00:36:53.098 } 00:36:53.098 } 00:36:53.098 }' 00:36:53.098 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:53.098 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:53.098 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:53.098 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:53.098 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:53.098 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:53.098 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:53.356 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:53.356 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:36:53.356 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:53.356 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:53.356 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:53.356 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:53.356 19:03:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:36:53.615 [2024-07-25 19:03:54.081459] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:53.615 19:03:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=b01b3805-63bb-49f8-81df-b4cccd4aa8c1 00:36:53.615 19:03:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' -z b01b3805-63bb-49f8-81df-b4cccd4aa8c1 ']' 00:36:53.615 19:03:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:53.873 [2024-07-25 19:03:54.345307] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:53.873 [2024-07-25 19:03:54.345330] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:53.873 [2024-07-25 19:03:54.345400] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:53.873 [2024-07-25 19:03:54.345457] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:53.873 [2024-07-25 19:03:54.345466] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:36:53.873 19:03:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:53.873 19:03:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:36:54.131 19:03:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:36:54.131 19:03:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:36:54.132 19:03:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:36:54.132 19:03:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:36:54.132 19:03:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:36:54.132 19:03:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:36:54.701 19:03:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:36:54.701 19:03:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:36:54.702 19:03:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:36:54.702 19:03:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:36:54.702 19:03:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:36:54.702 19:03:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:36:54.702 19:03:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:54.702 19:03:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:54.702 19:03:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:54.702 19:03:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:54.702 19:03:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:54.702 19:03:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:54.702 19:03:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:54.702 19:03:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:36:54.702 19:03:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:36:54.960 [2024-07-25 19:03:55.405463] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:36:54.960 [2024-07-25 19:03:55.407737] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:36:54.960 [2024-07-25 19:03:55.407803] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:36:54.960 [2024-07-25 19:03:55.407889] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:36:54.960 [2024-07-25 19:03:55.407924] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:54.960 [2024-07-25 19:03:55.407932] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state configuring 00:36:54.960 request: 00:36:54.960 { 00:36:54.960 "name": "raid_bdev1", 00:36:54.960 "raid_level": "raid1", 00:36:54.960 "base_bdevs": [ 00:36:54.960 "malloc1", 00:36:54.960 "malloc2" 00:36:54.960 ], 00:36:54.960 "superblock": false, 00:36:54.960 "method": "bdev_raid_create", 00:36:54.960 "req_id": 1 00:36:54.960 } 00:36:54.960 Got JSON-RPC error response 00:36:54.960 response: 00:36:54.960 { 00:36:54.960 "code": -17, 00:36:54.960 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:36:54.960 } 00:36:54.960 19:03:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:36:54.960 19:03:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:54.960 19:03:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:54.960 19:03:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:54.960 19:03:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:36:54.960 19:03:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:36:55.218 19:03:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:36:55.218 19:03:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:36:55.219 19:03:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:55.477 [2024-07-25 19:03:55.821495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:55.477 [2024-07-25 19:03:55.821552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:55.477 [2024-07-25 19:03:55.821601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:36:55.477 [2024-07-25 19:03:55.821627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:55.477 [2024-07-25 19:03:55.823912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:55.477 [2024-07-25 19:03:55.823992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:55.477 [2024-07-25 19:03:55.824084] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:36:55.477 [2024-07-25 19:03:55.824133] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:55.477 pt1 00:36:55.477 19:03:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:36:55.477 19:03:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:55.477 19:03:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:55.477 19:03:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:55.477 19:03:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:55.477 19:03:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:55.477 19:03:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:55.477 19:03:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:55.477 19:03:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:55.477 19:03:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:55.477 19:03:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:55.478 19:03:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:55.478 19:03:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:55.478 "name": "raid_bdev1", 00:36:55.478 "uuid": "b01b3805-63bb-49f8-81df-b4cccd4aa8c1", 00:36:55.478 "strip_size_kb": 0, 00:36:55.478 "state": "configuring", 00:36:55.478 "raid_level": "raid1", 00:36:55.478 "superblock": true, 00:36:55.478 "num_base_bdevs": 2, 00:36:55.478 "num_base_bdevs_discovered": 1, 00:36:55.478 
"num_base_bdevs_operational": 2, 00:36:55.478 "base_bdevs_list": [ 00:36:55.478 { 00:36:55.478 "name": "pt1", 00:36:55.478 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:55.478 "is_configured": true, 00:36:55.478 "data_offset": 256, 00:36:55.478 "data_size": 7936 00:36:55.478 }, 00:36:55.478 { 00:36:55.478 "name": null, 00:36:55.478 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:55.478 "is_configured": false, 00:36:55.478 "data_offset": 256, 00:36:55.478 "data_size": 7936 00:36:55.478 } 00:36:55.478 ] 00:36:55.478 }' 00:36:55.478 19:03:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:55.478 19:03:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:56.045 19:03:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:36:56.045 19:03:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:36:56.045 19:03:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:36:56.045 19:03:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:56.306 [2024-07-25 19:03:56.757653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:56.306 [2024-07-25 19:03:56.757731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:56.306 [2024-07-25 19:03:56.757762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:36:56.306 [2024-07-25 19:03:56.757802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:56.306 [2024-07-25 19:03:56.758041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:56.306 [2024-07-25 19:03:56.758095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:56.306 [2024-07-25 19:03:56.758184] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:56.306 [2024-07-25 19:03:56.758203] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:56.306 [2024-07-25 19:03:56.758285] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:36:56.306 [2024-07-25 19:03:56.758294] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:56.306 [2024-07-25 19:03:56.758361] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:36:56.306 [2024-07-25 19:03:56.758449] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:36:56.306 [2024-07-25 19:03:56.758466] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:36:56.306 [2024-07-25 19:03:56.758546] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:56.306 pt2 00:36:56.306 19:03:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:36:56.306 19:03:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:36:56.306 19:03:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:56.306 19:03:56 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:56.306 19:03:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:56.306 19:03:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:56.306 19:03:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:56.306 19:03:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:56.306 19:03:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:56.306 19:03:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:56.306 19:03:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:56.306 19:03:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:56.306 19:03:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:56.306 19:03:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:56.584 19:03:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:56.584 "name": "raid_bdev1", 00:36:56.584 "uuid": "b01b3805-63bb-49f8-81df-b4cccd4aa8c1", 00:36:56.584 "strip_size_kb": 0, 00:36:56.584 "state": "online", 00:36:56.584 "raid_level": "raid1", 00:36:56.584 "superblock": true, 00:36:56.584 "num_base_bdevs": 2, 00:36:56.584 "num_base_bdevs_discovered": 2, 00:36:56.584 "num_base_bdevs_operational": 2, 00:36:56.584 "base_bdevs_list": [ 00:36:56.584 { 00:36:56.584 "name": "pt1", 00:36:56.584 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:56.584 "is_configured": true, 00:36:56.584 "data_offset": 256, 00:36:56.584 "data_size": 7936 00:36:56.584 }, 00:36:56.584 { 00:36:56.584 "name": "pt2", 00:36:56.584 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:56.584 "is_configured": true, 00:36:56.584 "data_offset": 256, 00:36:56.584 "data_size": 7936 00:36:56.584 } 00:36:56.584 ] 00:36:56.584 }' 00:36:56.584 19:03:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:56.584 19:03:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:57.166 19:03:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:36:57.166 19:03:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:36:57.166 19:03:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:36:57.166 19:03:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:36:57.166 19:03:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:36:57.166 19:03:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:36:57.166 19:03:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:57.166 19:03:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
00:36:57.424 [2024-07-25 19:03:57.754035] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:57.424 19:03:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:36:57.424 "name": "raid_bdev1", 00:36:57.424 "aliases": [ 00:36:57.424 "b01b3805-63bb-49f8-81df-b4cccd4aa8c1" 00:36:57.424 ], 00:36:57.424 "product_name": "Raid Volume", 00:36:57.424 "block_size": 4096, 00:36:57.424 "num_blocks": 7936, 00:36:57.424 "uuid": "b01b3805-63bb-49f8-81df-b4cccd4aa8c1", 00:36:57.424 "md_size": 32, 00:36:57.424 "md_interleave": false, 00:36:57.424 "dif_type": 0, 00:36:57.424 "assigned_rate_limits": { 00:36:57.424 "rw_ios_per_sec": 0, 00:36:57.424 "rw_mbytes_per_sec": 0, 00:36:57.424 "r_mbytes_per_sec": 0, 00:36:57.424 "w_mbytes_per_sec": 0 00:36:57.424 }, 00:36:57.424 "claimed": false, 00:36:57.424 "zoned": false, 00:36:57.424 "supported_io_types": { 00:36:57.424 "read": true, 00:36:57.424 "write": true, 00:36:57.424 "unmap": false, 00:36:57.424 "flush": false, 00:36:57.424 "reset": true, 00:36:57.424 "nvme_admin": false, 00:36:57.424 "nvme_io": false, 00:36:57.424 "nvme_io_md": false, 00:36:57.424 "write_zeroes": true, 00:36:57.424 "zcopy": false, 00:36:57.424 "get_zone_info": false, 00:36:57.424 "zone_management": false, 00:36:57.424 "zone_append": false, 00:36:57.424 "compare": false, 00:36:57.424 "compare_and_write": false, 00:36:57.424 "abort": false, 00:36:57.424 "seek_hole": false, 00:36:57.424 "seek_data": false, 00:36:57.424 "copy": false, 00:36:57.424 "nvme_iov_md": false 00:36:57.424 }, 00:36:57.424 "memory_domains": [ 00:36:57.424 { 00:36:57.424 "dma_device_id": "system", 00:36:57.424 "dma_device_type": 1 00:36:57.424 }, 00:36:57.424 { 00:36:57.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:57.424 "dma_device_type": 2 00:36:57.424 }, 00:36:57.424 { 00:36:57.424 "dma_device_id": "system", 00:36:57.424 "dma_device_type": 1 00:36:57.424 }, 00:36:57.424 { 00:36:57.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:57.424 "dma_device_type": 2 00:36:57.424 } 00:36:57.424 ], 00:36:57.424 "driver_specific": { 00:36:57.424 "raid": { 00:36:57.424 "uuid": "b01b3805-63bb-49f8-81df-b4cccd4aa8c1", 00:36:57.424 "strip_size_kb": 0, 00:36:57.424 "state": "online", 00:36:57.424 "raid_level": "raid1", 00:36:57.424 "superblock": true, 00:36:57.424 "num_base_bdevs": 2, 00:36:57.424 "num_base_bdevs_discovered": 2, 00:36:57.424 "num_base_bdevs_operational": 2, 00:36:57.424 "base_bdevs_list": [ 00:36:57.424 { 00:36:57.424 "name": "pt1", 00:36:57.424 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:57.424 "is_configured": true, 00:36:57.424 "data_offset": 256, 00:36:57.424 "data_size": 7936 00:36:57.424 }, 00:36:57.424 { 00:36:57.424 "name": "pt2", 00:36:57.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:57.424 "is_configured": true, 00:36:57.424 "data_offset": 256, 00:36:57.424 "data_size": 7936 00:36:57.424 } 00:36:57.424 ] 00:36:57.424 } 00:36:57.424 } 00:36:57.424 }' 00:36:57.424 19:03:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:57.424 19:03:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:36:57.424 pt2' 00:36:57.424 19:03:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:57.424 19:03:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:36:57.424 19:03:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:57.424 19:03:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:57.424 "name": "pt1", 00:36:57.424 "aliases": [ 00:36:57.424 "00000000-0000-0000-0000-000000000001" 00:36:57.424 ], 00:36:57.424 "product_name": "passthru", 00:36:57.424 "block_size": 4096, 00:36:57.424 "num_blocks": 8192, 00:36:57.424 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:57.424 "md_size": 32, 00:36:57.424 "md_interleave": false, 00:36:57.424 "dif_type": 0, 00:36:57.424 "assigned_rate_limits": { 00:36:57.424 "rw_ios_per_sec": 0, 00:36:57.424 "rw_mbytes_per_sec": 0, 00:36:57.424 "r_mbytes_per_sec": 0, 00:36:57.424 "w_mbytes_per_sec": 0 00:36:57.424 }, 00:36:57.424 "claimed": true, 00:36:57.424 "claim_type": "exclusive_write", 00:36:57.425 "zoned": false, 00:36:57.425 "supported_io_types": { 00:36:57.425 "read": true, 00:36:57.425 "write": true, 00:36:57.425 "unmap": true, 00:36:57.425 "flush": true, 00:36:57.425 "reset": true, 00:36:57.425 "nvme_admin": false, 00:36:57.425 "nvme_io": false, 00:36:57.425 "nvme_io_md": false, 00:36:57.425 "write_zeroes": true, 00:36:57.425 "zcopy": true, 00:36:57.425 "get_zone_info": false, 00:36:57.425 "zone_management": false, 00:36:57.425 "zone_append": false, 00:36:57.425 "compare": false, 00:36:57.425 "compare_and_write": false, 00:36:57.425 "abort": true, 00:36:57.425 "seek_hole": false, 00:36:57.425 "seek_data": false, 00:36:57.425 "copy": true, 00:36:57.425 "nvme_iov_md": false 00:36:57.425 }, 00:36:57.425 "memory_domains": [ 00:36:57.425 { 00:36:57.425 "dma_device_id": "system", 00:36:57.425 "dma_device_type": 1 00:36:57.425 }, 00:36:57.425 { 00:36:57.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:57.425 "dma_device_type": 2 00:36:57.425 } 00:36:57.425 ], 00:36:57.425 "driver_specific": { 00:36:57.425 "passthru": { 00:36:57.425 "name": "pt1", 00:36:57.425 "base_bdev_name": "malloc1" 00:36:57.425 } 00:36:57.425 } 00:36:57.425 }' 00:36:57.425 19:03:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:57.683 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:57.683 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:57.683 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:57.683 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:57.683 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:57.683 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:57.683 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:57.683 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:36:57.683 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:57.942 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:57.942 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:57.942 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # 
for name in $base_bdev_names 00:36:57.942 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:36:57.942 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:58.201 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:58.201 "name": "pt2", 00:36:58.201 "aliases": [ 00:36:58.201 "00000000-0000-0000-0000-000000000002" 00:36:58.201 ], 00:36:58.201 "product_name": "passthru", 00:36:58.201 "block_size": 4096, 00:36:58.201 "num_blocks": 8192, 00:36:58.201 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:58.201 "md_size": 32, 00:36:58.201 "md_interleave": false, 00:36:58.201 "dif_type": 0, 00:36:58.201 "assigned_rate_limits": { 00:36:58.201 "rw_ios_per_sec": 0, 00:36:58.201 "rw_mbytes_per_sec": 0, 00:36:58.201 "r_mbytes_per_sec": 0, 00:36:58.201 "w_mbytes_per_sec": 0 00:36:58.201 }, 00:36:58.201 "claimed": true, 00:36:58.201 "claim_type": "exclusive_write", 00:36:58.201 "zoned": false, 00:36:58.201 "supported_io_types": { 00:36:58.201 "read": true, 00:36:58.201 "write": true, 00:36:58.201 "unmap": true, 00:36:58.201 "flush": true, 00:36:58.201 "reset": true, 00:36:58.201 "nvme_admin": false, 00:36:58.201 "nvme_io": false, 00:36:58.201 "nvme_io_md": false, 00:36:58.201 "write_zeroes": true, 00:36:58.201 "zcopy": true, 00:36:58.201 "get_zone_info": false, 00:36:58.201 "zone_management": false, 00:36:58.201 "zone_append": false, 00:36:58.201 "compare": false, 00:36:58.201 "compare_and_write": false, 00:36:58.201 "abort": true, 00:36:58.201 "seek_hole": false, 00:36:58.201 "seek_data": false, 00:36:58.201 "copy": true, 00:36:58.201 "nvme_iov_md": false 00:36:58.201 }, 00:36:58.201 "memory_domains": [ 00:36:58.201 { 00:36:58.201 "dma_device_id": "system", 00:36:58.201 "dma_device_type": 1 00:36:58.201 }, 00:36:58.201 { 00:36:58.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:58.201 "dma_device_type": 2 00:36:58.201 } 00:36:58.201 ], 00:36:58.201 "driver_specific": { 00:36:58.201 "passthru": { 00:36:58.201 "name": "pt2", 00:36:58.201 "base_bdev_name": "malloc2" 00:36:58.201 } 00:36:58.201 } 00:36:58.201 }' 00:36:58.201 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:58.201 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:58.201 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:58.201 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:58.201 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:58.201 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:58.201 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:58.460 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:58.460 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:36:58.460 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:58.460 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:58.460 19:03:58 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:58.460 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:58.460 19:03:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:36:58.719 [2024-07-25 19:03:59.194594] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:58.719 19:03:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # '[' b01b3805-63bb-49f8-81df-b4cccd4aa8c1 '!=' b01b3805-63bb-49f8-81df-b4cccd4aa8c1 ']' 00:36:58.719 19:03:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:36:58.719 19:03:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:36:58.719 19:03:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:36:58.719 19:03:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:36:58.977 [2024-07-25 19:03:59.458510] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:36:58.977 19:03:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:58.977 19:03:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:58.977 19:03:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:58.977 19:03:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:58.977 19:03:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:58.977 19:03:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:58.977 19:03:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:58.977 19:03:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:58.977 19:03:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:58.977 19:03:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:58.977 19:03:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:58.977 19:03:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:59.236 19:03:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:59.236 "name": "raid_bdev1", 00:36:59.236 "uuid": "b01b3805-63bb-49f8-81df-b4cccd4aa8c1", 00:36:59.236 "strip_size_kb": 0, 00:36:59.236 "state": "online", 00:36:59.236 "raid_level": "raid1", 00:36:59.236 "superblock": true, 00:36:59.236 "num_base_bdevs": 2, 00:36:59.236 "num_base_bdevs_discovered": 1, 00:36:59.236 "num_base_bdevs_operational": 1, 00:36:59.236 "base_bdevs_list": [ 00:36:59.236 { 00:36:59.236 "name": null, 00:36:59.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:59.236 "is_configured": false, 00:36:59.236 "data_offset": 256, 00:36:59.236 "data_size": 7936 00:36:59.236 }, 
00:36:59.236 { 00:36:59.236 "name": "pt2", 00:36:59.236 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:59.236 "is_configured": true, 00:36:59.236 "data_offset": 256, 00:36:59.236 "data_size": 7936 00:36:59.236 } 00:36:59.236 ] 00:36:59.236 }' 00:36:59.236 19:03:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:59.236 19:03:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:59.803 19:04:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:00.061 [2024-07-25 19:04:00.406616] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:00.061 [2024-07-25 19:04:00.406640] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:00.061 [2024-07-25 19:04:00.406684] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:00.061 [2024-07-25 19:04:00.406722] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:00.061 [2024-07-25 19:04:00.406731] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:37:00.061 19:04:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:00.061 19:04:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:37:00.320 19:04:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:37:00.320 19:04:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:37:00.320 19:04:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:37:00.320 19:04:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:37:00.320 19:04:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:37:00.320 19:04:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:37:00.320 19:04:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:37:00.320 19:04:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:37:00.320 19:04:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:37:00.320 19:04:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@534 -- # i=1 00:37:00.320 19:04:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:00.578 [2024-07-25 19:04:01.030683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:00.578 [2024-07-25 19:04:01.030748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:00.578 [2024-07-25 19:04:01.030774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:37:00.578 [2024-07-25 19:04:01.030800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:37:00.578 [2024-07-25 19:04:01.033015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:00.578 [2024-07-25 19:04:01.033079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:00.578 [2024-07-25 19:04:01.033170] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:00.578 [2024-07-25 19:04:01.033210] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:00.578 [2024-07-25 19:04:01.033264] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:37:00.578 [2024-07-25 19:04:01.033272] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:00.578 [2024-07-25 19:04:01.033349] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:37:00.578 [2024-07-25 19:04:01.033425] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:37:00.578 [2024-07-25 19:04:01.033432] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013480 00:37:00.578 [2024-07-25 19:04:01.033496] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:00.578 pt2 00:37:00.578 19:04:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:00.578 19:04:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:00.578 19:04:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:00.579 19:04:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:00.579 19:04:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:00.579 19:04:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:00.579 19:04:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:00.579 19:04:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:00.579 19:04:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:00.579 19:04:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:00.579 19:04:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:00.579 19:04:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:00.837 19:04:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:00.837 "name": "raid_bdev1", 00:37:00.837 "uuid": "b01b3805-63bb-49f8-81df-b4cccd4aa8c1", 00:37:00.837 "strip_size_kb": 0, 00:37:00.837 "state": "online", 00:37:00.837 "raid_level": "raid1", 00:37:00.837 "superblock": true, 00:37:00.837 "num_base_bdevs": 2, 00:37:00.837 "num_base_bdevs_discovered": 1, 00:37:00.837 "num_base_bdevs_operational": 1, 00:37:00.837 "base_bdevs_list": [ 00:37:00.837 { 00:37:00.837 "name": null, 00:37:00.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:00.837 "is_configured": false, 00:37:00.837 "data_offset": 256, 00:37:00.837 "data_size": 7936 00:37:00.837 }, 
00:37:00.837 { 00:37:00.837 "name": "pt2", 00:37:00.837 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:00.837 "is_configured": true, 00:37:00.837 "data_offset": 256, 00:37:00.837 "data_size": 7936 00:37:00.837 } 00:37:00.837 ] 00:37:00.837 }' 00:37:00.837 19:04:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:00.837 19:04:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:01.404 19:04:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:01.664 [2024-07-25 19:04:02.082826] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:01.664 [2024-07-25 19:04:02.082854] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:01.664 [2024-07-25 19:04:02.082888] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:01.664 [2024-07-25 19:04:02.082916] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:01.664 [2024-07-25 19:04:02.082923] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name raid_bdev1, state offline 00:37:01.664 19:04:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:01.664 19:04:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:37:01.923 19:04:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:37:01.923 19:04:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:37:01.923 19:04:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:37:01.923 19:04:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:01.923 [2024-07-25 19:04:02.498900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:01.923 [2024-07-25 19:04:02.498949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:01.923 [2024-07-25 19:04:02.498982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:37:01.923 [2024-07-25 19:04:02.499002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:01.923 [2024-07-25 19:04:02.501143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:01.923 [2024-07-25 19:04:02.501207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:01.923 [2024-07-25 19:04:02.501286] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:01.923 [2024-07-25 19:04:02.501317] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:01.923 [2024-07-25 19:04:02.501388] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:37:01.923 [2024-07-25 19:04:02.501397] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:01.923 [2024-07-25 19:04:02.501413] bdev_raid.c: 378:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000013800 name raid_bdev1, state configuring 00:37:01.923 [2024-07-25 19:04:02.501452] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:01.923 [2024-07-25 19:04:02.501494] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013b80 00:37:01.923 [2024-07-25 19:04:02.501501] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:01.923 [2024-07-25 19:04:02.501569] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:37:01.923 [2024-07-25 19:04:02.501635] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013b80 00:37:01.923 [2024-07-25 19:04:02.501642] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013b80 00:37:01.923 [2024-07-25 19:04:02.501710] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:01.923 pt1 00:37:02.182 19:04:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:37:02.182 19:04:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:02.182 19:04:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:02.182 19:04:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:02.182 19:04:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:02.182 19:04:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:02.182 19:04:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:02.182 19:04:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:02.182 19:04:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:02.182 19:04:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:02.182 19:04:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:02.182 19:04:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:02.182 19:04:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:02.441 19:04:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:02.441 "name": "raid_bdev1", 00:37:02.441 "uuid": "b01b3805-63bb-49f8-81df-b4cccd4aa8c1", 00:37:02.441 "strip_size_kb": 0, 00:37:02.441 "state": "online", 00:37:02.441 "raid_level": "raid1", 00:37:02.441 "superblock": true, 00:37:02.441 "num_base_bdevs": 2, 00:37:02.441 "num_base_bdevs_discovered": 1, 00:37:02.441 "num_base_bdevs_operational": 1, 00:37:02.441 "base_bdevs_list": [ 00:37:02.441 { 00:37:02.441 "name": null, 00:37:02.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:02.441 "is_configured": false, 00:37:02.441 "data_offset": 256, 00:37:02.441 "data_size": 7936 00:37:02.441 }, 00:37:02.441 { 00:37:02.441 "name": "pt2", 00:37:02.441 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:02.441 "is_configured": true, 00:37:02.441 "data_offset": 256, 
00:37:02.441 "data_size": 7936 00:37:02.441 } 00:37:02.441 ] 00:37:02.441 }' 00:37:02.441 19:04:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:02.441 19:04:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:03.009 19:04:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:37:03.009 19:04:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:37:03.009 19:04:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:37:03.009 19:04:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:03.009 19:04:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:37:03.267 [2024-07-25 19:04:03.719261] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:03.267 19:04:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # '[' b01b3805-63bb-49f8-81df-b4cccd4aa8c1 '!=' b01b3805-63bb-49f8-81df-b4cccd4aa8c1 ']' 00:37:03.267 19:04:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@578 -- # killprocess 159962 00:37:03.267 19:04:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 159962 ']' 00:37:03.267 19:04:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 159962 00:37:03.267 19:04:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:37:03.267 19:04:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:03.267 19:04:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 159962 00:37:03.267 19:04:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:03.267 19:04:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:03.267 19:04:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 159962' 00:37:03.267 killing process with pid 159962 00:37:03.267 19:04:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 159962 00:37:03.267 [2024-07-25 19:04:03.771586] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:03.267 [2024-07-25 19:04:03.771715] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:03.267 19:04:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 159962 00:37:03.267 [2024-07-25 19:04:03.771824] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:03.267 [2024-07-25 19:04:03.771923] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013b80 name raid_bdev1, state offline 00:37:03.526 [2024-07-25 19:04:03.951538] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:04.905 19:04:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@580 -- # return 0 00:37:04.905 00:37:04.905 real 0m15.285s 00:37:04.905 user 
0m26.745s 00:37:04.905 sys 0m2.609s 00:37:04.905 19:04:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:04.905 19:04:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:04.905 ************************************ 00:37:04.905 END TEST raid_superblock_test_md_separate 00:37:04.905 ************************************ 00:37:04.905 19:04:05 bdev_raid -- bdev/bdev_raid.sh@987 -- # '[' true = true ']' 00:37:04.905 19:04:05 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:37:04.905 19:04:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:37:04.905 19:04:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:04.905 19:04:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:04.905 ************************************ 00:37:04.905 START TEST raid_rebuild_test_sb_md_separate 00:37:04.905 ************************************ 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # local verify=true 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # echo BaseBdev1 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # echo BaseBdev2 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # local strip_size 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # local create_arg 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@594 -- # local data_offset 00:37:04.905 19:04:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # raid_pid=160470 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # waitforlisten 160470 /var/tmp/spdk-raid.sock 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 160470 ']' 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:04.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:04.905 19:04:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:04.906 [2024-07-25 19:04:05.296343] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:37:04.906 [2024-07-25 19:04:05.297334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160470 ] 00:37:04.906 I/O size of 3145728 is greater than zero copy threshold (65536). 00:37:04.906 Zero copy mechanism will not be used. 
00:37:04.906 [2024-07-25 19:04:05.475769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:05.164 [2024-07-25 19:04:05.716522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:05.424 [2024-07-25 19:04:05.965076] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:05.683 19:04:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:05.683 19:04:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:37:05.683 19:04:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:37:05.683 19:04:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:37:05.942 BaseBdev1_malloc 00:37:05.942 19:04:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:06.200 [2024-07-25 19:04:06.606700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:06.200 [2024-07-25 19:04:06.606967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:06.200 [2024-07-25 19:04:06.607046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:37:06.200 [2024-07-25 19:04:06.607158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:06.201 [2024-07-25 19:04:06.609507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:06.201 [2024-07-25 19:04:06.609683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:06.201 BaseBdev1 00:37:06.201 19:04:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:37:06.201 19:04:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:37:06.460 BaseBdev2_malloc 00:37:06.460 19:04:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:37:06.719 [2024-07-25 19:04:07.051731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:37:06.719 [2024-07-25 19:04:07.052095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:06.719 [2024-07-25 19:04:07.052256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:37:06.719 [2024-07-25 19:04:07.052350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:06.719 [2024-07-25 19:04:07.054699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:06.719 [2024-07-25 19:04:07.054857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:06.719 BaseBdev2 00:37:06.719 19:04:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:37:06.978 spare_malloc 00:37:06.978 19:04:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:37:06.978 spare_delay 00:37:06.978 19:04:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:07.237 [2024-07-25 19:04:07.708003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:07.237 [2024-07-25 19:04:07.708298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:07.237 [2024-07-25 19:04:07.708370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:37:07.237 [2024-07-25 19:04:07.708470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:07.237 [2024-07-25 19:04:07.710800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:07.237 [2024-07-25 19:04:07.710966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:07.237 spare 00:37:07.237 19:04:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:37:07.497 [2024-07-25 19:04:07.948082] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:07.497 [2024-07-25 19:04:07.950379] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:07.497 [2024-07-25 19:04:07.950704] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:37:07.497 [2024-07-25 19:04:07.950799] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:07.497 [2024-07-25 19:04:07.950964] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:37:07.497 [2024-07-25 19:04:07.951187] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:37:07.497 [2024-07-25 19:04:07.951227] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:37:07.497 [2024-07-25 19:04:07.951433] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:07.497 19:04:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:07.497 19:04:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:07.497 19:04:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:07.497 19:04:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:07.497 19:04:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:07.497 19:04:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:07.497 19:04:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:07.497 19:04:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:07.497 19:04:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:37:07.497 19:04:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:07.497 19:04:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:07.497 19:04:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:07.757 19:04:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:07.757 "name": "raid_bdev1", 00:37:07.757 "uuid": "c1d4fefd-aa0e-4fff-ad94-4c5452ded449", 00:37:07.757 "strip_size_kb": 0, 00:37:07.757 "state": "online", 00:37:07.757 "raid_level": "raid1", 00:37:07.757 "superblock": true, 00:37:07.757 "num_base_bdevs": 2, 00:37:07.757 "num_base_bdevs_discovered": 2, 00:37:07.757 "num_base_bdevs_operational": 2, 00:37:07.757 "base_bdevs_list": [ 00:37:07.757 { 00:37:07.757 "name": "BaseBdev1", 00:37:07.757 "uuid": "48c1fa4c-549a-50b7-b3d3-ce334e50f595", 00:37:07.757 "is_configured": true, 00:37:07.757 "data_offset": 256, 00:37:07.757 "data_size": 7936 00:37:07.757 }, 00:37:07.757 { 00:37:07.757 "name": "BaseBdev2", 00:37:07.757 "uuid": "63021e3d-e5ad-5de2-a04d-0e1567820a8d", 00:37:07.757 "is_configured": true, 00:37:07.757 "data_offset": 256, 00:37:07.757 "data_size": 7936 00:37:07.757 } 00:37:07.757 ] 00:37:07.757 }' 00:37:07.757 19:04:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:07.757 19:04:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:08.325 19:04:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:08.325 19:04:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:37:08.585 [2024-07-25 19:04:08.928389] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:08.585 19:04:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:37:08.585 19:04:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:37:08.585 19:04:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:08.845 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:37:08.845 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:37:08.845 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:37:08.845 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:37:08.845 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:37:08.845 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:08.845 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:37:08.845 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:08.845 19:04:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:37:08.845 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:08.845 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:37:08.845 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:08.845 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:08.845 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:37:09.105 [2024-07-25 19:04:09.428327] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:37:09.105 /dev/nbd0 00:37:09.105 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:09.105 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:09.105 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:37:09.105 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:37:09.105 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:37:09.105 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:37:09.105 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:37:09.105 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:37:09.105 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:37:09.105 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:37:09.105 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:09.105 1+0 records in 00:37:09.105 1+0 records out 00:37:09.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047201 s, 8.7 MB/s 00:37:09.105 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:09.105 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:37:09.105 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:09.105 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:37:09.105 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:37:09.105 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:09.105 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:09.105 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:37:09.105 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:37:09.105 19:04:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:37:10.043 7936+0 records in 00:37:10.043 7936+0 records out 00:37:10.043 32505856 bytes (33 MB, 31 MiB) copied, 0.784294 s, 41.4 MB/s 00:37:10.043 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:37:10.043 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:10.043 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:37:10.043 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:10.043 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:37:10.043 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:10.043 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:37:10.043 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:10.043 [2024-07-25 19:04:10.603550] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:10.043 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:10.043 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:10.043 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:10.043 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:10.043 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:10.043 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:37:10.043 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:37:10.043 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:37:10.302 [2024-07-25 19:04:10.807274] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:10.302 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:10.302 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:10.302 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:10.302 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:10.302 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:10.302 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:10.302 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:10.302 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:10.302 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:37:10.302 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:10.302 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:10.302 19:04:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:10.560 19:04:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:10.560 "name": "raid_bdev1", 00:37:10.560 "uuid": "c1d4fefd-aa0e-4fff-ad94-4c5452ded449", 00:37:10.560 "strip_size_kb": 0, 00:37:10.560 "state": "online", 00:37:10.560 "raid_level": "raid1", 00:37:10.560 "superblock": true, 00:37:10.560 "num_base_bdevs": 2, 00:37:10.560 "num_base_bdevs_discovered": 1, 00:37:10.560 "num_base_bdevs_operational": 1, 00:37:10.560 "base_bdevs_list": [ 00:37:10.560 { 00:37:10.560 "name": null, 00:37:10.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:10.560 "is_configured": false, 00:37:10.560 "data_offset": 256, 00:37:10.560 "data_size": 7936 00:37:10.560 }, 00:37:10.560 { 00:37:10.560 "name": "BaseBdev2", 00:37:10.560 "uuid": "63021e3d-e5ad-5de2-a04d-0e1567820a8d", 00:37:10.560 "is_configured": true, 00:37:10.560 "data_offset": 256, 00:37:10.560 "data_size": 7936 00:37:10.560 } 00:37:10.560 ] 00:37:10.560 }' 00:37:10.560 19:04:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:10.560 19:04:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:11.126 19:04:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:11.384 [2024-07-25 19:04:11.751391] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:11.384 [2024-07-25 19:04:11.769053] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018cff0 00:37:11.384 [2024-07-25 19:04:11.771065] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:11.384 19:04:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:37:12.320 19:04:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:12.320 19:04:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:12.320 19:04:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:12.320 19:04:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:12.320 19:04:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:12.320 19:04:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:12.320 19:04:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:12.578 19:04:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:12.578 "name": "raid_bdev1", 00:37:12.578 "uuid": "c1d4fefd-aa0e-4fff-ad94-4c5452ded449", 00:37:12.578 "strip_size_kb": 0, 
00:37:12.578 "state": "online", 00:37:12.578 "raid_level": "raid1", 00:37:12.578 "superblock": true, 00:37:12.578 "num_base_bdevs": 2, 00:37:12.578 "num_base_bdevs_discovered": 2, 00:37:12.578 "num_base_bdevs_operational": 2, 00:37:12.578 "process": { 00:37:12.578 "type": "rebuild", 00:37:12.578 "target": "spare", 00:37:12.578 "progress": { 00:37:12.578 "blocks": 3072, 00:37:12.578 "percent": 38 00:37:12.578 } 00:37:12.578 }, 00:37:12.578 "base_bdevs_list": [ 00:37:12.578 { 00:37:12.578 "name": "spare", 00:37:12.578 "uuid": "ac44608c-4a57-55f4-b408-5edaa01dbbe2", 00:37:12.578 "is_configured": true, 00:37:12.578 "data_offset": 256, 00:37:12.578 "data_size": 7936 00:37:12.578 }, 00:37:12.578 { 00:37:12.578 "name": "BaseBdev2", 00:37:12.578 "uuid": "63021e3d-e5ad-5de2-a04d-0e1567820a8d", 00:37:12.578 "is_configured": true, 00:37:12.578 "data_offset": 256, 00:37:12.578 "data_size": 7936 00:37:12.578 } 00:37:12.578 ] 00:37:12.578 }' 00:37:12.578 19:04:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:12.578 19:04:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:12.578 19:04:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:12.578 19:04:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:12.578 19:04:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:37:12.837 [2024-07-25 19:04:13.329161] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:12.837 [2024-07-25 19:04:13.380295] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:12.837 [2024-07-25 19:04:13.380478] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:12.837 [2024-07-25 19:04:13.380540] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:12.837 [2024-07-25 19:04:13.380616] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:12.837 19:04:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:12.837 19:04:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:12.837 19:04:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:12.837 19:04:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:12.837 19:04:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:12.837 19:04:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:12.837 19:04:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:12.837 19:04:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:12.837 19:04:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:12.837 19:04:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:13.095 19:04:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:13.095 19:04:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:13.095 19:04:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:13.095 "name": "raid_bdev1", 00:37:13.095 "uuid": "c1d4fefd-aa0e-4fff-ad94-4c5452ded449", 00:37:13.095 "strip_size_kb": 0, 00:37:13.095 "state": "online", 00:37:13.095 "raid_level": "raid1", 00:37:13.095 "superblock": true, 00:37:13.095 "num_base_bdevs": 2, 00:37:13.095 "num_base_bdevs_discovered": 1, 00:37:13.095 "num_base_bdevs_operational": 1, 00:37:13.095 "base_bdevs_list": [ 00:37:13.095 { 00:37:13.095 "name": null, 00:37:13.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:13.095 "is_configured": false, 00:37:13.095 "data_offset": 256, 00:37:13.095 "data_size": 7936 00:37:13.095 }, 00:37:13.095 { 00:37:13.095 "name": "BaseBdev2", 00:37:13.095 "uuid": "63021e3d-e5ad-5de2-a04d-0e1567820a8d", 00:37:13.095 "is_configured": true, 00:37:13.095 "data_offset": 256, 00:37:13.095 "data_size": 7936 00:37:13.095 } 00:37:13.095 ] 00:37:13.095 }' 00:37:13.095 19:04:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:13.095 19:04:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:13.660 19:04:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:13.660 19:04:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:13.660 19:04:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:13.660 19:04:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:13.660 19:04:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:13.660 19:04:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:13.660 19:04:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:13.918 19:04:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:13.918 "name": "raid_bdev1", 00:37:13.918 "uuid": "c1d4fefd-aa0e-4fff-ad94-4c5452ded449", 00:37:13.918 "strip_size_kb": 0, 00:37:13.918 "state": "online", 00:37:13.918 "raid_level": "raid1", 00:37:13.918 "superblock": true, 00:37:13.918 "num_base_bdevs": 2, 00:37:13.918 "num_base_bdevs_discovered": 1, 00:37:13.918 "num_base_bdevs_operational": 1, 00:37:13.918 "base_bdevs_list": [ 00:37:13.918 { 00:37:13.918 "name": null, 00:37:13.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:13.918 "is_configured": false, 00:37:13.918 "data_offset": 256, 00:37:13.918 "data_size": 7936 00:37:13.918 }, 00:37:13.918 { 00:37:13.918 "name": "BaseBdev2", 00:37:13.918 "uuid": "63021e3d-e5ad-5de2-a04d-0e1567820a8d", 00:37:13.918 "is_configured": true, 00:37:13.918 "data_offset": 256, 00:37:13.918 "data_size": 7936 00:37:13.918 } 00:37:13.918 ] 00:37:13.918 }' 00:37:13.918 19:04:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
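[editor's note] The verify_raid_bdev_process checks traced above reduce to one RPC call and two jq filters over its JSON output. A condensed sketch of the same query, using only the socket, bdev name and filters that appear in this run:

  raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1")')
  # while a rebuild is in flight these report "rebuild" and "spare"; otherwise "none"
  echo "$raid_bdev_info" | jq -r '.process.type // "none"'
  echo "$raid_bdev_info" | jq -r '.process.target // "none"'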
00:37:13.918 19:04:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:13.918 19:04:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:13.918 19:04:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:13.918 19:04:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:14.178 [2024-07-25 19:04:14.693302] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:14.178 [2024-07-25 19:04:14.709552] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:37:14.178 [2024-07-25 19:04:14.711642] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:14.178 19:04:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@678 -- # sleep 1 00:37:15.556 19:04:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:15.556 19:04:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:15.556 19:04:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:15.556 19:04:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:15.556 19:04:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:15.556 19:04:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:15.556 19:04:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:15.556 19:04:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:15.556 "name": "raid_bdev1", 00:37:15.556 "uuid": "c1d4fefd-aa0e-4fff-ad94-4c5452ded449", 00:37:15.556 "strip_size_kb": 0, 00:37:15.556 "state": "online", 00:37:15.556 "raid_level": "raid1", 00:37:15.556 "superblock": true, 00:37:15.556 "num_base_bdevs": 2, 00:37:15.556 "num_base_bdevs_discovered": 2, 00:37:15.556 "num_base_bdevs_operational": 2, 00:37:15.556 "process": { 00:37:15.556 "type": "rebuild", 00:37:15.556 "target": "spare", 00:37:15.556 "progress": { 00:37:15.556 "blocks": 3072, 00:37:15.556 "percent": 38 00:37:15.556 } 00:37:15.556 }, 00:37:15.556 "base_bdevs_list": [ 00:37:15.556 { 00:37:15.556 "name": "spare", 00:37:15.556 "uuid": "ac44608c-4a57-55f4-b408-5edaa01dbbe2", 00:37:15.556 "is_configured": true, 00:37:15.556 "data_offset": 256, 00:37:15.556 "data_size": 7936 00:37:15.556 }, 00:37:15.556 { 00:37:15.556 "name": "BaseBdev2", 00:37:15.556 "uuid": "63021e3d-e5ad-5de2-a04d-0e1567820a8d", 00:37:15.556 "is_configured": true, 00:37:15.556 "data_offset": 256, 00:37:15.556 "data_size": 7936 00:37:15.556 } 00:37:15.556 ] 00:37:15.556 }' 00:37:15.556 19:04:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:15.556 19:04:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:15.556 19:04:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:37:15.556 19:04:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:15.556 19:04:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:37:15.556 19:04:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:37:15.556 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:37:15.556 19:04:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:37:15.556 19:04:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:37:15.556 19:04:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:37:15.556 19:04:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # local timeout=1393 00:37:15.556 19:04:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:37:15.556 19:04:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:15.556 19:04:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:15.556 19:04:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:15.556 19:04:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:15.556 19:04:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:15.556 19:04:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:15.556 19:04:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:15.816 19:04:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:15.816 "name": "raid_bdev1", 00:37:15.816 "uuid": "c1d4fefd-aa0e-4fff-ad94-4c5452ded449", 00:37:15.816 "strip_size_kb": 0, 00:37:15.816 "state": "online", 00:37:15.816 "raid_level": "raid1", 00:37:15.816 "superblock": true, 00:37:15.816 "num_base_bdevs": 2, 00:37:15.816 "num_base_bdevs_discovered": 2, 00:37:15.816 "num_base_bdevs_operational": 2, 00:37:15.816 "process": { 00:37:15.816 "type": "rebuild", 00:37:15.816 "target": "spare", 00:37:15.816 "progress": { 00:37:15.816 "blocks": 3840, 00:37:15.816 "percent": 48 00:37:15.816 } 00:37:15.816 }, 00:37:15.816 "base_bdevs_list": [ 00:37:15.816 { 00:37:15.816 "name": "spare", 00:37:15.816 "uuid": "ac44608c-4a57-55f4-b408-5edaa01dbbe2", 00:37:15.816 "is_configured": true, 00:37:15.816 "data_offset": 256, 00:37:15.816 "data_size": 7936 00:37:15.816 }, 00:37:15.816 { 00:37:15.816 "name": "BaseBdev2", 00:37:15.816 "uuid": "63021e3d-e5ad-5de2-a04d-0e1567820a8d", 00:37:15.816 "is_configured": true, 00:37:15.816 "data_offset": 256, 00:37:15.816 "data_size": 7936 00:37:15.816 } 00:37:15.816 ] 00:37:15.816 }' 00:37:15.816 19:04:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:15.816 19:04:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:15.816 19:04:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 
-- # jq -r '.process.target // "none"' 00:37:15.816 19:04:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:15.816 19:04:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@726 -- # sleep 1 00:37:17.194 19:04:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:37:17.194 19:04:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:17.194 19:04:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:17.194 19:04:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:17.194 19:04:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:17.194 19:04:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:17.194 19:04:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:17.194 19:04:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:17.194 19:04:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:17.194 "name": "raid_bdev1", 00:37:17.194 "uuid": "c1d4fefd-aa0e-4fff-ad94-4c5452ded449", 00:37:17.194 "strip_size_kb": 0, 00:37:17.194 "state": "online", 00:37:17.194 "raid_level": "raid1", 00:37:17.194 "superblock": true, 00:37:17.194 "num_base_bdevs": 2, 00:37:17.194 "num_base_bdevs_discovered": 2, 00:37:17.194 "num_base_bdevs_operational": 2, 00:37:17.194 "process": { 00:37:17.194 "type": "rebuild", 00:37:17.194 "target": "spare", 00:37:17.194 "progress": { 00:37:17.194 "blocks": 7168, 00:37:17.194 "percent": 90 00:37:17.195 } 00:37:17.195 }, 00:37:17.195 "base_bdevs_list": [ 00:37:17.195 { 00:37:17.195 "name": "spare", 00:37:17.195 "uuid": "ac44608c-4a57-55f4-b408-5edaa01dbbe2", 00:37:17.195 "is_configured": true, 00:37:17.195 "data_offset": 256, 00:37:17.195 "data_size": 7936 00:37:17.195 }, 00:37:17.195 { 00:37:17.195 "name": "BaseBdev2", 00:37:17.195 "uuid": "63021e3d-e5ad-5de2-a04d-0e1567820a8d", 00:37:17.195 "is_configured": true, 00:37:17.195 "data_offset": 256, 00:37:17.195 "data_size": 7936 00:37:17.195 } 00:37:17.195 ] 00:37:17.195 }' 00:37:17.195 19:04:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:17.195 19:04:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:17.195 19:04:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:17.195 19:04:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:17.195 19:04:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@726 -- # sleep 1 00:37:17.453 [2024-07-25 19:04:17.831076] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:37:17.453 [2024-07-25 19:04:17.831294] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:37:17.453 [2024-07-25 19:04:17.831521] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:18.390 19:04:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:37:18.390 19:04:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:18.390 19:04:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:18.390 19:04:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:18.390 19:04:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:18.390 19:04:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:18.390 19:04:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:18.390 19:04:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:18.649 19:04:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:18.649 "name": "raid_bdev1", 00:37:18.649 "uuid": "c1d4fefd-aa0e-4fff-ad94-4c5452ded449", 00:37:18.649 "strip_size_kb": 0, 00:37:18.649 "state": "online", 00:37:18.649 "raid_level": "raid1", 00:37:18.649 "superblock": true, 00:37:18.649 "num_base_bdevs": 2, 00:37:18.649 "num_base_bdevs_discovered": 2, 00:37:18.649 "num_base_bdevs_operational": 2, 00:37:18.649 "base_bdevs_list": [ 00:37:18.649 { 00:37:18.649 "name": "spare", 00:37:18.649 "uuid": "ac44608c-4a57-55f4-b408-5edaa01dbbe2", 00:37:18.649 "is_configured": true, 00:37:18.649 "data_offset": 256, 00:37:18.649 "data_size": 7936 00:37:18.649 }, 00:37:18.649 { 00:37:18.649 "name": "BaseBdev2", 00:37:18.649 "uuid": "63021e3d-e5ad-5de2-a04d-0e1567820a8d", 00:37:18.649 "is_configured": true, 00:37:18.649 "data_offset": 256, 00:37:18.649 "data_size": 7936 00:37:18.649 } 00:37:18.649 ] 00:37:18.649 }' 00:37:18.649 19:04:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:18.649 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:37:18.649 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:18.649 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:37:18.649 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@724 -- # break 00:37:18.649 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:18.649 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:18.649 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:18.649 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:18.649 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:18.650 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:18.650 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:37:18.909 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:18.909 "name": "raid_bdev1", 00:37:18.909 "uuid": "c1d4fefd-aa0e-4fff-ad94-4c5452ded449", 00:37:18.909 "strip_size_kb": 0, 00:37:18.909 "state": "online", 00:37:18.909 "raid_level": "raid1", 00:37:18.909 "superblock": true, 00:37:18.909 "num_base_bdevs": 2, 00:37:18.909 "num_base_bdevs_discovered": 2, 00:37:18.909 "num_base_bdevs_operational": 2, 00:37:18.909 "base_bdevs_list": [ 00:37:18.909 { 00:37:18.909 "name": "spare", 00:37:18.909 "uuid": "ac44608c-4a57-55f4-b408-5edaa01dbbe2", 00:37:18.909 "is_configured": true, 00:37:18.909 "data_offset": 256, 00:37:18.909 "data_size": 7936 00:37:18.909 }, 00:37:18.909 { 00:37:18.909 "name": "BaseBdev2", 00:37:18.909 "uuid": "63021e3d-e5ad-5de2-a04d-0e1567820a8d", 00:37:18.909 "is_configured": true, 00:37:18.909 "data_offset": 256, 00:37:18.909 "data_size": 7936 00:37:18.909 } 00:37:18.909 ] 00:37:18.909 }' 00:37:18.909 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:18.909 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:18.909 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:18.909 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:18.909 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:18.909 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:18.909 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:18.909 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:18.909 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:18.909 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:18.909 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:18.909 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:18.909 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:18.909 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:18.909 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:18.909 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:19.204 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:19.204 "name": "raid_bdev1", 00:37:19.204 "uuid": "c1d4fefd-aa0e-4fff-ad94-4c5452ded449", 00:37:19.204 "strip_size_kb": 0, 00:37:19.204 "state": "online", 00:37:19.204 "raid_level": "raid1", 00:37:19.204 "superblock": true, 00:37:19.204 "num_base_bdevs": 2, 00:37:19.204 "num_base_bdevs_discovered": 2, 00:37:19.204 "num_base_bdevs_operational": 2, 00:37:19.204 "base_bdevs_list": 
[ 00:37:19.204 { 00:37:19.204 "name": "spare", 00:37:19.204 "uuid": "ac44608c-4a57-55f4-b408-5edaa01dbbe2", 00:37:19.204 "is_configured": true, 00:37:19.204 "data_offset": 256, 00:37:19.204 "data_size": 7936 00:37:19.204 }, 00:37:19.204 { 00:37:19.204 "name": "BaseBdev2", 00:37:19.204 "uuid": "63021e3d-e5ad-5de2-a04d-0e1567820a8d", 00:37:19.204 "is_configured": true, 00:37:19.204 "data_offset": 256, 00:37:19.204 "data_size": 7936 00:37:19.204 } 00:37:19.204 ] 00:37:19.204 }' 00:37:19.204 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:19.204 19:04:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:19.771 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:20.030 [2024-07-25 19:04:20.380052] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:20.030 [2024-07-25 19:04:20.380236] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:20.030 [2024-07-25 19:04:20.380456] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:20.030 [2024-07-25 19:04:20.380621] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:20.030 [2024-07-25 19:04:20.380701] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:37:20.030 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:20.030 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # jq length 00:37:20.289 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:37:20.289 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:37:20.289 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:37:20.289 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:37:20.289 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:20.289 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:37:20.289 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:20.289 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:20.289 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:20.289 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:37:20.289 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:20.289 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:20.289 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:37:20.289 /dev/nbd0 00:37:20.548 19:04:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:20.548 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:20.548 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:37:20.548 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:37:20.548 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:37:20.548 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:37:20.548 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:37:20.548 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:37:20.548 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:37:20.548 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:37:20.548 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:20.548 1+0 records in 00:37:20.548 1+0 records out 00:37:20.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424251 s, 9.7 MB/s 00:37:20.548 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:20.548 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:37:20.548 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:20.548 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:37:20.548 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:37:20.548 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:20.548 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:20.548 19:04:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:37:20.548 /dev/nbd1 00:37:20.548 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:20.548 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:20.548 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:37:20.548 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:37:20.549 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:37:20.549 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:37:20.549 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:37:20.549 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:37:20.549 19:04:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:37:20.549 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:37:20.549 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:20.549 1+0 records in 00:37:20.549 1+0 records out 00:37:20.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0007236 s, 5.7 MB/s 00:37:20.549 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:20.549 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:37:20.549 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:20.808 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:37:20.808 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:37:20.808 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:20.808 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:20.808 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:37:20.808 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:37:20.808 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:20.808 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:20.808 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:20.808 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:37:20.808 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:20.808 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:37:21.162 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:21.162 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:21.162 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:21.162 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:21.162 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:21.162 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:21.162 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:37:21.162 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:37:21.162 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:21.162 19:04:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:37:21.420 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:21.421 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:21.421 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:21.421 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:21.421 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:21.421 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:21.421 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:37:21.421 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:37:21.421 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:37:21.421 19:04:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:37:21.679 19:04:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:21.679 [2024-07-25 19:04:22.224440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:21.679 [2024-07-25 19:04:22.224656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:21.679 [2024-07-25 19:04:22.224747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:37:21.679 [2024-07-25 19:04:22.224845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:21.679 [2024-07-25 19:04:22.227234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:21.679 [2024-07-25 19:04:22.227391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:21.679 [2024-07-25 19:04:22.227611] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:21.679 [2024-07-25 19:04:22.227767] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:21.679 [2024-07-25 19:04:22.227918] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:21.679 spare 00:37:21.679 19:04:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:21.679 19:04:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:21.679 19:04:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:21.679 19:04:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:21.679 19:04:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:21.679 19:04:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:21.679 19:04:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- 
# local raid_bdev_info 00:37:21.679 19:04:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:21.679 19:04:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:21.679 19:04:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:21.679 19:04:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:21.679 19:04:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:21.938 [2024-07-25 19:04:22.328079] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012d80 00:37:21.938 [2024-07-25 19:04:22.328210] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:21.938 [2024-07-25 19:04:22.328403] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:37:21.938 [2024-07-25 19:04:22.328744] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012d80 00:37:21.938 [2024-07-25 19:04:22.328842] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012d80 00:37:21.938 [2024-07-25 19:04:22.329069] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:21.938 19:04:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:21.938 "name": "raid_bdev1", 00:37:21.938 "uuid": "c1d4fefd-aa0e-4fff-ad94-4c5452ded449", 00:37:21.938 "strip_size_kb": 0, 00:37:21.938 "state": "online", 00:37:21.938 "raid_level": "raid1", 00:37:21.938 "superblock": true, 00:37:21.938 "num_base_bdevs": 2, 00:37:21.938 "num_base_bdevs_discovered": 2, 00:37:21.938 "num_base_bdevs_operational": 2, 00:37:21.938 "base_bdevs_list": [ 00:37:21.938 { 00:37:21.938 "name": "spare", 00:37:21.938 "uuid": "ac44608c-4a57-55f4-b408-5edaa01dbbe2", 00:37:21.938 "is_configured": true, 00:37:21.938 "data_offset": 256, 00:37:21.938 "data_size": 7936 00:37:21.938 }, 00:37:21.938 { 00:37:21.938 "name": "BaseBdev2", 00:37:21.938 "uuid": "63021e3d-e5ad-5de2-a04d-0e1567820a8d", 00:37:21.938 "is_configured": true, 00:37:21.938 "data_offset": 256, 00:37:21.938 "data_size": 7936 00:37:21.938 } 00:37:21.938 ] 00:37:21.938 }' 00:37:21.938 19:04:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:21.938 19:04:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:22.507 19:04:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:22.507 19:04:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:22.507 19:04:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:22.507 19:04:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:22.507 19:04:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:22.507 19:04:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:22.507 19:04:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:22.766 19:04:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:22.766 "name": "raid_bdev1", 00:37:22.766 "uuid": "c1d4fefd-aa0e-4fff-ad94-4c5452ded449", 00:37:22.766 "strip_size_kb": 0, 00:37:22.766 "state": "online", 00:37:22.766 "raid_level": "raid1", 00:37:22.766 "superblock": true, 00:37:22.766 "num_base_bdevs": 2, 00:37:22.766 "num_base_bdevs_discovered": 2, 00:37:22.766 "num_base_bdevs_operational": 2, 00:37:22.766 "base_bdevs_list": [ 00:37:22.766 { 00:37:22.766 "name": "spare", 00:37:22.766 "uuid": "ac44608c-4a57-55f4-b408-5edaa01dbbe2", 00:37:22.766 "is_configured": true, 00:37:22.766 "data_offset": 256, 00:37:22.766 "data_size": 7936 00:37:22.766 }, 00:37:22.766 { 00:37:22.766 "name": "BaseBdev2", 00:37:22.766 "uuid": "63021e3d-e5ad-5de2-a04d-0e1567820a8d", 00:37:22.766 "is_configured": true, 00:37:22.766 "data_offset": 256, 00:37:22.766 "data_size": 7936 00:37:22.766 } 00:37:22.766 ] 00:37:22.766 }' 00:37:22.766 19:04:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:22.766 19:04:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:22.766 19:04:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:22.766 19:04:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:22.767 19:04:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:22.767 19:04:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:37:23.024 19:04:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:37:23.024 19:04:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:37:23.283 [2024-07-25 19:04:23.821348] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:23.283 19:04:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:23.283 19:04:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:23.283 19:04:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:23.283 19:04:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:23.283 19:04:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:23.283 19:04:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:23.283 19:04:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:23.283 19:04:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:23.283 19:04:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:23.283 19:04:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:23.283 19:04:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:23.283 19:04:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:23.543 19:04:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:23.543 "name": "raid_bdev1", 00:37:23.543 "uuid": "c1d4fefd-aa0e-4fff-ad94-4c5452ded449", 00:37:23.543 "strip_size_kb": 0, 00:37:23.543 "state": "online", 00:37:23.543 "raid_level": "raid1", 00:37:23.543 "superblock": true, 00:37:23.543 "num_base_bdevs": 2, 00:37:23.543 "num_base_bdevs_discovered": 1, 00:37:23.543 "num_base_bdevs_operational": 1, 00:37:23.543 "base_bdevs_list": [ 00:37:23.543 { 00:37:23.543 "name": null, 00:37:23.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:23.543 "is_configured": false, 00:37:23.543 "data_offset": 256, 00:37:23.543 "data_size": 7936 00:37:23.543 }, 00:37:23.543 { 00:37:23.543 "name": "BaseBdev2", 00:37:23.543 "uuid": "63021e3d-e5ad-5de2-a04d-0e1567820a8d", 00:37:23.543 "is_configured": true, 00:37:23.543 "data_offset": 256, 00:37:23.543 "data_size": 7936 00:37:23.543 } 00:37:23.543 ] 00:37:23.543 }' 00:37:23.543 19:04:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:23.543 19:04:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:24.110 19:04:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:24.110 [2024-07-25 19:04:24.665505] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:24.110 [2024-07-25 19:04:24.665845] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:37:24.110 [2024-07-25 19:04:24.665955] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
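A minimal sketch of the remove/re-add cycle the trace above exercises, assuming the same RPC socket and bdev names shown in the log; the jq filter mirrors the one the test's verify helpers use, and the calls are the same rpc.py subcommands recorded in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # drop the base bdev "spare" so raid_bdev1 keeps running degraded (1 of 2 base bdevs)
  $rpc -s $sock bdev_raid_remove_base_bdev spare
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'
  # re-add the same bdev; its superblock sequence number is older than the raid's, so a rebuild is started
  $rpc -s $sock bdev_raid_add_base_bdev raid_bdev1 spare
  # rebuild type, target and progress are reported under the .process field
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .process'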
00:37:24.110 [2024-07-25 19:04:24.666040] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:24.110 [2024-07-25 19:04:24.682265] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1dc0 00:37:24.110 [2024-07-25 19:04:24.684566] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:24.368 19:04:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # sleep 1 00:37:25.303 19:04:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:25.303 19:04:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:25.303 19:04:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:25.303 19:04:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:25.303 19:04:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:25.303 19:04:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:25.303 19:04:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:25.562 19:04:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:25.562 "name": "raid_bdev1", 00:37:25.562 "uuid": "c1d4fefd-aa0e-4fff-ad94-4c5452ded449", 00:37:25.562 "strip_size_kb": 0, 00:37:25.562 "state": "online", 00:37:25.562 "raid_level": "raid1", 00:37:25.562 "superblock": true, 00:37:25.562 "num_base_bdevs": 2, 00:37:25.562 "num_base_bdevs_discovered": 2, 00:37:25.562 "num_base_bdevs_operational": 2, 00:37:25.562 "process": { 00:37:25.562 "type": "rebuild", 00:37:25.562 "target": "spare", 00:37:25.562 "progress": { 00:37:25.562 "blocks": 3072, 00:37:25.562 "percent": 38 00:37:25.562 } 00:37:25.562 }, 00:37:25.562 "base_bdevs_list": [ 00:37:25.562 { 00:37:25.562 "name": "spare", 00:37:25.562 "uuid": "ac44608c-4a57-55f4-b408-5edaa01dbbe2", 00:37:25.562 "is_configured": true, 00:37:25.562 "data_offset": 256, 00:37:25.562 "data_size": 7936 00:37:25.562 }, 00:37:25.562 { 00:37:25.562 "name": "BaseBdev2", 00:37:25.562 "uuid": "63021e3d-e5ad-5de2-a04d-0e1567820a8d", 00:37:25.562 "is_configured": true, 00:37:25.562 "data_offset": 256, 00:37:25.562 "data_size": 7936 00:37:25.562 } 00:37:25.562 ] 00:37:25.562 }' 00:37:25.562 19:04:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:25.562 19:04:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:25.562 19:04:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:25.562 19:04:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:25.562 19:04:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:37:25.822 [2024-07-25 19:04:26.242572] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:25.822 [2024-07-25 19:04:26.296476] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid 
bdev raid_bdev1: No such device 00:37:25.822 [2024-07-25 19:04:26.296676] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:25.822 [2024-07-25 19:04:26.296724] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:25.822 [2024-07-25 19:04:26.296805] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:25.822 19:04:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:25.822 19:04:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:25.822 19:04:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:25.822 19:04:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:25.822 19:04:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:25.822 19:04:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:25.822 19:04:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:25.822 19:04:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:25.822 19:04:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:25.822 19:04:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:25.822 19:04:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:25.822 19:04:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:26.081 19:04:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:26.081 "name": "raid_bdev1", 00:37:26.081 "uuid": "c1d4fefd-aa0e-4fff-ad94-4c5452ded449", 00:37:26.081 "strip_size_kb": 0, 00:37:26.081 "state": "online", 00:37:26.081 "raid_level": "raid1", 00:37:26.081 "superblock": true, 00:37:26.081 "num_base_bdevs": 2, 00:37:26.081 "num_base_bdevs_discovered": 1, 00:37:26.081 "num_base_bdevs_operational": 1, 00:37:26.081 "base_bdevs_list": [ 00:37:26.081 { 00:37:26.081 "name": null, 00:37:26.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:26.081 "is_configured": false, 00:37:26.081 "data_offset": 256, 00:37:26.081 "data_size": 7936 00:37:26.081 }, 00:37:26.081 { 00:37:26.081 "name": "BaseBdev2", 00:37:26.081 "uuid": "63021e3d-e5ad-5de2-a04d-0e1567820a8d", 00:37:26.081 "is_configured": true, 00:37:26.081 "data_offset": 256, 00:37:26.081 "data_size": 7936 00:37:26.081 } 00:37:26.081 ] 00:37:26.081 }' 00:37:26.081 19:04:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:26.081 19:04:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:26.649 19:04:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:26.909 [2024-07-25 19:04:27.389834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:26.909 [2024-07-25 19:04:27.390106] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:26.909 [2024-07-25 19:04:27.390179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:37:26.909 [2024-07-25 19:04:27.390273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:26.909 [2024-07-25 19:04:27.390660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:26.909 [2024-07-25 19:04:27.390784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:26.909 [2024-07-25 19:04:27.390984] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:26.909 [2024-07-25 19:04:27.391082] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:37:26.909 [2024-07-25 19:04:27.391149] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:37:26.909 [2024-07-25 19:04:27.391237] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:26.909 [2024-07-25 19:04:27.407083] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2100 00:37:26.909 spare 00:37:26.909 [2024-07-25 19:04:27.409018] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:26.909 19:04:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # sleep 1 00:37:28.286 19:04:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:28.286 19:04:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:28.286 19:04:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:28.286 19:04:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:28.287 19:04:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:28.287 19:04:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:28.287 19:04:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:28.287 19:04:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:28.287 "name": "raid_bdev1", 00:37:28.287 "uuid": "c1d4fefd-aa0e-4fff-ad94-4c5452ded449", 00:37:28.287 "strip_size_kb": 0, 00:37:28.287 "state": "online", 00:37:28.287 "raid_level": "raid1", 00:37:28.287 "superblock": true, 00:37:28.287 "num_base_bdevs": 2, 00:37:28.287 "num_base_bdevs_discovered": 2, 00:37:28.287 "num_base_bdevs_operational": 2, 00:37:28.287 "process": { 00:37:28.287 "type": "rebuild", 00:37:28.287 "target": "spare", 00:37:28.287 "progress": { 00:37:28.287 "blocks": 2816, 00:37:28.287 "percent": 35 00:37:28.287 } 00:37:28.287 }, 00:37:28.287 "base_bdevs_list": [ 00:37:28.287 { 00:37:28.287 "name": "spare", 00:37:28.287 "uuid": "ac44608c-4a57-55f4-b408-5edaa01dbbe2", 00:37:28.287 "is_configured": true, 00:37:28.287 "data_offset": 256, 00:37:28.287 "data_size": 7936 00:37:28.287 }, 00:37:28.287 { 00:37:28.287 "name": "BaseBdev2", 00:37:28.287 "uuid": "63021e3d-e5ad-5de2-a04d-0e1567820a8d", 00:37:28.287 "is_configured": true, 00:37:28.287 
"data_offset": 256, 00:37:28.287 "data_size": 7936 00:37:28.287 } 00:37:28.287 ] 00:37:28.287 }' 00:37:28.287 19:04:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:28.287 19:04:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:28.287 19:04:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:28.287 19:04:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:28.287 19:04:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:37:28.546 [2024-07-25 19:04:28.931567] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:28.546 [2024-07-25 19:04:29.021623] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:28.546 [2024-07-25 19:04:29.021827] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:28.546 [2024-07-25 19:04:29.021875] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:28.546 [2024-07-25 19:04:29.021948] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:28.546 19:04:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:28.546 19:04:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:28.546 19:04:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:28.546 19:04:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:28.546 19:04:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:28.546 19:04:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:28.546 19:04:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:28.546 19:04:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:28.546 19:04:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:28.546 19:04:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:28.546 19:04:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:28.546 19:04:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:28.804 19:04:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:28.804 "name": "raid_bdev1", 00:37:28.805 "uuid": "c1d4fefd-aa0e-4fff-ad94-4c5452ded449", 00:37:28.805 "strip_size_kb": 0, 00:37:28.805 "state": "online", 00:37:28.805 "raid_level": "raid1", 00:37:28.805 "superblock": true, 00:37:28.805 "num_base_bdevs": 2, 00:37:28.805 "num_base_bdevs_discovered": 1, 00:37:28.805 "num_base_bdevs_operational": 1, 00:37:28.805 "base_bdevs_list": [ 00:37:28.805 { 00:37:28.805 "name": null, 00:37:28.805 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:37:28.805 "is_configured": false, 00:37:28.805 "data_offset": 256, 00:37:28.805 "data_size": 7936 00:37:28.805 }, 00:37:28.805 { 00:37:28.805 "name": "BaseBdev2", 00:37:28.805 "uuid": "63021e3d-e5ad-5de2-a04d-0e1567820a8d", 00:37:28.805 "is_configured": true, 00:37:28.805 "data_offset": 256, 00:37:28.805 "data_size": 7936 00:37:28.805 } 00:37:28.805 ] 00:37:28.805 }' 00:37:28.805 19:04:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:28.805 19:04:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:29.371 19:04:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:29.371 19:04:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:29.371 19:04:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:29.371 19:04:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:29.371 19:04:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:29.371 19:04:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:29.371 19:04:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:29.939 19:04:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:29.939 "name": "raid_bdev1", 00:37:29.939 "uuid": "c1d4fefd-aa0e-4fff-ad94-4c5452ded449", 00:37:29.939 "strip_size_kb": 0, 00:37:29.939 "state": "online", 00:37:29.939 "raid_level": "raid1", 00:37:29.939 "superblock": true, 00:37:29.939 "num_base_bdevs": 2, 00:37:29.939 "num_base_bdevs_discovered": 1, 00:37:29.939 "num_base_bdevs_operational": 1, 00:37:29.939 "base_bdevs_list": [ 00:37:29.939 { 00:37:29.939 "name": null, 00:37:29.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:29.939 "is_configured": false, 00:37:29.939 "data_offset": 256, 00:37:29.939 "data_size": 7936 00:37:29.939 }, 00:37:29.939 { 00:37:29.939 "name": "BaseBdev2", 00:37:29.939 "uuid": "63021e3d-e5ad-5de2-a04d-0e1567820a8d", 00:37:29.939 "is_configured": true, 00:37:29.939 "data_offset": 256, 00:37:29.939 "data_size": 7936 00:37:29.939 } 00:37:29.939 ] 00:37:29.939 }' 00:37:29.939 19:04:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:29.939 19:04:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:29.939 19:04:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:29.939 19:04:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:29.939 19:04:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:37:30.198 19:04:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:30.457 [2024-07-25 19:04:30.806396] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev1_malloc 00:37:30.457 [2024-07-25 19:04:30.806686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:30.457 [2024-07-25 19:04:30.806763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:37:30.457 [2024-07-25 19:04:30.806872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:30.457 [2024-07-25 19:04:30.807181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:30.457 [2024-07-25 19:04:30.807293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:30.457 [2024-07-25 19:04:30.807457] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:37:30.457 [2024-07-25 19:04:30.807574] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:37:30.457 [2024-07-25 19:04:30.807676] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:37:30.457 BaseBdev1 00:37:30.457 19:04:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@789 -- # sleep 1 00:37:31.395 19:04:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:31.395 19:04:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:31.395 19:04:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:31.395 19:04:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:31.395 19:04:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:31.395 19:04:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:31.395 19:04:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:31.395 19:04:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:31.395 19:04:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:31.395 19:04:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:31.395 19:04:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:31.395 19:04:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:31.654 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:31.654 "name": "raid_bdev1", 00:37:31.654 "uuid": "c1d4fefd-aa0e-4fff-ad94-4c5452ded449", 00:37:31.654 "strip_size_kb": 0, 00:37:31.654 "state": "online", 00:37:31.654 "raid_level": "raid1", 00:37:31.654 "superblock": true, 00:37:31.654 "num_base_bdevs": 2, 00:37:31.654 "num_base_bdevs_discovered": 1, 00:37:31.654 "num_base_bdevs_operational": 1, 00:37:31.654 "base_bdevs_list": [ 00:37:31.654 { 00:37:31.654 "name": null, 00:37:31.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:31.654 "is_configured": false, 00:37:31.654 "data_offset": 256, 00:37:31.654 "data_size": 7936 00:37:31.654 }, 00:37:31.654 { 00:37:31.654 "name": 
"BaseBdev2", 00:37:31.654 "uuid": "63021e3d-e5ad-5de2-a04d-0e1567820a8d", 00:37:31.654 "is_configured": true, 00:37:31.654 "data_offset": 256, 00:37:31.654 "data_size": 7936 00:37:31.654 } 00:37:31.654 ] 00:37:31.654 }' 00:37:31.654 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:31.654 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:32.223 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:32.223 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:32.223 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:32.223 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:32.223 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:32.223 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:32.223 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:32.223 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:32.223 "name": "raid_bdev1", 00:37:32.223 "uuid": "c1d4fefd-aa0e-4fff-ad94-4c5452ded449", 00:37:32.223 "strip_size_kb": 0, 00:37:32.223 "state": "online", 00:37:32.223 "raid_level": "raid1", 00:37:32.223 "superblock": true, 00:37:32.223 "num_base_bdevs": 2, 00:37:32.223 "num_base_bdevs_discovered": 1, 00:37:32.223 "num_base_bdevs_operational": 1, 00:37:32.223 "base_bdevs_list": [ 00:37:32.223 { 00:37:32.223 "name": null, 00:37:32.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:32.223 "is_configured": false, 00:37:32.223 "data_offset": 256, 00:37:32.223 "data_size": 7936 00:37:32.223 }, 00:37:32.223 { 00:37:32.223 "name": "BaseBdev2", 00:37:32.223 "uuid": "63021e3d-e5ad-5de2-a04d-0e1567820a8d", 00:37:32.223 "is_configured": true, 00:37:32.223 "data_offset": 256, 00:37:32.223 "data_size": 7936 00:37:32.223 } 00:37:32.223 ] 00:37:32.223 }' 00:37:32.223 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:32.482 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:32.482 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:32.482 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:32.482 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:32.482 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:37:32.482 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:32.482 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:32.482 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:32.482 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:32.482 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:32.482 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:32.482 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:32.482 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:32.482 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:37:32.482 19:04:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:32.741 [2024-07-25 19:04:33.158770] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:32.741 [2024-07-25 19:04:33.159145] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:37:32.741 [2024-07-25 19:04:33.159269] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:37:32.741 request: 00:37:32.741 { 00:37:32.741 "base_bdev": "BaseBdev1", 00:37:32.741 "raid_bdev": "raid_bdev1", 00:37:32.741 "method": "bdev_raid_add_base_bdev", 00:37:32.741 "req_id": 1 00:37:32.741 } 00:37:32.741 Got JSON-RPC error response 00:37:32.741 response: 00:37:32.741 { 00:37:32.741 "code": -22, 00:37:32.741 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:37:32.741 } 00:37:32.741 19:04:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:37:32.741 19:04:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:32.741 19:04:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:32.741 19:04:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:32.741 19:04:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@793 -- # sleep 1 00:37:33.674 19:04:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:33.674 19:04:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:33.674 19:04:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:33.674 19:04:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:33.674 19:04:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:33.674 19:04:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:33.674 19:04:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:37:33.674 19:04:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:33.674 19:04:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:33.674 19:04:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:33.674 19:04:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:33.674 19:04:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:33.932 19:04:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:33.932 "name": "raid_bdev1", 00:37:33.932 "uuid": "c1d4fefd-aa0e-4fff-ad94-4c5452ded449", 00:37:33.932 "strip_size_kb": 0, 00:37:33.932 "state": "online", 00:37:33.932 "raid_level": "raid1", 00:37:33.932 "superblock": true, 00:37:33.932 "num_base_bdevs": 2, 00:37:33.932 "num_base_bdevs_discovered": 1, 00:37:33.932 "num_base_bdevs_operational": 1, 00:37:33.932 "base_bdevs_list": [ 00:37:33.932 { 00:37:33.932 "name": null, 00:37:33.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:33.932 "is_configured": false, 00:37:33.932 "data_offset": 256, 00:37:33.932 "data_size": 7936 00:37:33.932 }, 00:37:33.932 { 00:37:33.932 "name": "BaseBdev2", 00:37:33.932 "uuid": "63021e3d-e5ad-5de2-a04d-0e1567820a8d", 00:37:33.932 "is_configured": true, 00:37:33.932 "data_offset": 256, 00:37:33.932 "data_size": 7936 00:37:33.932 } 00:37:33.932 ] 00:37:33.932 }' 00:37:33.932 19:04:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:33.932 19:04:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:34.500 19:04:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:34.500 19:04:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:34.500 19:04:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:34.500 19:04:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:34.500 19:04:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:34.500 19:04:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:34.500 19:04:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:34.759 19:04:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:34.759 "name": "raid_bdev1", 00:37:34.759 "uuid": "c1d4fefd-aa0e-4fff-ad94-4c5452ded449", 00:37:34.759 "strip_size_kb": 0, 00:37:34.759 "state": "online", 00:37:34.759 "raid_level": "raid1", 00:37:34.759 "superblock": true, 00:37:34.759 "num_base_bdevs": 2, 00:37:34.759 "num_base_bdevs_discovered": 1, 00:37:34.759 "num_base_bdevs_operational": 1, 00:37:34.759 "base_bdevs_list": [ 00:37:34.759 { 00:37:34.759 "name": null, 00:37:34.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:34.759 "is_configured": false, 00:37:34.759 "data_offset": 256, 00:37:34.759 "data_size": 7936 
00:37:34.759 }, 00:37:34.759 { 00:37:34.759 "name": "BaseBdev2", 00:37:34.759 "uuid": "63021e3d-e5ad-5de2-a04d-0e1567820a8d", 00:37:34.759 "is_configured": true, 00:37:34.759 "data_offset": 256, 00:37:34.759 "data_size": 7936 00:37:34.759 } 00:37:34.759 ] 00:37:34.759 }' 00:37:34.759 19:04:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:34.759 19:04:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:35.017 19:04:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:35.017 19:04:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:35.017 19:04:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@798 -- # killprocess 160470 00:37:35.017 19:04:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 160470 ']' 00:37:35.017 19:04:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 160470 00:37:35.017 19:04:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:37:35.018 19:04:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:35.018 19:04:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 160470 00:37:35.018 killing process with pid 160470 00:37:35.018 Received shutdown signal, test time was about 60.000000 seconds 00:37:35.018 00:37:35.018 Latency(us) 00:37:35.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:35.018 =================================================================================================================== 00:37:35.018 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:35.018 19:04:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:35.018 19:04:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:35.018 19:04:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 160470' 00:37:35.018 19:04:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 160470 00:37:35.018 19:04:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 160470 00:37:35.018 [2024-07-25 19:04:35.407390] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:35.018 [2024-07-25 19:04:35.407521] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:35.018 [2024-07-25 19:04:35.407577] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:35.018 [2024-07-25 19:04:35.407628] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state offline 00:37:35.276 [2024-07-25 19:04:35.754275] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:36.656 19:04:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@800 -- # return 0 00:37:36.656 00:37:36.656 real 0m31.991s 00:37:36.656 user 0m48.575s 00:37:36.656 sys 0m4.758s 00:37:36.656 19:04:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:36.656 19:04:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:36.656 ************************************ 00:37:36.656 END TEST raid_rebuild_test_sb_md_separate 00:37:36.656 ************************************ 00:37:36.918 19:04:37 bdev_raid -- bdev/bdev_raid.sh@991 -- # base_malloc_params='-m 32 -i' 00:37:36.918 19:04:37 bdev_raid -- bdev/bdev_raid.sh@992 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:37:36.918 19:04:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:37:36.919 19:04:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:36.919 19:04:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:36.919 ************************************ 00:37:36.919 START TEST raid_state_function_test_sb_md_interleaved 00:37:36.919 ************************************ 00:37:36.919 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:37:36.919 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:37:36.919 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:37:36.919 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:37:36.919 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:37:36.919 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:37:36.919 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:37:36.919 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:37:36.919 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:37:36.919 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:37:36.919 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:37:36.919 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:37:36.919 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:37:36.919 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:37:36.919 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:37:36.919 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:37:36.919 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:37:36.919 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:37:36.919 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:37:36.919 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:37:36.919 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@234 -- # strip_size=0 00:37:36.919 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:37:36.920 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:37:36.920 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=161357 00:37:36.920 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 161357' 00:37:36.920 Process raid pid: 161357 00:37:36.920 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 161357 /var/tmp/spdk-raid.sock 00:37:36.920 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:37:36.920 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 161357 ']' 00:37:36.920 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:36.920 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:36.920 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:36.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:36.921 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:36.921 19:04:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:36.921 [2024-07-25 19:04:37.387701] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
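A rough sketch of how the test target above is brought up, assuming the binary and socket paths shown in the trace; polling rpc_get_methods stands in for the script's waitforlisten helper and is an illustrative choice, not the exact mechanism used:

  svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # start the bdev service with raid debug logging enabled, as in the trace
  $svc -r $sock -i 0 -L bdev_raid &
  raid_pid=$!
  # wait until the RPC socket answers before issuing bdev_raid_create calls
  until $rpc -s $sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done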
00:37:36.921 [2024-07-25 19:04:37.388243] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:37.186 [2024-07-25 19:04:37.577451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:37.444 [2024-07-25 19:04:37.783629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:37.444 [2024-07-25 19:04:37.976479] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:38.010 19:04:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:38.010 19:04:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:37:38.010 19:04:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:37:38.010 [2024-07-25 19:04:38.524275] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:38.010 [2024-07-25 19:04:38.524611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:38.010 [2024-07-25 19:04:38.524737] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:38.010 [2024-07-25 19:04:38.524802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:38.010 19:04:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:37:38.010 19:04:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:38.010 19:04:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:38.010 19:04:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:38.010 19:04:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:38.010 19:04:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:38.010 19:04:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:38.010 19:04:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:38.010 19:04:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:38.010 19:04:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:38.010 19:04:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:38.010 19:04:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:38.269 19:04:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:38.269 "name": "Existed_Raid", 00:37:38.269 "uuid": "277347e4-1b81-48de-9c28-899f96295bc7", 
00:37:38.269 "strip_size_kb": 0, 00:37:38.269 "state": "configuring", 00:37:38.269 "raid_level": "raid1", 00:37:38.269 "superblock": true, 00:37:38.269 "num_base_bdevs": 2, 00:37:38.269 "num_base_bdevs_discovered": 0, 00:37:38.269 "num_base_bdevs_operational": 2, 00:37:38.269 "base_bdevs_list": [ 00:37:38.269 { 00:37:38.269 "name": "BaseBdev1", 00:37:38.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:38.269 "is_configured": false, 00:37:38.269 "data_offset": 0, 00:37:38.269 "data_size": 0 00:37:38.269 }, 00:37:38.269 { 00:37:38.269 "name": "BaseBdev2", 00:37:38.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:38.269 "is_configured": false, 00:37:38.269 "data_offset": 0, 00:37:38.269 "data_size": 0 00:37:38.269 } 00:37:38.269 ] 00:37:38.269 }' 00:37:38.269 19:04:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:38.269 19:04:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:38.835 19:04:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:37:39.094 [2024-07-25 19:04:39.444298] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:39.094 [2024-07-25 19:04:39.444525] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name Existed_Raid, state configuring 00:37:39.094 19:04:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:37:39.352 [2024-07-25 19:04:39.708383] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:39.352 [2024-07-25 19:04:39.708562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:39.352 [2024-07-25 19:04:39.708643] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:39.352 [2024-07-25 19:04:39.708695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:39.353 19:04:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:37:39.353 [2024-07-25 19:04:39.929401] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:39.353 BaseBdev1 00:37:39.611 19:04:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:37:39.611 19:04:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:37:39.611 19:04:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:39.611 19:04:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:37:39.611 19:04:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:39.611 19:04:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:39.611 19:04:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:39.611 19:04:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:39.870 [ 00:37:39.870 { 00:37:39.870 "name": "BaseBdev1", 00:37:39.870 "aliases": [ 00:37:39.870 "2004cda0-4824-4e00-a3de-2e3263efb849" 00:37:39.870 ], 00:37:39.870 "product_name": "Malloc disk", 00:37:39.870 "block_size": 4128, 00:37:39.870 "num_blocks": 8192, 00:37:39.870 "uuid": "2004cda0-4824-4e00-a3de-2e3263efb849", 00:37:39.870 "md_size": 32, 00:37:39.870 "md_interleave": true, 00:37:39.870 "dif_type": 0, 00:37:39.870 "assigned_rate_limits": { 00:37:39.870 "rw_ios_per_sec": 0, 00:37:39.870 "rw_mbytes_per_sec": 0, 00:37:39.870 "r_mbytes_per_sec": 0, 00:37:39.870 "w_mbytes_per_sec": 0 00:37:39.870 }, 00:37:39.870 "claimed": true, 00:37:39.870 "claim_type": "exclusive_write", 00:37:39.870 "zoned": false, 00:37:39.870 "supported_io_types": { 00:37:39.870 "read": true, 00:37:39.870 "write": true, 00:37:39.870 "unmap": true, 00:37:39.870 "flush": true, 00:37:39.870 "reset": true, 00:37:39.870 "nvme_admin": false, 00:37:39.870 "nvme_io": false, 00:37:39.870 "nvme_io_md": false, 00:37:39.870 "write_zeroes": true, 00:37:39.870 "zcopy": true, 00:37:39.870 "get_zone_info": false, 00:37:39.870 "zone_management": false, 00:37:39.870 "zone_append": false, 00:37:39.870 "compare": false, 00:37:39.870 "compare_and_write": false, 00:37:39.870 "abort": true, 00:37:39.870 "seek_hole": false, 00:37:39.870 "seek_data": false, 00:37:39.870 "copy": true, 00:37:39.870 "nvme_iov_md": false 00:37:39.870 }, 00:37:39.870 "memory_domains": [ 00:37:39.870 { 00:37:39.870 "dma_device_id": "system", 00:37:39.870 "dma_device_type": 1 00:37:39.870 }, 00:37:39.870 { 00:37:39.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:39.870 "dma_device_type": 2 00:37:39.870 } 00:37:39.870 ], 00:37:39.870 "driver_specific": {} 00:37:39.870 } 00:37:39.870 ] 00:37:39.870 19:04:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:37:39.870 19:04:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:37:39.870 19:04:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:39.870 19:04:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:39.870 19:04:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:39.870 19:04:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:39.870 19:04:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:39.870 19:04:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:39.870 19:04:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:39.870 19:04:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:39.870 19:04:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:39.870 19:04:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:39.870 19:04:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:40.129 19:04:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:40.129 "name": "Existed_Raid", 00:37:40.129 "uuid": "67042ae4-9e48-4475-9705-4953a450f9b7", 00:37:40.129 "strip_size_kb": 0, 00:37:40.129 "state": "configuring", 00:37:40.129 "raid_level": "raid1", 00:37:40.129 "superblock": true, 00:37:40.129 "num_base_bdevs": 2, 00:37:40.129 "num_base_bdevs_discovered": 1, 00:37:40.129 "num_base_bdevs_operational": 2, 00:37:40.129 "base_bdevs_list": [ 00:37:40.129 { 00:37:40.129 "name": "BaseBdev1", 00:37:40.129 "uuid": "2004cda0-4824-4e00-a3de-2e3263efb849", 00:37:40.129 "is_configured": true, 00:37:40.129 "data_offset": 256, 00:37:40.129 "data_size": 7936 00:37:40.129 }, 00:37:40.129 { 00:37:40.129 "name": "BaseBdev2", 00:37:40.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:40.129 "is_configured": false, 00:37:40.129 "data_offset": 0, 00:37:40.129 "data_size": 0 00:37:40.129 } 00:37:40.129 ] 00:37:40.129 }' 00:37:40.129 19:04:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:40.129 19:04:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:40.697 19:04:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:37:40.955 [2024-07-25 19:04:41.317664] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:40.955 [2024-07-25 19:04:41.317911] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name Existed_Raid, state configuring 00:37:40.955 19:04:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:37:41.214 [2024-07-25 19:04:41.593764] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:41.214 [2024-07-25 19:04:41.596172] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:41.214 [2024-07-25 19:04:41.596353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:41.214 19:04:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:37:41.214 19:04:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:37:41.214 19:04:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:37:41.214 19:04:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:41.214 19:04:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:41.214 19:04:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:41.214 19:04:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:41.214 19:04:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:41.214 19:04:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:41.214 19:04:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:41.214 19:04:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:41.214 19:04:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:41.214 19:04:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:41.214 19:04:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:41.214 19:04:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:41.214 "name": "Existed_Raid", 00:37:41.214 "uuid": "4a974bca-169c-4d63-974d-f56782e38d55", 00:37:41.214 "strip_size_kb": 0, 00:37:41.214 "state": "configuring", 00:37:41.214 "raid_level": "raid1", 00:37:41.214 "superblock": true, 00:37:41.214 "num_base_bdevs": 2, 00:37:41.214 "num_base_bdevs_discovered": 1, 00:37:41.214 "num_base_bdevs_operational": 2, 00:37:41.214 "base_bdevs_list": [ 00:37:41.214 { 00:37:41.214 "name": "BaseBdev1", 00:37:41.214 "uuid": "2004cda0-4824-4e00-a3de-2e3263efb849", 00:37:41.214 "is_configured": true, 00:37:41.214 "data_offset": 256, 00:37:41.214 "data_size": 7936 00:37:41.214 }, 00:37:41.214 { 00:37:41.214 "name": "BaseBdev2", 00:37:41.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:41.214 "is_configured": false, 00:37:41.214 "data_offset": 0, 00:37:41.214 "data_size": 0 00:37:41.214 } 00:37:41.214 ] 00:37:41.214 }' 00:37:41.214 19:04:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:41.214 19:04:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:41.846 19:04:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:37:42.104 [2024-07-25 19:04:42.554899] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:42.104 [2024-07-25 19:04:42.555361] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:37:42.104 [2024-07-25 19:04:42.555493] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:37:42.104 [2024-07-25 19:04:42.555630] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:37:42.104 [2024-07-25 19:04:42.555928] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:37:42.104 [2024-07-25 19:04:42.555967] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000013100 00:37:42.104 BaseBdev2 00:37:42.104 [2024-07-25 19:04:42.556135] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:42.104 19:04:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:37:42.104 19:04:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:37:42.104 19:04:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:42.104 19:04:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:37:42.104 19:04:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:42.104 19:04:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:42.104 19:04:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:42.362 19:04:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:42.621 [ 00:37:42.621 { 00:37:42.621 "name": "BaseBdev2", 00:37:42.621 "aliases": [ 00:37:42.621 "694d699f-1739-4368-ba6c-9f114fcd5644" 00:37:42.621 ], 00:37:42.621 "product_name": "Malloc disk", 00:37:42.621 "block_size": 4128, 00:37:42.621 "num_blocks": 8192, 00:37:42.621 "uuid": "694d699f-1739-4368-ba6c-9f114fcd5644", 00:37:42.621 "md_size": 32, 00:37:42.621 "md_interleave": true, 00:37:42.621 "dif_type": 0, 00:37:42.621 "assigned_rate_limits": { 00:37:42.621 "rw_ios_per_sec": 0, 00:37:42.621 "rw_mbytes_per_sec": 0, 00:37:42.621 "r_mbytes_per_sec": 0, 00:37:42.621 "w_mbytes_per_sec": 0 00:37:42.621 }, 00:37:42.621 "claimed": true, 00:37:42.621 "claim_type": "exclusive_write", 00:37:42.621 "zoned": false, 00:37:42.621 "supported_io_types": { 00:37:42.621 "read": true, 00:37:42.621 "write": true, 00:37:42.621 "unmap": true, 00:37:42.621 "flush": true, 00:37:42.621 "reset": true, 00:37:42.621 "nvme_admin": false, 00:37:42.621 "nvme_io": false, 00:37:42.621 "nvme_io_md": false, 00:37:42.621 "write_zeroes": true, 00:37:42.621 "zcopy": true, 00:37:42.621 "get_zone_info": false, 00:37:42.621 "zone_management": false, 00:37:42.621 "zone_append": false, 00:37:42.621 "compare": false, 00:37:42.621 "compare_and_write": false, 00:37:42.621 "abort": true, 00:37:42.621 "seek_hole": false, 00:37:42.621 "seek_data": false, 00:37:42.621 "copy": true, 00:37:42.621 "nvme_iov_md": false 00:37:42.621 }, 00:37:42.621 "memory_domains": [ 00:37:42.621 { 00:37:42.621 "dma_device_id": "system", 00:37:42.621 "dma_device_type": 1 00:37:42.621 }, 00:37:42.621 { 00:37:42.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:42.621 "dma_device_type": 2 00:37:42.621 } 00:37:42.621 ], 00:37:42.621 "driver_specific": {} 00:37:42.621 } 00:37:42.621 ] 00:37:42.621 19:04:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:37:42.621 19:04:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:37:42.621 19:04:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:37:42.621 19:04:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:37:42.621 19:04:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:42.621 19:04:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:42.621 19:04:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:42.621 19:04:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:42.621 19:04:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:42.621 19:04:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:42.621 19:04:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:42.621 19:04:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:42.621 19:04:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:42.621 19:04:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:42.621 19:04:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:42.880 19:04:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:42.880 "name": "Existed_Raid", 00:37:42.880 "uuid": "4a974bca-169c-4d63-974d-f56782e38d55", 00:37:42.880 "strip_size_kb": 0, 00:37:42.880 "state": "online", 00:37:42.880 "raid_level": "raid1", 00:37:42.880 "superblock": true, 00:37:42.880 "num_base_bdevs": 2, 00:37:42.880 "num_base_bdevs_discovered": 2, 00:37:42.880 "num_base_bdevs_operational": 2, 00:37:42.880 "base_bdevs_list": [ 00:37:42.880 { 00:37:42.880 "name": "BaseBdev1", 00:37:42.880 "uuid": "2004cda0-4824-4e00-a3de-2e3263efb849", 00:37:42.880 "is_configured": true, 00:37:42.880 "data_offset": 256, 00:37:42.880 "data_size": 7936 00:37:42.880 }, 00:37:42.880 { 00:37:42.880 "name": "BaseBdev2", 00:37:42.880 "uuid": "694d699f-1739-4368-ba6c-9f114fcd5644", 00:37:42.880 "is_configured": true, 00:37:42.880 "data_offset": 256, 00:37:42.880 "data_size": 7936 00:37:42.880 } 00:37:42.880 ] 00:37:42.880 }' 00:37:42.880 19:04:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:42.880 19:04:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:43.137 19:04:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:37:43.137 19:04:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:37:43.137 19:04:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:37:43.137 19:04:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:37:43.138 19:04:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:37:43.138 19:04:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:37:43.138 19:04:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:37:43.138 19:04:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:37:43.395 [2024-07-25 19:04:43.859368] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:43.395 19:04:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:37:43.395 "name": "Existed_Raid", 00:37:43.395 "aliases": [ 00:37:43.395 "4a974bca-169c-4d63-974d-f56782e38d55" 00:37:43.395 ], 00:37:43.395 "product_name": "Raid Volume", 00:37:43.395 "block_size": 4128, 00:37:43.395 "num_blocks": 7936, 00:37:43.396 "uuid": "4a974bca-169c-4d63-974d-f56782e38d55", 00:37:43.396 "md_size": 32, 00:37:43.396 "md_interleave": true, 00:37:43.396 "dif_type": 0, 00:37:43.396 "assigned_rate_limits": { 00:37:43.396 "rw_ios_per_sec": 0, 00:37:43.396 "rw_mbytes_per_sec": 0, 00:37:43.396 "r_mbytes_per_sec": 0, 00:37:43.396 "w_mbytes_per_sec": 0 00:37:43.396 }, 00:37:43.396 "claimed": false, 00:37:43.396 "zoned": false, 00:37:43.396 "supported_io_types": { 00:37:43.396 "read": true, 00:37:43.396 "write": true, 00:37:43.396 "unmap": false, 00:37:43.396 "flush": false, 00:37:43.396 "reset": true, 00:37:43.396 "nvme_admin": false, 00:37:43.396 "nvme_io": false, 00:37:43.396 "nvme_io_md": false, 00:37:43.396 "write_zeroes": true, 00:37:43.396 "zcopy": false, 00:37:43.396 "get_zone_info": false, 00:37:43.396 "zone_management": false, 00:37:43.396 "zone_append": false, 00:37:43.396 "compare": false, 00:37:43.396 "compare_and_write": false, 00:37:43.396 "abort": false, 00:37:43.396 "seek_hole": false, 00:37:43.396 "seek_data": false, 00:37:43.396 "copy": false, 00:37:43.396 "nvme_iov_md": false 00:37:43.396 }, 00:37:43.396 "memory_domains": [ 00:37:43.396 { 00:37:43.396 "dma_device_id": "system", 00:37:43.396 "dma_device_type": 1 00:37:43.396 }, 00:37:43.396 { 00:37:43.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:43.396 "dma_device_type": 2 00:37:43.396 }, 00:37:43.396 { 00:37:43.396 "dma_device_id": "system", 00:37:43.396 "dma_device_type": 1 00:37:43.396 }, 00:37:43.396 { 00:37:43.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:43.396 "dma_device_type": 2 00:37:43.396 } 00:37:43.396 ], 00:37:43.396 "driver_specific": { 00:37:43.396 "raid": { 00:37:43.396 "uuid": "4a974bca-169c-4d63-974d-f56782e38d55", 00:37:43.396 "strip_size_kb": 0, 00:37:43.396 "state": "online", 00:37:43.396 "raid_level": "raid1", 00:37:43.396 "superblock": true, 00:37:43.396 "num_base_bdevs": 2, 00:37:43.396 "num_base_bdevs_discovered": 2, 00:37:43.396 "num_base_bdevs_operational": 2, 00:37:43.396 "base_bdevs_list": [ 00:37:43.396 { 00:37:43.396 "name": "BaseBdev1", 00:37:43.396 "uuid": "2004cda0-4824-4e00-a3de-2e3263efb849", 00:37:43.396 "is_configured": true, 00:37:43.396 "data_offset": 256, 00:37:43.396 "data_size": 7936 00:37:43.396 }, 00:37:43.396 { 00:37:43.396 "name": "BaseBdev2", 00:37:43.396 "uuid": "694d699f-1739-4368-ba6c-9f114fcd5644", 00:37:43.396 "is_configured": true, 00:37:43.396 "data_offset": 256, 00:37:43.396 "data_size": 7936 00:37:43.396 } 00:37:43.396 ] 00:37:43.396 } 00:37:43.396 } 00:37:43.396 }' 00:37:43.396 19:04:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:43.396 19:04:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # 
base_bdev_names='BaseBdev1 00:37:43.396 BaseBdev2' 00:37:43.396 19:04:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:43.396 19:04:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:37:43.396 19:04:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:43.654 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:43.654 "name": "BaseBdev1", 00:37:43.654 "aliases": [ 00:37:43.654 "2004cda0-4824-4e00-a3de-2e3263efb849" 00:37:43.654 ], 00:37:43.654 "product_name": "Malloc disk", 00:37:43.654 "block_size": 4128, 00:37:43.654 "num_blocks": 8192, 00:37:43.654 "uuid": "2004cda0-4824-4e00-a3de-2e3263efb849", 00:37:43.654 "md_size": 32, 00:37:43.654 "md_interleave": true, 00:37:43.654 "dif_type": 0, 00:37:43.654 "assigned_rate_limits": { 00:37:43.654 "rw_ios_per_sec": 0, 00:37:43.654 "rw_mbytes_per_sec": 0, 00:37:43.654 "r_mbytes_per_sec": 0, 00:37:43.654 "w_mbytes_per_sec": 0 00:37:43.654 }, 00:37:43.654 "claimed": true, 00:37:43.654 "claim_type": "exclusive_write", 00:37:43.654 "zoned": false, 00:37:43.654 "supported_io_types": { 00:37:43.654 "read": true, 00:37:43.654 "write": true, 00:37:43.654 "unmap": true, 00:37:43.654 "flush": true, 00:37:43.654 "reset": true, 00:37:43.654 "nvme_admin": false, 00:37:43.654 "nvme_io": false, 00:37:43.654 "nvme_io_md": false, 00:37:43.654 "write_zeroes": true, 00:37:43.655 "zcopy": true, 00:37:43.655 "get_zone_info": false, 00:37:43.655 "zone_management": false, 00:37:43.655 "zone_append": false, 00:37:43.655 "compare": false, 00:37:43.655 "compare_and_write": false, 00:37:43.655 "abort": true, 00:37:43.655 "seek_hole": false, 00:37:43.655 "seek_data": false, 00:37:43.655 "copy": true, 00:37:43.655 "nvme_iov_md": false 00:37:43.655 }, 00:37:43.655 "memory_domains": [ 00:37:43.655 { 00:37:43.655 "dma_device_id": "system", 00:37:43.655 "dma_device_type": 1 00:37:43.655 }, 00:37:43.655 { 00:37:43.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:43.655 "dma_device_type": 2 00:37:43.655 } 00:37:43.655 ], 00:37:43.655 "driver_specific": {} 00:37:43.655 }' 00:37:43.655 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:43.655 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:43.913 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:37:43.913 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:43.913 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:43.913 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:37:43.913 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:43.913 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:43.913 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:37:43.913 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:43.913 
19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:43.913 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:43.913 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:43.913 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:43.913 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:37:44.172 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:44.172 "name": "BaseBdev2", 00:37:44.172 "aliases": [ 00:37:44.172 "694d699f-1739-4368-ba6c-9f114fcd5644" 00:37:44.172 ], 00:37:44.172 "product_name": "Malloc disk", 00:37:44.172 "block_size": 4128, 00:37:44.172 "num_blocks": 8192, 00:37:44.172 "uuid": "694d699f-1739-4368-ba6c-9f114fcd5644", 00:37:44.172 "md_size": 32, 00:37:44.172 "md_interleave": true, 00:37:44.172 "dif_type": 0, 00:37:44.172 "assigned_rate_limits": { 00:37:44.172 "rw_ios_per_sec": 0, 00:37:44.172 "rw_mbytes_per_sec": 0, 00:37:44.172 "r_mbytes_per_sec": 0, 00:37:44.172 "w_mbytes_per_sec": 0 00:37:44.172 }, 00:37:44.172 "claimed": true, 00:37:44.172 "claim_type": "exclusive_write", 00:37:44.172 "zoned": false, 00:37:44.172 "supported_io_types": { 00:37:44.172 "read": true, 00:37:44.172 "write": true, 00:37:44.172 "unmap": true, 00:37:44.172 "flush": true, 00:37:44.172 "reset": true, 00:37:44.172 "nvme_admin": false, 00:37:44.172 "nvme_io": false, 00:37:44.172 "nvme_io_md": false, 00:37:44.172 "write_zeroes": true, 00:37:44.172 "zcopy": true, 00:37:44.172 "get_zone_info": false, 00:37:44.172 "zone_management": false, 00:37:44.172 "zone_append": false, 00:37:44.172 "compare": false, 00:37:44.172 "compare_and_write": false, 00:37:44.172 "abort": true, 00:37:44.172 "seek_hole": false, 00:37:44.172 "seek_data": false, 00:37:44.172 "copy": true, 00:37:44.172 "nvme_iov_md": false 00:37:44.172 }, 00:37:44.172 "memory_domains": [ 00:37:44.172 { 00:37:44.172 "dma_device_id": "system", 00:37:44.172 "dma_device_type": 1 00:37:44.172 }, 00:37:44.172 { 00:37:44.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:44.172 "dma_device_type": 2 00:37:44.172 } 00:37:44.172 ], 00:37:44.172 "driver_specific": {} 00:37:44.172 }' 00:37:44.432 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:44.432 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:44.432 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:37:44.432 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:44.432 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:44.432 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:37:44.432 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:44.432 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:44.432 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:37:44.432 19:04:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:44.691 19:04:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:44.691 19:04:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:44.691 19:04:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:37:44.691 [2024-07-25 19:04:45.235457] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:44.951 19:04:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:37:44.951 19:04:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:37:44.951 19:04:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:37:44.951 19:04:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:37:44.951 19:04:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:37:44.951 19:04:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:37:44.951 19:04:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:44.951 19:04:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:44.951 19:04:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:44.951 19:04:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:44.951 19:04:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:44.951 19:04:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:44.951 19:04:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:44.951 19:04:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:44.951 19:04:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:44.951 19:04:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:44.951 19:04:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:45.210 19:04:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:45.210 "name": "Existed_Raid", 00:37:45.210 "uuid": "4a974bca-169c-4d63-974d-f56782e38d55", 00:37:45.210 "strip_size_kb": 0, 00:37:45.210 "state": "online", 00:37:45.210 "raid_level": "raid1", 00:37:45.210 "superblock": true, 00:37:45.210 "num_base_bdevs": 2, 00:37:45.210 "num_base_bdevs_discovered": 1, 00:37:45.210 "num_base_bdevs_operational": 1, 00:37:45.210 "base_bdevs_list": [ 00:37:45.210 { 00:37:45.210 "name": null, 
00:37:45.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:45.210 "is_configured": false, 00:37:45.210 "data_offset": 256, 00:37:45.210 "data_size": 7936 00:37:45.210 }, 00:37:45.210 { 00:37:45.210 "name": "BaseBdev2", 00:37:45.210 "uuid": "694d699f-1739-4368-ba6c-9f114fcd5644", 00:37:45.210 "is_configured": true, 00:37:45.210 "data_offset": 256, 00:37:45.210 "data_size": 7936 00:37:45.210 } 00:37:45.210 ] 00:37:45.210 }' 00:37:45.210 19:04:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:45.210 19:04:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:45.779 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:37:45.779 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:37:45.779 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:45.779 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:37:46.039 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:37:46.039 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:46.039 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:37:46.039 [2024-07-25 19:04:46.600695] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:46.039 [2024-07-25 19:04:46.600931] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:46.298 [2024-07-25 19:04:46.684863] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:46.298 [2024-07-25 19:04:46.686153] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:46.298 [2024-07-25 19:04:46.686548] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name Existed_Raid, state offline 00:37:46.298 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:37:46.298 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:37:46.298 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:46.298 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:37:46.558 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:37:46.558 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:37:46.558 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:37:46.558 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 161357 00:37:46.558 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
common/autotest_common.sh@950 -- # '[' -z 161357 ']' 00:37:46.558 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 161357 00:37:46.558 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:37:46.558 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:46.558 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 161357 00:37:46.558 killing process with pid 161357 00:37:46.558 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:46.558 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:46.558 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 161357' 00:37:46.558 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 161357 00:37:46.558 [2024-07-25 19:04:46.949536] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:46.558 19:04:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 161357 00:37:46.558 [2024-07-25 19:04:46.949639] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:47.496 ************************************ 00:37:47.496 END TEST raid_state_function_test_sb_md_interleaved 00:37:47.496 ************************************ 00:37:47.496 19:04:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:37:47.496 00:37:47.496 real 0m10.704s 00:37:47.496 user 0m18.009s 00:37:47.496 sys 0m1.906s 00:37:47.496 19:04:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:47.496 19:04:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:47.496 19:04:48 bdev_raid -- bdev/bdev_raid.sh@993 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:37:47.496 19:04:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:47.496 19:04:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:47.496 19:04:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:47.496 ************************************ 00:37:47.496 START TEST raid_superblock_test_md_interleaved 00:37:47.496 ************************************ 00:37:47.496 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:37:47.496 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:37:47.496 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:37:47.496 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:37:47.496 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:37:47.496 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:37:47.496 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:37:47.496 19:04:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:37:47.496 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:37:47.496 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:37:47.496 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@414 -- # local strip_size 00:37:47.496 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:37:47.496 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:37:47.496 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:37:47.496 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:37:47.496 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:37:47.756 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@427 -- # raid_pid=161721 00:37:47.756 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@428 -- # waitforlisten 161721 /var/tmp/spdk-raid.sock 00:37:47.756 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 161721 ']' 00:37:47.756 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:37:47.756 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:47.756 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:47.756 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:47.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:47.756 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:47.756 19:04:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:47.756 [2024-07-25 19:04:48.158282] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:37:47.756 [2024-07-25 19:04:48.158532] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161721 ] 00:37:48.015 [2024-07-25 19:04:48.348523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:48.275 [2024-07-25 19:04:48.621381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:48.275 [2024-07-25 19:04:48.811036] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:48.534 19:04:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:48.534 19:04:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:37:48.534 19:04:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:37:48.534 19:04:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:37:48.534 19:04:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:37:48.534 19:04:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:37:48.534 19:04:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:37:48.534 19:04:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:48.534 19:04:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:37:48.534 19:04:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:48.534 19:04:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:37:48.794 malloc1 00:37:48.794 19:04:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:49.053 [2024-07-25 19:04:49.498391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:49.053 [2024-07-25 19:04:49.498492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:49.053 [2024-07-25 19:04:49.498534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:37:49.053 [2024-07-25 19:04:49.498555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:49.053 [2024-07-25 19:04:49.500818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:49.053 [2024-07-25 19:04:49.500881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:49.053 pt1 00:37:49.053 19:04:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:37:49.053 19:04:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:37:49.053 19:04:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:37:49.053 19:04:49 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:37:49.053 19:04:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:37:49.053 19:04:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:49.053 19:04:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:37:49.053 19:04:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:49.053 19:04:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:37:49.312 malloc2 00:37:49.312 19:04:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:49.570 [2024-07-25 19:04:50.017855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:49.570 [2024-07-25 19:04:50.017978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:49.570 [2024-07-25 19:04:50.018017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:37:49.570 [2024-07-25 19:04:50.018057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:49.570 [2024-07-25 19:04:50.020473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:49.570 [2024-07-25 19:04:50.020519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:49.570 pt2 00:37:49.570 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:37:49.570 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:37:49.570 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:37:49.830 [2024-07-25 19:04:50.185912] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:49.830 [2024-07-25 19:04:50.188055] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:49.830 [2024-07-25 19:04:50.188257] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:37:49.830 [2024-07-25 19:04:50.188267] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:37:49.830 [2024-07-25 19:04:50.188367] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:37:49.830 [2024-07-25 19:04:50.188428] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:37:49.830 [2024-07-25 19:04:50.188436] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:37:49.830 [2024-07-25 19:04:50.188503] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:49.830 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:49.830 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:37:49.830 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:49.830 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:49.830 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:49.830 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:49.830 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:49.830 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:49.830 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:49.830 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:49.830 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:49.830 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:49.830 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:49.830 "name": "raid_bdev1", 00:37:49.830 "uuid": "e4c0e087-f32a-4553-9644-be93a0dbf648", 00:37:49.830 "strip_size_kb": 0, 00:37:49.830 "state": "online", 00:37:49.830 "raid_level": "raid1", 00:37:49.830 "superblock": true, 00:37:49.830 "num_base_bdevs": 2, 00:37:49.830 "num_base_bdevs_discovered": 2, 00:37:49.830 "num_base_bdevs_operational": 2, 00:37:49.830 "base_bdevs_list": [ 00:37:49.830 { 00:37:49.830 "name": "pt1", 00:37:49.830 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:49.830 "is_configured": true, 00:37:49.830 "data_offset": 256, 00:37:49.830 "data_size": 7936 00:37:49.830 }, 00:37:49.830 { 00:37:49.830 "name": "pt2", 00:37:49.830 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:49.830 "is_configured": true, 00:37:49.830 "data_offset": 256, 00:37:49.830 "data_size": 7936 00:37:49.830 } 00:37:49.830 ] 00:37:49.830 }' 00:37:49.830 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:49.830 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:50.399 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:37:50.399 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:37:50.399 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:37:50.399 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:37:50.399 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:37:50.399 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:37:50.399 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:50.399 19:04:50 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:37:50.659 [2024-07-25 19:04:51.082616] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:50.659 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:37:50.659 "name": "raid_bdev1", 00:37:50.659 "aliases": [ 00:37:50.659 "e4c0e087-f32a-4553-9644-be93a0dbf648" 00:37:50.659 ], 00:37:50.659 "product_name": "Raid Volume", 00:37:50.659 "block_size": 4128, 00:37:50.659 "num_blocks": 7936, 00:37:50.659 "uuid": "e4c0e087-f32a-4553-9644-be93a0dbf648", 00:37:50.659 "md_size": 32, 00:37:50.659 "md_interleave": true, 00:37:50.659 "dif_type": 0, 00:37:50.659 "assigned_rate_limits": { 00:37:50.659 "rw_ios_per_sec": 0, 00:37:50.659 "rw_mbytes_per_sec": 0, 00:37:50.659 "r_mbytes_per_sec": 0, 00:37:50.659 "w_mbytes_per_sec": 0 00:37:50.659 }, 00:37:50.659 "claimed": false, 00:37:50.659 "zoned": false, 00:37:50.659 "supported_io_types": { 00:37:50.659 "read": true, 00:37:50.659 "write": true, 00:37:50.659 "unmap": false, 00:37:50.659 "flush": false, 00:37:50.659 "reset": true, 00:37:50.659 "nvme_admin": false, 00:37:50.659 "nvme_io": false, 00:37:50.659 "nvme_io_md": false, 00:37:50.659 "write_zeroes": true, 00:37:50.659 "zcopy": false, 00:37:50.659 "get_zone_info": false, 00:37:50.659 "zone_management": false, 00:37:50.659 "zone_append": false, 00:37:50.659 "compare": false, 00:37:50.659 "compare_and_write": false, 00:37:50.659 "abort": false, 00:37:50.659 "seek_hole": false, 00:37:50.659 "seek_data": false, 00:37:50.659 "copy": false, 00:37:50.659 "nvme_iov_md": false 00:37:50.659 }, 00:37:50.659 "memory_domains": [ 00:37:50.659 { 00:37:50.659 "dma_device_id": "system", 00:37:50.659 "dma_device_type": 1 00:37:50.659 }, 00:37:50.659 { 00:37:50.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:50.659 "dma_device_type": 2 00:37:50.659 }, 00:37:50.659 { 00:37:50.659 "dma_device_id": "system", 00:37:50.659 "dma_device_type": 1 00:37:50.659 }, 00:37:50.659 { 00:37:50.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:50.659 "dma_device_type": 2 00:37:50.659 } 00:37:50.659 ], 00:37:50.659 "driver_specific": { 00:37:50.659 "raid": { 00:37:50.659 "uuid": "e4c0e087-f32a-4553-9644-be93a0dbf648", 00:37:50.659 "strip_size_kb": 0, 00:37:50.659 "state": "online", 00:37:50.659 "raid_level": "raid1", 00:37:50.659 "superblock": true, 00:37:50.659 "num_base_bdevs": 2, 00:37:50.659 "num_base_bdevs_discovered": 2, 00:37:50.659 "num_base_bdevs_operational": 2, 00:37:50.659 "base_bdevs_list": [ 00:37:50.659 { 00:37:50.659 "name": "pt1", 00:37:50.659 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:50.659 "is_configured": true, 00:37:50.659 "data_offset": 256, 00:37:50.659 "data_size": 7936 00:37:50.659 }, 00:37:50.659 { 00:37:50.659 "name": "pt2", 00:37:50.659 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:50.659 "is_configured": true, 00:37:50.659 "data_offset": 256, 00:37:50.659 "data_size": 7936 00:37:50.659 } 00:37:50.659 ] 00:37:50.659 } 00:37:50.659 } 00:37:50.659 }' 00:37:50.659 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:50.659 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:37:50.659 pt2' 00:37:50.659 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:50.659 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:37:50.659 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:50.918 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:50.918 "name": "pt1", 00:37:50.918 "aliases": [ 00:37:50.918 "00000000-0000-0000-0000-000000000001" 00:37:50.918 ], 00:37:50.918 "product_name": "passthru", 00:37:50.918 "block_size": 4128, 00:37:50.918 "num_blocks": 8192, 00:37:50.918 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:50.918 "md_size": 32, 00:37:50.918 "md_interleave": true, 00:37:50.918 "dif_type": 0, 00:37:50.918 "assigned_rate_limits": { 00:37:50.919 "rw_ios_per_sec": 0, 00:37:50.919 "rw_mbytes_per_sec": 0, 00:37:50.919 "r_mbytes_per_sec": 0, 00:37:50.919 "w_mbytes_per_sec": 0 00:37:50.919 }, 00:37:50.919 "claimed": true, 00:37:50.919 "claim_type": "exclusive_write", 00:37:50.919 "zoned": false, 00:37:50.919 "supported_io_types": { 00:37:50.919 "read": true, 00:37:50.919 "write": true, 00:37:50.919 "unmap": true, 00:37:50.919 "flush": true, 00:37:50.919 "reset": true, 00:37:50.919 "nvme_admin": false, 00:37:50.919 "nvme_io": false, 00:37:50.919 "nvme_io_md": false, 00:37:50.919 "write_zeroes": true, 00:37:50.919 "zcopy": true, 00:37:50.919 "get_zone_info": false, 00:37:50.919 "zone_management": false, 00:37:50.919 "zone_append": false, 00:37:50.919 "compare": false, 00:37:50.919 "compare_and_write": false, 00:37:50.919 "abort": true, 00:37:50.919 "seek_hole": false, 00:37:50.919 "seek_data": false, 00:37:50.919 "copy": true, 00:37:50.919 "nvme_iov_md": false 00:37:50.919 }, 00:37:50.919 "memory_domains": [ 00:37:50.919 { 00:37:50.919 "dma_device_id": "system", 00:37:50.919 "dma_device_type": 1 00:37:50.919 }, 00:37:50.919 { 00:37:50.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:50.919 "dma_device_type": 2 00:37:50.919 } 00:37:50.919 ], 00:37:50.919 "driver_specific": { 00:37:50.919 "passthru": { 00:37:50.919 "name": "pt1", 00:37:50.919 "base_bdev_name": "malloc1" 00:37:50.919 } 00:37:50.919 } 00:37:50.919 }' 00:37:50.919 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:50.919 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:50.919 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:37:50.919 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:51.178 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:51.178 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:37:51.178 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:51.178 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:51.178 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:37:51.178 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:51.178 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:51.178 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:51.178 19:04:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:51.178 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:37:51.178 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:51.437 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:51.437 "name": "pt2", 00:37:51.437 "aliases": [ 00:37:51.437 "00000000-0000-0000-0000-000000000002" 00:37:51.437 ], 00:37:51.437 "product_name": "passthru", 00:37:51.437 "block_size": 4128, 00:37:51.437 "num_blocks": 8192, 00:37:51.437 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:51.437 "md_size": 32, 00:37:51.437 "md_interleave": true, 00:37:51.437 "dif_type": 0, 00:37:51.437 "assigned_rate_limits": { 00:37:51.437 "rw_ios_per_sec": 0, 00:37:51.437 "rw_mbytes_per_sec": 0, 00:37:51.437 "r_mbytes_per_sec": 0, 00:37:51.437 "w_mbytes_per_sec": 0 00:37:51.437 }, 00:37:51.437 "claimed": true, 00:37:51.437 "claim_type": "exclusive_write", 00:37:51.437 "zoned": false, 00:37:51.437 "supported_io_types": { 00:37:51.437 "read": true, 00:37:51.437 "write": true, 00:37:51.437 "unmap": true, 00:37:51.437 "flush": true, 00:37:51.437 "reset": true, 00:37:51.437 "nvme_admin": false, 00:37:51.437 "nvme_io": false, 00:37:51.437 "nvme_io_md": false, 00:37:51.437 "write_zeroes": true, 00:37:51.437 "zcopy": true, 00:37:51.437 "get_zone_info": false, 00:37:51.437 "zone_management": false, 00:37:51.437 "zone_append": false, 00:37:51.437 "compare": false, 00:37:51.437 "compare_and_write": false, 00:37:51.437 "abort": true, 00:37:51.437 "seek_hole": false, 00:37:51.437 "seek_data": false, 00:37:51.437 "copy": true, 00:37:51.437 "nvme_iov_md": false 00:37:51.437 }, 00:37:51.437 "memory_domains": [ 00:37:51.437 { 00:37:51.437 "dma_device_id": "system", 00:37:51.437 "dma_device_type": 1 00:37:51.437 }, 00:37:51.437 { 00:37:51.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:51.437 "dma_device_type": 2 00:37:51.437 } 00:37:51.437 ], 00:37:51.437 "driver_specific": { 00:37:51.437 "passthru": { 00:37:51.437 "name": "pt2", 00:37:51.437 "base_bdev_name": "malloc2" 00:37:51.437 } 00:37:51.437 } 00:37:51.437 }' 00:37:51.437 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:51.437 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:51.437 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:37:51.437 19:04:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:51.697 19:04:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:51.697 19:04:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:37:51.697 19:04:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:51.697 19:04:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:51.697 19:04:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:37:51.697 19:04:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:51.697 19:04:52 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:51.697 19:04:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:51.697 19:04:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:51.697 19:04:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:37:51.956 [2024-07-25 19:04:52.466803] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:51.956 19:04:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=e4c0e087-f32a-4553-9644-be93a0dbf648 00:37:51.956 19:04:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' -z e4c0e087-f32a-4553-9644-be93a0dbf648 ']' 00:37:51.956 19:04:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:52.216 [2024-07-25 19:04:52.646628] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:52.216 [2024-07-25 19:04:52.646655] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:52.216 [2024-07-25 19:04:52.646750] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:52.216 [2024-07-25 19:04:52.646814] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:52.216 [2024-07-25 19:04:52.646823] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:37:52.216 19:04:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:52.216 19:04:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:37:52.475 19:04:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:37:52.475 19:04:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:37:52.475 19:04:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:37:52.475 19:04:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:37:52.734 19:04:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:37:52.734 19:04:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:37:52.734 19:04:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:37:52.734 19:04:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:37:52.992 19:04:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:37:52.992 19:04:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:37:52.992 19:04:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:37:52.992 19:04:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:37:52.992 19:04:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:52.992 19:04:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:52.992 19:04:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:52.992 19:04:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:52.992 19:04:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:52.992 19:04:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:52.992 19:04:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:52.992 19:04:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:37:52.992 19:04:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:37:53.249 [2024-07-25 19:04:53.738786] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:37:53.249 [2024-07-25 19:04:53.741153] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:37:53.249 [2024-07-25 19:04:53.741346] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:37:53.249 [2024-07-25 19:04:53.741547] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:37:53.249 [2024-07-25 19:04:53.741669] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:53.249 [2024-07-25 19:04:53.741704] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state configuring 00:37:53.249 request: 00:37:53.249 { 00:37:53.249 "name": "raid_bdev1", 00:37:53.249 "raid_level": "raid1", 00:37:53.249 "base_bdevs": [ 00:37:53.249 "malloc1", 00:37:53.249 "malloc2" 00:37:53.249 ], 00:37:53.249 "superblock": false, 00:37:53.249 "method": "bdev_raid_create", 00:37:53.249 "req_id": 1 00:37:53.249 } 00:37:53.249 Got JSON-RPC error response 00:37:53.249 response: 00:37:53.249 { 00:37:53.249 "code": -17, 00:37:53.249 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:37:53.249 } 00:37:53.249 19:04:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:37:53.249 19:04:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:53.249 19:04:53 bdev_raid.raid_superblock_test_md_interleaved 
-- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:53.249 19:04:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:53.249 19:04:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:53.249 19:04:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:37:53.507 19:04:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:37:53.507 19:04:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:37:53.507 19:04:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:53.766 [2024-07-25 19:04:54.170815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:53.766 [2024-07-25 19:04:54.171011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:53.766 [2024-07-25 19:04:54.171075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:37:53.767 [2024-07-25 19:04:54.171169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:53.767 [2024-07-25 19:04:54.173496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:53.767 [2024-07-25 19:04:54.173699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:53.767 [2024-07-25 19:04:54.173864] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:53.767 [2024-07-25 19:04:54.174045] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:53.767 pt1 00:37:53.767 19:04:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:37:53.767 19:04:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:53.767 19:04:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:53.767 19:04:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:53.767 19:04:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:53.767 19:04:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:53.767 19:04:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:53.767 19:04:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:53.767 19:04:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:53.767 19:04:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:53.767 19:04:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:53.767 19:04:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:54.025 19:04:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:54.025 "name": "raid_bdev1", 00:37:54.025 "uuid": "e4c0e087-f32a-4553-9644-be93a0dbf648", 00:37:54.025 "strip_size_kb": 0, 00:37:54.025 "state": "configuring", 00:37:54.025 "raid_level": "raid1", 00:37:54.025 "superblock": true, 00:37:54.025 "num_base_bdevs": 2, 00:37:54.025 "num_base_bdevs_discovered": 1, 00:37:54.025 "num_base_bdevs_operational": 2, 00:37:54.025 "base_bdevs_list": [ 00:37:54.025 { 00:37:54.025 "name": "pt1", 00:37:54.025 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:54.025 "is_configured": true, 00:37:54.025 "data_offset": 256, 00:37:54.025 "data_size": 7936 00:37:54.025 }, 00:37:54.025 { 00:37:54.025 "name": null, 00:37:54.025 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:54.025 "is_configured": false, 00:37:54.025 "data_offset": 256, 00:37:54.025 "data_size": 7936 00:37:54.025 } 00:37:54.025 ] 00:37:54.025 }' 00:37:54.025 19:04:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:54.025 19:04:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:54.593 19:04:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:37:54.593 19:04:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:37:54.593 19:04:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:37:54.593 19:04:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:54.593 [2024-07-25 19:04:55.078964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:54.593 [2024-07-25 19:04:55.079182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:54.593 [2024-07-25 19:04:55.079250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:37:54.593 [2024-07-25 19:04:55.079341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:54.593 [2024-07-25 19:04:55.079555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:54.593 [2024-07-25 19:04:55.079741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:54.593 [2024-07-25 19:04:55.079835] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:54.593 [2024-07-25 19:04:55.079903] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:54.593 [2024-07-25 19:04:55.080234] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013100 00:37:54.593 [2024-07-25 19:04:55.080275] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:37:54.593 [2024-07-25 19:04:55.080369] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:37:54.593 [2024-07-25 19:04:55.080681] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013100 00:37:54.593 [2024-07-25 19:04:55.080720] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013100 00:37:54.593 [2024-07-25 19:04:55.080843] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:54.593 pt2 00:37:54.593 
19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:37:54.593 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:37:54.593 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:54.593 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:54.593 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:54.593 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:54.593 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:54.593 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:54.593 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:54.593 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:54.593 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:54.593 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:54.593 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:54.593 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:54.852 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:54.852 "name": "raid_bdev1", 00:37:54.852 "uuid": "e4c0e087-f32a-4553-9644-be93a0dbf648", 00:37:54.852 "strip_size_kb": 0, 00:37:54.852 "state": "online", 00:37:54.852 "raid_level": "raid1", 00:37:54.852 "superblock": true, 00:37:54.852 "num_base_bdevs": 2, 00:37:54.852 "num_base_bdevs_discovered": 2, 00:37:54.852 "num_base_bdevs_operational": 2, 00:37:54.852 "base_bdevs_list": [ 00:37:54.852 { 00:37:54.852 "name": "pt1", 00:37:54.852 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:54.852 "is_configured": true, 00:37:54.852 "data_offset": 256, 00:37:54.852 "data_size": 7936 00:37:54.852 }, 00:37:54.852 { 00:37:54.852 "name": "pt2", 00:37:54.852 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:54.852 "is_configured": true, 00:37:54.852 "data_offset": 256, 00:37:54.852 "data_size": 7936 00:37:54.852 } 00:37:54.852 ] 00:37:54.852 }' 00:37:54.852 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:54.852 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:55.418 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:37:55.418 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:37:55.418 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:37:55.418 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:37:55.418 19:04:55 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:37:55.418 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:37:55.418 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:55.418 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:37:55.418 [2024-07-25 19:04:55.967342] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:55.418 19:04:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:37:55.418 "name": "raid_bdev1", 00:37:55.418 "aliases": [ 00:37:55.418 "e4c0e087-f32a-4553-9644-be93a0dbf648" 00:37:55.418 ], 00:37:55.418 "product_name": "Raid Volume", 00:37:55.418 "block_size": 4128, 00:37:55.418 "num_blocks": 7936, 00:37:55.418 "uuid": "e4c0e087-f32a-4553-9644-be93a0dbf648", 00:37:55.418 "md_size": 32, 00:37:55.418 "md_interleave": true, 00:37:55.418 "dif_type": 0, 00:37:55.418 "assigned_rate_limits": { 00:37:55.418 "rw_ios_per_sec": 0, 00:37:55.418 "rw_mbytes_per_sec": 0, 00:37:55.418 "r_mbytes_per_sec": 0, 00:37:55.418 "w_mbytes_per_sec": 0 00:37:55.418 }, 00:37:55.418 "claimed": false, 00:37:55.418 "zoned": false, 00:37:55.418 "supported_io_types": { 00:37:55.418 "read": true, 00:37:55.418 "write": true, 00:37:55.418 "unmap": false, 00:37:55.418 "flush": false, 00:37:55.418 "reset": true, 00:37:55.418 "nvme_admin": false, 00:37:55.418 "nvme_io": false, 00:37:55.418 "nvme_io_md": false, 00:37:55.418 "write_zeroes": true, 00:37:55.418 "zcopy": false, 00:37:55.418 "get_zone_info": false, 00:37:55.418 "zone_management": false, 00:37:55.418 "zone_append": false, 00:37:55.418 "compare": false, 00:37:55.418 "compare_and_write": false, 00:37:55.418 "abort": false, 00:37:55.418 "seek_hole": false, 00:37:55.418 "seek_data": false, 00:37:55.418 "copy": false, 00:37:55.418 "nvme_iov_md": false 00:37:55.418 }, 00:37:55.418 "memory_domains": [ 00:37:55.418 { 00:37:55.418 "dma_device_id": "system", 00:37:55.418 "dma_device_type": 1 00:37:55.418 }, 00:37:55.418 { 00:37:55.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:55.418 "dma_device_type": 2 00:37:55.418 }, 00:37:55.418 { 00:37:55.418 "dma_device_id": "system", 00:37:55.418 "dma_device_type": 1 00:37:55.418 }, 00:37:55.418 { 00:37:55.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:55.418 "dma_device_type": 2 00:37:55.418 } 00:37:55.418 ], 00:37:55.418 "driver_specific": { 00:37:55.418 "raid": { 00:37:55.418 "uuid": "e4c0e087-f32a-4553-9644-be93a0dbf648", 00:37:55.418 "strip_size_kb": 0, 00:37:55.418 "state": "online", 00:37:55.418 "raid_level": "raid1", 00:37:55.418 "superblock": true, 00:37:55.418 "num_base_bdevs": 2, 00:37:55.418 "num_base_bdevs_discovered": 2, 00:37:55.418 "num_base_bdevs_operational": 2, 00:37:55.418 "base_bdevs_list": [ 00:37:55.418 { 00:37:55.418 "name": "pt1", 00:37:55.418 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:55.418 "is_configured": true, 00:37:55.418 "data_offset": 256, 00:37:55.418 "data_size": 7936 00:37:55.418 }, 00:37:55.418 { 00:37:55.418 "name": "pt2", 00:37:55.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:55.418 "is_configured": true, 00:37:55.418 "data_offset": 256, 00:37:55.418 "data_size": 7936 00:37:55.418 } 00:37:55.418 ] 00:37:55.418 } 00:37:55.418 } 00:37:55.419 }' 00:37:55.419 19:04:55 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:55.677 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:37:55.677 pt2' 00:37:55.677 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:55.677 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:37:55.677 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:55.936 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:55.936 "name": "pt1", 00:37:55.936 "aliases": [ 00:37:55.936 "00000000-0000-0000-0000-000000000001" 00:37:55.936 ], 00:37:55.936 "product_name": "passthru", 00:37:55.936 "block_size": 4128, 00:37:55.936 "num_blocks": 8192, 00:37:55.936 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:55.936 "md_size": 32, 00:37:55.936 "md_interleave": true, 00:37:55.936 "dif_type": 0, 00:37:55.936 "assigned_rate_limits": { 00:37:55.936 "rw_ios_per_sec": 0, 00:37:55.936 "rw_mbytes_per_sec": 0, 00:37:55.936 "r_mbytes_per_sec": 0, 00:37:55.936 "w_mbytes_per_sec": 0 00:37:55.936 }, 00:37:55.936 "claimed": true, 00:37:55.936 "claim_type": "exclusive_write", 00:37:55.936 "zoned": false, 00:37:55.936 "supported_io_types": { 00:37:55.936 "read": true, 00:37:55.936 "write": true, 00:37:55.936 "unmap": true, 00:37:55.936 "flush": true, 00:37:55.936 "reset": true, 00:37:55.936 "nvme_admin": false, 00:37:55.936 "nvme_io": false, 00:37:55.936 "nvme_io_md": false, 00:37:55.936 "write_zeroes": true, 00:37:55.936 "zcopy": true, 00:37:55.936 "get_zone_info": false, 00:37:55.936 "zone_management": false, 00:37:55.936 "zone_append": false, 00:37:55.936 "compare": false, 00:37:55.936 "compare_and_write": false, 00:37:55.936 "abort": true, 00:37:55.936 "seek_hole": false, 00:37:55.936 "seek_data": false, 00:37:55.936 "copy": true, 00:37:55.936 "nvme_iov_md": false 00:37:55.936 }, 00:37:55.936 "memory_domains": [ 00:37:55.936 { 00:37:55.936 "dma_device_id": "system", 00:37:55.936 "dma_device_type": 1 00:37:55.936 }, 00:37:55.936 { 00:37:55.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:55.936 "dma_device_type": 2 00:37:55.936 } 00:37:55.936 ], 00:37:55.936 "driver_specific": { 00:37:55.936 "passthru": { 00:37:55.936 "name": "pt1", 00:37:55.936 "base_bdev_name": "malloc1" 00:37:55.936 } 00:37:55.936 } 00:37:55.936 }' 00:37:55.936 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:55.936 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:55.936 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:37:55.936 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:55.936 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:55.936 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:37:55.936 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:55.936 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # 
jq .md_interleave 00:37:55.936 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:37:55.936 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:56.196 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:56.196 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:56.196 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:56.196 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:37:56.196 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:56.455 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:56.455 "name": "pt2", 00:37:56.455 "aliases": [ 00:37:56.455 "00000000-0000-0000-0000-000000000002" 00:37:56.455 ], 00:37:56.455 "product_name": "passthru", 00:37:56.455 "block_size": 4128, 00:37:56.455 "num_blocks": 8192, 00:37:56.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:56.455 "md_size": 32, 00:37:56.455 "md_interleave": true, 00:37:56.455 "dif_type": 0, 00:37:56.455 "assigned_rate_limits": { 00:37:56.455 "rw_ios_per_sec": 0, 00:37:56.455 "rw_mbytes_per_sec": 0, 00:37:56.455 "r_mbytes_per_sec": 0, 00:37:56.455 "w_mbytes_per_sec": 0 00:37:56.455 }, 00:37:56.455 "claimed": true, 00:37:56.455 "claim_type": "exclusive_write", 00:37:56.455 "zoned": false, 00:37:56.455 "supported_io_types": { 00:37:56.455 "read": true, 00:37:56.455 "write": true, 00:37:56.455 "unmap": true, 00:37:56.455 "flush": true, 00:37:56.455 "reset": true, 00:37:56.455 "nvme_admin": false, 00:37:56.455 "nvme_io": false, 00:37:56.455 "nvme_io_md": false, 00:37:56.455 "write_zeroes": true, 00:37:56.455 "zcopy": true, 00:37:56.455 "get_zone_info": false, 00:37:56.455 "zone_management": false, 00:37:56.455 "zone_append": false, 00:37:56.455 "compare": false, 00:37:56.455 "compare_and_write": false, 00:37:56.455 "abort": true, 00:37:56.455 "seek_hole": false, 00:37:56.455 "seek_data": false, 00:37:56.455 "copy": true, 00:37:56.455 "nvme_iov_md": false 00:37:56.455 }, 00:37:56.455 "memory_domains": [ 00:37:56.455 { 00:37:56.455 "dma_device_id": "system", 00:37:56.455 "dma_device_type": 1 00:37:56.455 }, 00:37:56.455 { 00:37:56.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:56.455 "dma_device_type": 2 00:37:56.455 } 00:37:56.455 ], 00:37:56.455 "driver_specific": { 00:37:56.455 "passthru": { 00:37:56.455 "name": "pt2", 00:37:56.455 "base_bdev_name": "malloc2" 00:37:56.455 } 00:37:56.455 } 00:37:56.455 }' 00:37:56.455 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:56.455 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:56.455 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:37:56.455 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:56.455 19:04:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:56.714 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:37:56.714 19:04:57 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:56.714 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:56.714 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:37:56.714 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:56.714 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:56.714 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:56.714 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:56.714 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:37:56.973 [2024-07-25 19:04:57.391585] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:56.973 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # '[' e4c0e087-f32a-4553-9644-be93a0dbf648 '!=' e4c0e087-f32a-4553-9644-be93a0dbf648 ']' 00:37:56.973 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:37:56.973 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:37:56.973 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:37:56.973 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:37:57.231 [2024-07-25 19:04:57.671509] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:37:57.231 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:57.231 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:57.231 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:57.232 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:57.232 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:57.232 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:57.232 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:57.232 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:57.232 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:57.232 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:57.232 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:57.232 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:57.490 19:04:57 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:57.490 "name": "raid_bdev1", 00:37:57.490 "uuid": "e4c0e087-f32a-4553-9644-be93a0dbf648", 00:37:57.490 "strip_size_kb": 0, 00:37:57.490 "state": "online", 00:37:57.490 "raid_level": "raid1", 00:37:57.490 "superblock": true, 00:37:57.490 "num_base_bdevs": 2, 00:37:57.490 "num_base_bdevs_discovered": 1, 00:37:57.490 "num_base_bdevs_operational": 1, 00:37:57.490 "base_bdevs_list": [ 00:37:57.490 { 00:37:57.490 "name": null, 00:37:57.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:57.490 "is_configured": false, 00:37:57.490 "data_offset": 256, 00:37:57.490 "data_size": 7936 00:37:57.490 }, 00:37:57.490 { 00:37:57.490 "name": "pt2", 00:37:57.490 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:57.490 "is_configured": true, 00:37:57.490 "data_offset": 256, 00:37:57.490 "data_size": 7936 00:37:57.490 } 00:37:57.490 ] 00:37:57.490 }' 00:37:57.490 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:57.490 19:04:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:58.058 19:04:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:58.318 [2024-07-25 19:04:58.647606] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:58.318 [2024-07-25 19:04:58.647768] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:58.318 [2024-07-25 19:04:58.647979] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:58.318 [2024-07-25 19:04:58.648131] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:58.318 [2024-07-25 19:04:58.648204] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013100 name raid_bdev1, state offline 00:37:58.318 19:04:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:58.318 19:04:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:37:58.576 19:04:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:37:58.577 19:04:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:37:58.577 19:04:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:37:58.577 19:04:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:37:58.577 19:04:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:37:58.836 19:04:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:37:58.836 19:04:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:37:58.836 19:04:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:37:58.836 19:04:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:37:58.836 19:04:59 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@534 -- # i=1 00:37:58.836 19:04:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:58.836 [2024-07-25 19:04:59.331659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:58.836 [2024-07-25 19:04:59.331889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:58.836 [2024-07-25 19:04:59.331968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:37:58.836 [2024-07-25 19:04:59.332128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:58.836 [2024-07-25 19:04:59.334547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:58.836 [2024-07-25 19:04:59.334738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:58.836 [2024-07-25 19:04:59.334909] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:58.836 [2024-07-25 19:04:59.335036] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:58.836 [2024-07-25 19:04:59.335161] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013480 00:37:58.836 [2024-07-25 19:04:59.335370] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:37:58.836 [2024-07-25 19:04:59.335471] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:37:58.836 [2024-07-25 19:04:59.335588] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013480 00:37:58.836 [2024-07-25 19:04:59.335623] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013480 00:37:58.836 [2024-07-25 19:04:59.335788] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:58.836 pt2 00:37:58.836 19:04:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:58.836 19:04:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:58.836 19:04:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:58.836 19:04:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:58.836 19:04:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:58.836 19:04:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:58.836 19:04:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:58.836 19:04:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:58.836 19:04:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:58.836 19:04:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:58.836 19:04:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:58.836 19:04:59 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:59.096 19:04:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:59.096 "name": "raid_bdev1", 00:37:59.096 "uuid": "e4c0e087-f32a-4553-9644-be93a0dbf648", 00:37:59.096 "strip_size_kb": 0, 00:37:59.096 "state": "online", 00:37:59.096 "raid_level": "raid1", 00:37:59.096 "superblock": true, 00:37:59.096 "num_base_bdevs": 2, 00:37:59.096 "num_base_bdevs_discovered": 1, 00:37:59.096 "num_base_bdevs_operational": 1, 00:37:59.096 "base_bdevs_list": [ 00:37:59.096 { 00:37:59.096 "name": null, 00:37:59.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:59.096 "is_configured": false, 00:37:59.096 "data_offset": 256, 00:37:59.096 "data_size": 7936 00:37:59.096 }, 00:37:59.096 { 00:37:59.096 "name": "pt2", 00:37:59.096 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:59.096 "is_configured": true, 00:37:59.096 "data_offset": 256, 00:37:59.096 "data_size": 7936 00:37:59.096 } 00:37:59.096 ] 00:37:59.096 }' 00:37:59.096 19:04:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:59.096 19:04:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:59.664 19:05:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:59.923 [2024-07-25 19:05:00.406153] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:59.923 [2024-07-25 19:05:00.406323] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:59.923 [2024-07-25 19:05:00.406559] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:59.923 [2024-07-25 19:05:00.406706] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:59.923 [2024-07-25 19:05:00.406783] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013480 name raid_bdev1, state offline 00:37:59.923 19:05:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:59.923 19:05:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:38:00.182 19:05:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:38:00.182 19:05:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:38:00.182 19:05:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:38:00.182 19:05:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:38:00.442 [2024-07-25 19:05:00.838179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:00.442 [2024-07-25 19:05:00.838399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:00.442 [2024-07-25 19:05:00.838474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:38:00.442 [2024-07-25 19:05:00.838567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:00.442 [2024-07-25 19:05:00.840935] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:00.442 [2024-07-25 19:05:00.841103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:00.442 [2024-07-25 19:05:00.841257] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:38:00.442 [2024-07-25 19:05:00.841389] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:00.442 [2024-07-25 19:05:00.841539] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:38:00.442 [2024-07-25 19:05:00.841694] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:00.442 [2024-07-25 19:05:00.841739] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013800 name raid_bdev1, state configuring 00:38:00.442 [2024-07-25 19:05:00.841830] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:00.442 [2024-07-25 19:05:00.841999] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000013b80 00:38:00.442 [2024-07-25 19:05:00.842035] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:38:00.442 [2024-07-25 19:05:00.842122] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:38:00.442 [2024-07-25 19:05:00.842323] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000013b80 00:38:00.442 [2024-07-25 19:05:00.842360] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000013b80 00:38:00.442 [2024-07-25 19:05:00.842555] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:00.442 pt1 00:38:00.442 19:05:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:38:00.442 19:05:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:00.442 19:05:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:00.442 19:05:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:00.442 19:05:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:00.442 19:05:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:00.442 19:05:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:00.442 19:05:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:00.442 19:05:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:00.442 19:05:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:00.442 19:05:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:00.442 19:05:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:00.442 19:05:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:00.701 19:05:01 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:00.701 "name": "raid_bdev1", 00:38:00.701 "uuid": "e4c0e087-f32a-4553-9644-be93a0dbf648", 00:38:00.701 "strip_size_kb": 0, 00:38:00.701 "state": "online", 00:38:00.701 "raid_level": "raid1", 00:38:00.701 "superblock": true, 00:38:00.701 "num_base_bdevs": 2, 00:38:00.701 "num_base_bdevs_discovered": 1, 00:38:00.701 "num_base_bdevs_operational": 1, 00:38:00.701 "base_bdevs_list": [ 00:38:00.701 { 00:38:00.701 "name": null, 00:38:00.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:00.701 "is_configured": false, 00:38:00.701 "data_offset": 256, 00:38:00.701 "data_size": 7936 00:38:00.701 }, 00:38:00.701 { 00:38:00.701 "name": "pt2", 00:38:00.701 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:00.701 "is_configured": true, 00:38:00.701 "data_offset": 256, 00:38:00.701 "data_size": 7936 00:38:00.701 } 00:38:00.701 ] 00:38:00.701 }' 00:38:00.701 19:05:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:00.701 19:05:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:01.282 19:05:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:38:01.283 19:05:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:38:01.540 19:05:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:38:01.540 19:05:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:01.540 19:05:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:38:01.799 [2024-07-25 19:05:02.166506] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:01.799 19:05:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # '[' e4c0e087-f32a-4553-9644-be93a0dbf648 '!=' e4c0e087-f32a-4553-9644-be93a0dbf648 ']' 00:38:01.799 19:05:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@578 -- # killprocess 161721 00:38:01.799 19:05:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 161721 ']' 00:38:01.799 19:05:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 161721 00:38:01.799 19:05:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:38:01.799 19:05:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:01.799 19:05:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 161721 00:38:01.799 19:05:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:01.799 19:05:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:01.799 19:05:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 161721' 00:38:01.799 killing process with pid 161721 00:38:01.799 19:05:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 
-- # kill 161721 00:38:01.799 [2024-07-25 19:05:02.212001] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:01.799 19:05:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 161721 00:38:01.799 [2024-07-25 19:05:02.212261] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:01.799 [2024-07-25 19:05:02.212387] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:01.799 [2024-07-25 19:05:02.212469] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000013b80 name raid_bdev1, state offline 00:38:01.799 [2024-07-25 19:05:02.366093] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:03.230 19:05:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@580 -- # return 0 00:38:03.230 00:38:03.230 real 0m15.321s 00:38:03.230 user 0m26.832s 00:38:03.230 sys 0m2.725s 00:38:03.230 19:05:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:03.230 19:05:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:03.230 ************************************ 00:38:03.230 END TEST raid_superblock_test_md_interleaved 00:38:03.230 ************************************ 00:38:03.230 19:05:03 bdev_raid -- bdev/bdev_raid.sh@994 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:38:03.230 19:05:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:38:03.230 19:05:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:03.230 19:05:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:03.230 ************************************ 00:38:03.230 START TEST raid_rebuild_test_sb_md_interleaved 00:38:03.230 ************************************ 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # local verify=false 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # echo BaseBdev1 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # echo BaseBdev2 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # local strip_size 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # local create_arg 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@594 -- # local data_offset 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # raid_pid=162229 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # waitforlisten 162229 /var/tmp/spdk-raid.sock 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 162229 ']' 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:03.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:03.230 19:05:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:03.230 [2024-07-25 19:05:03.568957] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:38:03.230 I/O size of 3145728 is greater than zero copy threshold (65536). 00:38:03.230 Zero copy mechanism will not be used. 
00:38:03.230 [2024-07-25 19:05:03.569202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162229 ] 00:38:03.230 [2024-07-25 19:05:03.757682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:03.795 [2024-07-25 19:05:04.073488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:03.795 [2024-07-25 19:05:04.344120] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:04.053 19:05:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:04.053 19:05:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:38:04.053 19:05:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:38:04.053 19:05:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:38:04.311 BaseBdev1_malloc 00:38:04.311 19:05:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:04.570 [2024-07-25 19:05:04.986414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:04.570 [2024-07-25 19:05:04.986558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:04.570 [2024-07-25 19:05:04.986609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:38:04.570 [2024-07-25 19:05:04.986634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:04.570 [2024-07-25 19:05:04.988935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:04.570 [2024-07-25 19:05:04.988992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:04.570 BaseBdev1 00:38:04.570 19:05:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:38:04.570 19:05:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:38:04.828 BaseBdev2_malloc 00:38:04.828 19:05:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:38:05.086 [2024-07-25 19:05:05.472091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:38:05.087 [2024-07-25 19:05:05.472222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:05.087 [2024-07-25 19:05:05.472270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:38:05.087 [2024-07-25 19:05:05.472293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:05.087 [2024-07-25 19:05:05.474628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:05.087 [2024-07-25 19:05:05.474675] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev2 00:38:05.087 BaseBdev2 00:38:05.087 19:05:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:38:05.345 spare_malloc 00:38:05.345 19:05:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:38:05.603 spare_delay 00:38:05.603 19:05:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:38:05.603 [2024-07-25 19:05:06.119152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:05.603 [2024-07-25 19:05:06.119249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:05.603 [2024-07-25 19:05:06.119288] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:38:05.603 [2024-07-25 19:05:06.119315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:05.603 [2024-07-25 19:05:06.121577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:05.603 [2024-07-25 19:05:06.121629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:05.603 spare 00:38:05.603 19:05:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:38:05.861 [2024-07-25 19:05:06.291203] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:05.861 [2024-07-25 19:05:06.293335] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:05.861 [2024-07-25 19:05:06.293549] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012a00 00:38:05.861 [2024-07-25 19:05:06.293560] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:38:05.861 [2024-07-25 19:05:06.293655] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:38:05.861 [2024-07-25 19:05:06.293714] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012a00 00:38:05.861 [2024-07-25 19:05:06.293721] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012a00 00:38:05.861 [2024-07-25 19:05:06.293799] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:05.861 19:05:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:05.861 19:05:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:05.861 19:05:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:05.861 19:05:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:05.861 19:05:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:05.861 19:05:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # 
local num_base_bdevs_operational=2 00:38:05.861 19:05:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:05.862 19:05:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:05.862 19:05:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:05.862 19:05:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:05.862 19:05:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:05.862 19:05:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:06.119 19:05:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:06.119 "name": "raid_bdev1", 00:38:06.119 "uuid": "b1e90d77-7048-40a5-bc5f-cf6cfdb2da3d", 00:38:06.119 "strip_size_kb": 0, 00:38:06.119 "state": "online", 00:38:06.119 "raid_level": "raid1", 00:38:06.119 "superblock": true, 00:38:06.119 "num_base_bdevs": 2, 00:38:06.119 "num_base_bdevs_discovered": 2, 00:38:06.119 "num_base_bdevs_operational": 2, 00:38:06.119 "base_bdevs_list": [ 00:38:06.119 { 00:38:06.119 "name": "BaseBdev1", 00:38:06.119 "uuid": "52554443-583e-5a6a-b768-ab7a8f1166b5", 00:38:06.119 "is_configured": true, 00:38:06.119 "data_offset": 256, 00:38:06.119 "data_size": 7936 00:38:06.119 }, 00:38:06.119 { 00:38:06.119 "name": "BaseBdev2", 00:38:06.119 "uuid": "cd666be7-5ece-5d91-9ac6-a290772dc993", 00:38:06.119 "is_configured": true, 00:38:06.119 "data_offset": 256, 00:38:06.119 "data_size": 7936 00:38:06.119 } 00:38:06.119 ] 00:38:06.119 }' 00:38:06.119 19:05:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:06.119 19:05:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:06.685 19:05:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:06.685 19:05:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:38:06.943 [2024-07-25 19:05:07.271527] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:06.943 19:05:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:38:06.943 19:05:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:06.943 19:05:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:38:07.201 19:05:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:38:07.201 19:05:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:38:07.201 19:05:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # '[' false = true ']' 00:38:07.201 19:05:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:38:07.460 [2024-07-25 19:05:07.787396] 
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:07.460 19:05:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:07.460 19:05:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:07.460 19:05:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:07.460 19:05:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:07.460 19:05:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:07.460 19:05:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:07.460 19:05:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:07.460 19:05:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:07.460 19:05:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:07.460 19:05:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:07.460 19:05:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:07.461 19:05:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:07.720 19:05:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:07.720 "name": "raid_bdev1", 00:38:07.720 "uuid": "b1e90d77-7048-40a5-bc5f-cf6cfdb2da3d", 00:38:07.720 "strip_size_kb": 0, 00:38:07.720 "state": "online", 00:38:07.720 "raid_level": "raid1", 00:38:07.720 "superblock": true, 00:38:07.720 "num_base_bdevs": 2, 00:38:07.720 "num_base_bdevs_discovered": 1, 00:38:07.720 "num_base_bdevs_operational": 1, 00:38:07.720 "base_bdevs_list": [ 00:38:07.720 { 00:38:07.720 "name": null, 00:38:07.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:07.720 "is_configured": false, 00:38:07.720 "data_offset": 256, 00:38:07.720 "data_size": 7936 00:38:07.720 }, 00:38:07.720 { 00:38:07.720 "name": "BaseBdev2", 00:38:07.720 "uuid": "cd666be7-5ece-5d91-9ac6-a290772dc993", 00:38:07.720 "is_configured": true, 00:38:07.720 "data_offset": 256, 00:38:07.720 "data_size": 7936 00:38:07.720 } 00:38:07.720 ] 00:38:07.720 }' 00:38:07.720 19:05:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:07.720 19:05:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:08.288 19:05:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:38:08.288 [2024-07-25 19:05:08.763587] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:08.288 [2024-07-25 19:05:08.783803] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:38:08.288 [2024-07-25 19:05:08.785976] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:08.288 19:05:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # 
sleep 1 00:38:09.666 19:05:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:09.666 19:05:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:09.666 19:05:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:09.666 19:05:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:09.666 19:05:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:09.666 19:05:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:09.666 19:05:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:09.666 19:05:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:09.666 "name": "raid_bdev1", 00:38:09.666 "uuid": "b1e90d77-7048-40a5-bc5f-cf6cfdb2da3d", 00:38:09.666 "strip_size_kb": 0, 00:38:09.666 "state": "online", 00:38:09.666 "raid_level": "raid1", 00:38:09.666 "superblock": true, 00:38:09.666 "num_base_bdevs": 2, 00:38:09.666 "num_base_bdevs_discovered": 2, 00:38:09.666 "num_base_bdevs_operational": 2, 00:38:09.666 "process": { 00:38:09.666 "type": "rebuild", 00:38:09.666 "target": "spare", 00:38:09.666 "progress": { 00:38:09.666 "blocks": 3072, 00:38:09.666 "percent": 38 00:38:09.666 } 00:38:09.666 }, 00:38:09.666 "base_bdevs_list": [ 00:38:09.666 { 00:38:09.666 "name": "spare", 00:38:09.666 "uuid": "52230c75-b1a3-56b4-be2a-72eb7b350480", 00:38:09.666 "is_configured": true, 00:38:09.666 "data_offset": 256, 00:38:09.666 "data_size": 7936 00:38:09.666 }, 00:38:09.666 { 00:38:09.666 "name": "BaseBdev2", 00:38:09.666 "uuid": "cd666be7-5ece-5d91-9ac6-a290772dc993", 00:38:09.666 "is_configured": true, 00:38:09.666 "data_offset": 256, 00:38:09.666 "data_size": 7936 00:38:09.666 } 00:38:09.666 ] 00:38:09.666 }' 00:38:09.666 19:05:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:09.666 19:05:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:09.666 19:05:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:09.666 19:05:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:09.666 19:05:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:38:09.925 [2024-07-25 19:05:10.280028] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:09.925 [2024-07-25 19:05:10.296632] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:09.925 [2024-07-25 19:05:10.296708] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:09.925 [2024-07-25 19:05:10.296722] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:09.925 [2024-07-25 19:05:10.296730] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:09.925 19:05:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:09.925 19:05:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:09.925 19:05:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:09.925 19:05:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:09.925 19:05:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:09.925 19:05:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:09.925 19:05:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:09.925 19:05:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:09.925 19:05:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:09.925 19:05:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:09.925 19:05:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:09.925 19:05:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:10.184 19:05:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:10.184 "name": "raid_bdev1", 00:38:10.184 "uuid": "b1e90d77-7048-40a5-bc5f-cf6cfdb2da3d", 00:38:10.184 "strip_size_kb": 0, 00:38:10.184 "state": "online", 00:38:10.184 "raid_level": "raid1", 00:38:10.184 "superblock": true, 00:38:10.184 "num_base_bdevs": 2, 00:38:10.184 "num_base_bdevs_discovered": 1, 00:38:10.184 "num_base_bdevs_operational": 1, 00:38:10.184 "base_bdevs_list": [ 00:38:10.184 { 00:38:10.184 "name": null, 00:38:10.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:10.184 "is_configured": false, 00:38:10.184 "data_offset": 256, 00:38:10.184 "data_size": 7936 00:38:10.184 }, 00:38:10.184 { 00:38:10.184 "name": "BaseBdev2", 00:38:10.184 "uuid": "cd666be7-5ece-5d91-9ac6-a290772dc993", 00:38:10.184 "is_configured": true, 00:38:10.184 "data_offset": 256, 00:38:10.184 "data_size": 7936 00:38:10.184 } 00:38:10.184 ] 00:38:10.184 }' 00:38:10.184 19:05:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:10.184 19:05:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:10.752 19:05:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:10.752 19:05:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:10.752 19:05:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:10.752 19:05:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:10.752 19:05:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:10.752 19:05:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:10.752 19:05:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:11.011 19:05:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:11.011 "name": "raid_bdev1", 00:38:11.011 "uuid": "b1e90d77-7048-40a5-bc5f-cf6cfdb2da3d", 00:38:11.011 "strip_size_kb": 0, 00:38:11.011 "state": "online", 00:38:11.011 "raid_level": "raid1", 00:38:11.011 "superblock": true, 00:38:11.011 "num_base_bdevs": 2, 00:38:11.011 "num_base_bdevs_discovered": 1, 00:38:11.011 "num_base_bdevs_operational": 1, 00:38:11.011 "base_bdevs_list": [ 00:38:11.011 { 00:38:11.011 "name": null, 00:38:11.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:11.011 "is_configured": false, 00:38:11.011 "data_offset": 256, 00:38:11.011 "data_size": 7936 00:38:11.011 }, 00:38:11.011 { 00:38:11.011 "name": "BaseBdev2", 00:38:11.011 "uuid": "cd666be7-5ece-5d91-9ac6-a290772dc993", 00:38:11.011 "is_configured": true, 00:38:11.011 "data_offset": 256, 00:38:11.011 "data_size": 7936 00:38:11.011 } 00:38:11.011 ] 00:38:11.011 }' 00:38:11.011 19:05:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:11.011 19:05:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:11.011 19:05:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:11.011 19:05:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:11.011 19:05:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:38:11.270 [2024-07-25 19:05:11.647110] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:11.270 [2024-07-25 19:05:11.665844] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:38:11.270 [2024-07-25 19:05:11.667977] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:11.270 19:05:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@678 -- # sleep 1 00:38:12.208 19:05:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:12.208 19:05:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:12.208 19:05:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:12.208 19:05:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:12.208 19:05:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:12.208 19:05:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:12.208 19:05:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:12.468 19:05:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:12.468 "name": "raid_bdev1", 00:38:12.468 "uuid": 
"b1e90d77-7048-40a5-bc5f-cf6cfdb2da3d", 00:38:12.468 "strip_size_kb": 0, 00:38:12.468 "state": "online", 00:38:12.468 "raid_level": "raid1", 00:38:12.468 "superblock": true, 00:38:12.468 "num_base_bdevs": 2, 00:38:12.468 "num_base_bdevs_discovered": 2, 00:38:12.468 "num_base_bdevs_operational": 2, 00:38:12.468 "process": { 00:38:12.468 "type": "rebuild", 00:38:12.468 "target": "spare", 00:38:12.468 "progress": { 00:38:12.468 "blocks": 3072, 00:38:12.468 "percent": 38 00:38:12.468 } 00:38:12.468 }, 00:38:12.468 "base_bdevs_list": [ 00:38:12.468 { 00:38:12.468 "name": "spare", 00:38:12.468 "uuid": "52230c75-b1a3-56b4-be2a-72eb7b350480", 00:38:12.468 "is_configured": true, 00:38:12.468 "data_offset": 256, 00:38:12.468 "data_size": 7936 00:38:12.468 }, 00:38:12.468 { 00:38:12.468 "name": "BaseBdev2", 00:38:12.468 "uuid": "cd666be7-5ece-5d91-9ac6-a290772dc993", 00:38:12.468 "is_configured": true, 00:38:12.468 "data_offset": 256, 00:38:12.468 "data_size": 7936 00:38:12.468 } 00:38:12.468 ] 00:38:12.468 }' 00:38:12.468 19:05:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:12.468 19:05:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:12.468 19:05:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:12.468 19:05:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:12.468 19:05:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:38:12.468 19:05:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:38:12.468 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:38:12.468 19:05:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:38:12.468 19:05:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:38:12.468 19:05:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:38:12.468 19:05:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # local timeout=1450 00:38:12.468 19:05:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:38:12.468 19:05:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:12.468 19:05:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:12.468 19:05:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:12.468 19:05:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:12.469 19:05:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:12.469 19:05:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:12.469 19:05:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:12.728 19:05:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:38:12.728 "name": "raid_bdev1", 00:38:12.728 "uuid": "b1e90d77-7048-40a5-bc5f-cf6cfdb2da3d", 00:38:12.728 "strip_size_kb": 0, 00:38:12.728 "state": "online", 00:38:12.728 "raid_level": "raid1", 00:38:12.728 "superblock": true, 00:38:12.728 "num_base_bdevs": 2, 00:38:12.728 "num_base_bdevs_discovered": 2, 00:38:12.728 "num_base_bdevs_operational": 2, 00:38:12.728 "process": { 00:38:12.728 "type": "rebuild", 00:38:12.728 "target": "spare", 00:38:12.728 "progress": { 00:38:12.728 "blocks": 3840, 00:38:12.728 "percent": 48 00:38:12.728 } 00:38:12.728 }, 00:38:12.728 "base_bdevs_list": [ 00:38:12.728 { 00:38:12.728 "name": "spare", 00:38:12.728 "uuid": "52230c75-b1a3-56b4-be2a-72eb7b350480", 00:38:12.728 "is_configured": true, 00:38:12.728 "data_offset": 256, 00:38:12.728 "data_size": 7936 00:38:12.728 }, 00:38:12.728 { 00:38:12.728 "name": "BaseBdev2", 00:38:12.728 "uuid": "cd666be7-5ece-5d91-9ac6-a290772dc993", 00:38:12.728 "is_configured": true, 00:38:12.728 "data_offset": 256, 00:38:12.729 "data_size": 7936 00:38:12.729 } 00:38:12.729 ] 00:38:12.729 }' 00:38:12.729 19:05:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:12.729 19:05:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:12.995 19:05:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:12.995 19:05:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:12.995 19:05:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@726 -- # sleep 1 00:38:13.937 19:05:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:38:13.937 19:05:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:13.937 19:05:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:13.937 19:05:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:13.937 19:05:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:13.937 19:05:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:13.937 19:05:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:13.937 19:05:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:14.196 19:05:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:14.196 "name": "raid_bdev1", 00:38:14.196 "uuid": "b1e90d77-7048-40a5-bc5f-cf6cfdb2da3d", 00:38:14.196 "strip_size_kb": 0, 00:38:14.196 "state": "online", 00:38:14.196 "raid_level": "raid1", 00:38:14.196 "superblock": true, 00:38:14.196 "num_base_bdevs": 2, 00:38:14.196 "num_base_bdevs_discovered": 2, 00:38:14.196 "num_base_bdevs_operational": 2, 00:38:14.196 "process": { 00:38:14.196 "type": "rebuild", 00:38:14.196 "target": "spare", 00:38:14.196 "progress": { 00:38:14.196 "blocks": 7168, 00:38:14.196 "percent": 90 00:38:14.196 } 00:38:14.196 }, 00:38:14.196 "base_bdevs_list": [ 00:38:14.196 { 00:38:14.196 "name": 
"spare", 00:38:14.196 "uuid": "52230c75-b1a3-56b4-be2a-72eb7b350480", 00:38:14.196 "is_configured": true, 00:38:14.196 "data_offset": 256, 00:38:14.196 "data_size": 7936 00:38:14.196 }, 00:38:14.196 { 00:38:14.196 "name": "BaseBdev2", 00:38:14.196 "uuid": "cd666be7-5ece-5d91-9ac6-a290772dc993", 00:38:14.196 "is_configured": true, 00:38:14.196 "data_offset": 256, 00:38:14.196 "data_size": 7936 00:38:14.196 } 00:38:14.196 ] 00:38:14.196 }' 00:38:14.196 19:05:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:14.196 19:05:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:14.196 19:05:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:14.196 19:05:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:14.196 19:05:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@726 -- # sleep 1 00:38:14.456 [2024-07-25 19:05:14.791154] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:38:14.456 [2024-07-25 19:05:14.791223] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:38:14.456 [2024-07-25 19:05:14.791357] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:15.394 19:05:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:38:15.394 19:05:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:15.394 19:05:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:15.394 19:05:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:15.394 19:05:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:15.394 19:05:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:15.394 19:05:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:15.394 19:05:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:15.394 19:05:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:15.394 "name": "raid_bdev1", 00:38:15.394 "uuid": "b1e90d77-7048-40a5-bc5f-cf6cfdb2da3d", 00:38:15.394 "strip_size_kb": 0, 00:38:15.394 "state": "online", 00:38:15.394 "raid_level": "raid1", 00:38:15.394 "superblock": true, 00:38:15.394 "num_base_bdevs": 2, 00:38:15.394 "num_base_bdevs_discovered": 2, 00:38:15.394 "num_base_bdevs_operational": 2, 00:38:15.394 "base_bdevs_list": [ 00:38:15.394 { 00:38:15.394 "name": "spare", 00:38:15.394 "uuid": "52230c75-b1a3-56b4-be2a-72eb7b350480", 00:38:15.394 "is_configured": true, 00:38:15.394 "data_offset": 256, 00:38:15.394 "data_size": 7936 00:38:15.394 }, 00:38:15.394 { 00:38:15.394 "name": "BaseBdev2", 00:38:15.394 "uuid": "cd666be7-5ece-5d91-9ac6-a290772dc993", 00:38:15.394 "is_configured": true, 00:38:15.394 "data_offset": 256, 00:38:15.394 "data_size": 7936 00:38:15.394 } 00:38:15.394 ] 00:38:15.394 }' 00:38:15.394 19:05:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:15.653 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:38:15.653 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:15.653 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:38:15.653 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@724 -- # break 00:38:15.653 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:15.653 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:15.653 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:15.653 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:15.653 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:15.653 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:15.653 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:15.912 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:15.912 "name": "raid_bdev1", 00:38:15.912 "uuid": "b1e90d77-7048-40a5-bc5f-cf6cfdb2da3d", 00:38:15.912 "strip_size_kb": 0, 00:38:15.912 "state": "online", 00:38:15.912 "raid_level": "raid1", 00:38:15.912 "superblock": true, 00:38:15.912 "num_base_bdevs": 2, 00:38:15.912 "num_base_bdevs_discovered": 2, 00:38:15.912 "num_base_bdevs_operational": 2, 00:38:15.912 "base_bdevs_list": [ 00:38:15.912 { 00:38:15.912 "name": "spare", 00:38:15.912 "uuid": "52230c75-b1a3-56b4-be2a-72eb7b350480", 00:38:15.912 "is_configured": true, 00:38:15.912 "data_offset": 256, 00:38:15.912 "data_size": 7936 00:38:15.912 }, 00:38:15.912 { 00:38:15.912 "name": "BaseBdev2", 00:38:15.912 "uuid": "cd666be7-5ece-5d91-9ac6-a290772dc993", 00:38:15.912 "is_configured": true, 00:38:15.912 "data_offset": 256, 00:38:15.912 "data_size": 7936 00:38:15.912 } 00:38:15.912 ] 00:38:15.912 }' 00:38:15.912 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:15.912 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:15.912 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:15.912 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:15.912 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:15.912 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:15.912 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:15.913 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid1 00:38:15.913 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:15.913 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:15.913 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:15.913 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:15.913 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:15.913 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:15.913 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:15.913 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:16.172 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:16.172 "name": "raid_bdev1", 00:38:16.172 "uuid": "b1e90d77-7048-40a5-bc5f-cf6cfdb2da3d", 00:38:16.172 "strip_size_kb": 0, 00:38:16.172 "state": "online", 00:38:16.172 "raid_level": "raid1", 00:38:16.172 "superblock": true, 00:38:16.172 "num_base_bdevs": 2, 00:38:16.172 "num_base_bdevs_discovered": 2, 00:38:16.172 "num_base_bdevs_operational": 2, 00:38:16.172 "base_bdevs_list": [ 00:38:16.172 { 00:38:16.172 "name": "spare", 00:38:16.172 "uuid": "52230c75-b1a3-56b4-be2a-72eb7b350480", 00:38:16.172 "is_configured": true, 00:38:16.172 "data_offset": 256, 00:38:16.172 "data_size": 7936 00:38:16.172 }, 00:38:16.172 { 00:38:16.172 "name": "BaseBdev2", 00:38:16.172 "uuid": "cd666be7-5ece-5d91-9ac6-a290772dc993", 00:38:16.172 "is_configured": true, 00:38:16.172 "data_offset": 256, 00:38:16.172 "data_size": 7936 00:38:16.172 } 00:38:16.172 ] 00:38:16.172 }' 00:38:16.172 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:16.172 19:05:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:16.742 19:05:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:38:17.001 [2024-07-25 19:05:17.367178] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:17.001 [2024-07-25 19:05:17.367212] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:17.001 [2024-07-25 19:05:17.367297] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:17.001 [2024-07-25 19:05:17.367370] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:17.001 [2024-07-25 19:05:17.367379] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012a00 name raid_bdev1, state offline 00:38:17.001 19:05:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:17.001 19:05:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # jq length 00:38:17.261 19:05:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:38:17.261 19:05:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@737 -- # '[' false = true ']' 00:38:17.261 19:05:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:38:17.261 19:05:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:38:17.261 19:05:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:38:17.521 [2024-07-25 19:05:18.059221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:17.521 [2024-07-25 19:05:18.059288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:17.521 [2024-07-25 19:05:18.059334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:38:17.521 [2024-07-25 19:05:18.059360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:17.521 [2024-07-25 19:05:18.061607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:17.521 [2024-07-25 19:05:18.061654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:17.521 [2024-07-25 19:05:18.061728] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:17.521 [2024-07-25 19:05:18.061799] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:17.521 [2024-07-25 19:05:18.061922] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:17.521 spare 00:38:17.521 19:05:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:17.521 19:05:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:17.521 19:05:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:17.521 19:05:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:17.521 19:05:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:17.521 19:05:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:17.521 19:05:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:17.521 19:05:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:17.521 19:05:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:17.521 19:05:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:17.521 19:05:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:17.521 19:05:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:17.781 [2024-07-25 19:05:18.161995] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000012d80 00:38:17.781 [2024-07-25 
19:05:18.162012] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:38:17.781 [2024-07-25 19:05:18.162146] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:38:17.781 [2024-07-25 19:05:18.162246] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000012d80 00:38:17.781 [2024-07-25 19:05:18.162259] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000012d80 00:38:17.781 [2024-07-25 19:05:18.162327] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:17.781 19:05:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:17.781 "name": "raid_bdev1", 00:38:17.781 "uuid": "b1e90d77-7048-40a5-bc5f-cf6cfdb2da3d", 00:38:17.781 "strip_size_kb": 0, 00:38:17.781 "state": "online", 00:38:17.781 "raid_level": "raid1", 00:38:17.781 "superblock": true, 00:38:17.781 "num_base_bdevs": 2, 00:38:17.781 "num_base_bdevs_discovered": 2, 00:38:17.781 "num_base_bdevs_operational": 2, 00:38:17.781 "base_bdevs_list": [ 00:38:17.781 { 00:38:17.781 "name": "spare", 00:38:17.781 "uuid": "52230c75-b1a3-56b4-be2a-72eb7b350480", 00:38:17.781 "is_configured": true, 00:38:17.781 "data_offset": 256, 00:38:17.781 "data_size": 7936 00:38:17.781 }, 00:38:17.781 { 00:38:17.781 "name": "BaseBdev2", 00:38:17.781 "uuid": "cd666be7-5ece-5d91-9ac6-a290772dc993", 00:38:17.781 "is_configured": true, 00:38:17.781 "data_offset": 256, 00:38:17.781 "data_size": 7936 00:38:17.781 } 00:38:17.781 ] 00:38:17.781 }' 00:38:17.781 19:05:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:17.781 19:05:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:18.351 19:05:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:18.351 19:05:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:18.351 19:05:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:18.351 19:05:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:18.351 19:05:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:18.351 19:05:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:18.351 19:05:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:18.610 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:18.610 "name": "raid_bdev1", 00:38:18.610 "uuid": "b1e90d77-7048-40a5-bc5f-cf6cfdb2da3d", 00:38:18.610 "strip_size_kb": 0, 00:38:18.610 "state": "online", 00:38:18.610 "raid_level": "raid1", 00:38:18.610 "superblock": true, 00:38:18.610 "num_base_bdevs": 2, 00:38:18.610 "num_base_bdevs_discovered": 2, 00:38:18.610 "num_base_bdevs_operational": 2, 00:38:18.610 "base_bdevs_list": [ 00:38:18.610 { 00:38:18.610 "name": "spare", 00:38:18.610 "uuid": "52230c75-b1a3-56b4-be2a-72eb7b350480", 00:38:18.610 "is_configured": true, 00:38:18.610 "data_offset": 256, 00:38:18.610 "data_size": 7936 00:38:18.610 }, 00:38:18.611 { 00:38:18.611 
"name": "BaseBdev2", 00:38:18.611 "uuid": "cd666be7-5ece-5d91-9ac6-a290772dc993", 00:38:18.611 "is_configured": true, 00:38:18.611 "data_offset": 256, 00:38:18.611 "data_size": 7936 00:38:18.611 } 00:38:18.611 ] 00:38:18.611 }' 00:38:18.611 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:18.611 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:18.611 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:18.611 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:18.870 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:18.870 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:38:18.870 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:38:18.870 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:38:19.130 [2024-07-25 19:05:19.600575] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:19.130 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:19.130 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:19.130 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:19.130 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:19.130 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:19.130 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:19.130 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:19.130 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:19.130 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:19.130 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:19.130 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:19.130 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:19.389 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:19.389 "name": "raid_bdev1", 00:38:19.389 "uuid": "b1e90d77-7048-40a5-bc5f-cf6cfdb2da3d", 00:38:19.390 "strip_size_kb": 0, 00:38:19.390 "state": "online", 00:38:19.390 "raid_level": "raid1", 00:38:19.390 "superblock": true, 00:38:19.390 "num_base_bdevs": 2, 00:38:19.390 "num_base_bdevs_discovered": 1, 00:38:19.390 "num_base_bdevs_operational": 1, 
00:38:19.390 "base_bdevs_list": [ 00:38:19.390 { 00:38:19.390 "name": null, 00:38:19.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:19.390 "is_configured": false, 00:38:19.390 "data_offset": 256, 00:38:19.390 "data_size": 7936 00:38:19.390 }, 00:38:19.390 { 00:38:19.390 "name": "BaseBdev2", 00:38:19.390 "uuid": "cd666be7-5ece-5d91-9ac6-a290772dc993", 00:38:19.390 "is_configured": true, 00:38:19.390 "data_offset": 256, 00:38:19.390 "data_size": 7936 00:38:19.390 } 00:38:19.390 ] 00:38:19.390 }' 00:38:19.390 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:19.390 19:05:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:19.958 19:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:38:20.218 [2024-07-25 19:05:20.596777] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:20.218 [2024-07-25 19:05:20.597002] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:38:20.218 [2024-07-25 19:05:20.597016] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:38:20.218 [2024-07-25 19:05:20.597095] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:20.218 [2024-07-25 19:05:20.615848] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:38:20.218 [2024-07-25 19:05:20.618114] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:20.218 19:05:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # sleep 1 00:38:21.155 19:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:21.155 19:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:21.155 19:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:21.155 19:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:21.155 19:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:21.155 19:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:21.155 19:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:21.414 19:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:21.414 "name": "raid_bdev1", 00:38:21.414 "uuid": "b1e90d77-7048-40a5-bc5f-cf6cfdb2da3d", 00:38:21.414 "strip_size_kb": 0, 00:38:21.414 "state": "online", 00:38:21.414 "raid_level": "raid1", 00:38:21.414 "superblock": true, 00:38:21.414 "num_base_bdevs": 2, 00:38:21.414 "num_base_bdevs_discovered": 2, 00:38:21.414 "num_base_bdevs_operational": 2, 00:38:21.414 "process": { 00:38:21.414 "type": "rebuild", 00:38:21.414 "target": "spare", 00:38:21.414 "progress": { 00:38:21.414 "blocks": 3072, 00:38:21.414 "percent": 38 00:38:21.414 } 00:38:21.414 }, 00:38:21.414 
"base_bdevs_list": [ 00:38:21.414 { 00:38:21.414 "name": "spare", 00:38:21.414 "uuid": "52230c75-b1a3-56b4-be2a-72eb7b350480", 00:38:21.414 "is_configured": true, 00:38:21.414 "data_offset": 256, 00:38:21.414 "data_size": 7936 00:38:21.414 }, 00:38:21.414 { 00:38:21.414 "name": "BaseBdev2", 00:38:21.414 "uuid": "cd666be7-5ece-5d91-9ac6-a290772dc993", 00:38:21.414 "is_configured": true, 00:38:21.414 "data_offset": 256, 00:38:21.414 "data_size": 7936 00:38:21.414 } 00:38:21.414 ] 00:38:21.414 }' 00:38:21.414 19:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:21.414 19:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:21.414 19:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:21.414 19:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:21.414 19:05:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:38:21.674 [2024-07-25 19:05:22.187328] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:21.674 [2024-07-25 19:05:22.230040] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:21.674 [2024-07-25 19:05:22.230113] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:21.674 [2024-07-25 19:05:22.230128] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:21.674 [2024-07-25 19:05:22.230136] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:21.933 19:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:21.933 19:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:21.933 19:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:21.933 19:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:21.933 19:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:21.933 19:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:21.933 19:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:21.933 19:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:21.933 19:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:21.933 19:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:21.933 19:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:21.933 19:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:22.193 19:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:38:22.193 "name": "raid_bdev1", 00:38:22.193 "uuid": "b1e90d77-7048-40a5-bc5f-cf6cfdb2da3d", 00:38:22.193 "strip_size_kb": 0, 00:38:22.193 "state": "online", 00:38:22.193 "raid_level": "raid1", 00:38:22.193 "superblock": true, 00:38:22.193 "num_base_bdevs": 2, 00:38:22.193 "num_base_bdevs_discovered": 1, 00:38:22.193 "num_base_bdevs_operational": 1, 00:38:22.193 "base_bdevs_list": [ 00:38:22.193 { 00:38:22.193 "name": null, 00:38:22.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:22.193 "is_configured": false, 00:38:22.193 "data_offset": 256, 00:38:22.193 "data_size": 7936 00:38:22.193 }, 00:38:22.193 { 00:38:22.193 "name": "BaseBdev2", 00:38:22.193 "uuid": "cd666be7-5ece-5d91-9ac6-a290772dc993", 00:38:22.193 "is_configured": true, 00:38:22.193 "data_offset": 256, 00:38:22.193 "data_size": 7936 00:38:22.193 } 00:38:22.193 ] 00:38:22.193 }' 00:38:22.193 19:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:22.193 19:05:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:22.762 19:05:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:38:23.020 [2024-07-25 19:05:23.368817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:23.021 [2024-07-25 19:05:23.368909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:23.021 [2024-07-25 19:05:23.368944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:38:23.021 [2024-07-25 19:05:23.368972] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:23.021 [2024-07-25 19:05:23.369235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:23.021 [2024-07-25 19:05:23.369266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:23.021 [2024-07-25 19:05:23.369334] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:23.021 [2024-07-25 19:05:23.369346] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:38:23.021 [2024-07-25 19:05:23.369356] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:38:23.021 [2024-07-25 19:05:23.369398] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:23.021 [2024-07-25 19:05:23.386940] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:38:23.021 spare 00:38:23.021 [2024-07-25 19:05:23.389200] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:23.021 19:05:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # sleep 1 00:38:23.955 19:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:23.955 19:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:23.955 19:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:23.955 19:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:23.955 19:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:23.955 19:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:23.955 19:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:24.213 19:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:24.213 "name": "raid_bdev1", 00:38:24.213 "uuid": "b1e90d77-7048-40a5-bc5f-cf6cfdb2da3d", 00:38:24.213 "strip_size_kb": 0, 00:38:24.213 "state": "online", 00:38:24.213 "raid_level": "raid1", 00:38:24.213 "superblock": true, 00:38:24.213 "num_base_bdevs": 2, 00:38:24.213 "num_base_bdevs_discovered": 2, 00:38:24.213 "num_base_bdevs_operational": 2, 00:38:24.213 "process": { 00:38:24.213 "type": "rebuild", 00:38:24.213 "target": "spare", 00:38:24.213 "progress": { 00:38:24.213 "blocks": 3072, 00:38:24.213 "percent": 38 00:38:24.213 } 00:38:24.213 }, 00:38:24.213 "base_bdevs_list": [ 00:38:24.213 { 00:38:24.213 "name": "spare", 00:38:24.213 "uuid": "52230c75-b1a3-56b4-be2a-72eb7b350480", 00:38:24.213 "is_configured": true, 00:38:24.213 "data_offset": 256, 00:38:24.213 "data_size": 7936 00:38:24.213 }, 00:38:24.213 { 00:38:24.213 "name": "BaseBdev2", 00:38:24.213 "uuid": "cd666be7-5ece-5d91-9ac6-a290772dc993", 00:38:24.213 "is_configured": true, 00:38:24.213 "data_offset": 256, 00:38:24.213 "data_size": 7936 00:38:24.213 } 00:38:24.213 ] 00:38:24.213 }' 00:38:24.213 19:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:24.213 19:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:24.213 19:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:24.213 19:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:24.213 19:05:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:38:24.471 [2024-07-25 19:05:24.974303] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:24.471 [2024-07-25 19:05:25.000870] 
bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:24.471 [2024-07-25 19:05:25.000984] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:24.471 [2024-07-25 19:05:25.000999] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:24.471 [2024-07-25 19:05:25.001006] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:24.471 19:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:24.471 19:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:24.471 19:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:24.471 19:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:24.471 19:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:24.471 19:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:24.471 19:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:24.472 19:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:24.472 19:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:24.472 19:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:24.729 19:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:24.729 19:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:24.729 19:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:24.729 "name": "raid_bdev1", 00:38:24.729 "uuid": "b1e90d77-7048-40a5-bc5f-cf6cfdb2da3d", 00:38:24.729 "strip_size_kb": 0, 00:38:24.729 "state": "online", 00:38:24.729 "raid_level": "raid1", 00:38:24.729 "superblock": true, 00:38:24.729 "num_base_bdevs": 2, 00:38:24.729 "num_base_bdevs_discovered": 1, 00:38:24.729 "num_base_bdevs_operational": 1, 00:38:24.729 "base_bdevs_list": [ 00:38:24.729 { 00:38:24.729 "name": null, 00:38:24.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:24.729 "is_configured": false, 00:38:24.729 "data_offset": 256, 00:38:24.729 "data_size": 7936 00:38:24.729 }, 00:38:24.729 { 00:38:24.729 "name": "BaseBdev2", 00:38:24.729 "uuid": "cd666be7-5ece-5d91-9ac6-a290772dc993", 00:38:24.729 "is_configured": true, 00:38:24.729 "data_offset": 256, 00:38:24.729 "data_size": 7936 00:38:24.729 } 00:38:24.729 ] 00:38:24.729 }' 00:38:24.729 19:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:24.729 19:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:25.331 19:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:25.331 19:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
00:38:25.331 19:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:25.331 19:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:25.331 19:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:25.331 19:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:25.331 19:05:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:25.612 19:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:25.612 "name": "raid_bdev1", 00:38:25.612 "uuid": "b1e90d77-7048-40a5-bc5f-cf6cfdb2da3d", 00:38:25.612 "strip_size_kb": 0, 00:38:25.612 "state": "online", 00:38:25.612 "raid_level": "raid1", 00:38:25.612 "superblock": true, 00:38:25.612 "num_base_bdevs": 2, 00:38:25.612 "num_base_bdevs_discovered": 1, 00:38:25.612 "num_base_bdevs_operational": 1, 00:38:25.612 "base_bdevs_list": [ 00:38:25.612 { 00:38:25.612 "name": null, 00:38:25.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:25.612 "is_configured": false, 00:38:25.612 "data_offset": 256, 00:38:25.612 "data_size": 7936 00:38:25.612 }, 00:38:25.612 { 00:38:25.612 "name": "BaseBdev2", 00:38:25.612 "uuid": "cd666be7-5ece-5d91-9ac6-a290772dc993", 00:38:25.612 "is_configured": true, 00:38:25.612 "data_offset": 256, 00:38:25.612 "data_size": 7936 00:38:25.612 } 00:38:25.612 ] 00:38:25.612 }' 00:38:25.612 19:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:25.871 19:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:25.871 19:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:25.871 19:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:25.871 19:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:38:25.871 19:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:26.130 [2024-07-25 19:05:26.600084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:26.130 [2024-07-25 19:05:26.600168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:26.130 [2024-07-25 19:05:26.600209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:38:26.130 [2024-07-25 19:05:26.600236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:26.130 [2024-07-25 19:05:26.600448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:26.130 [2024-07-25 19:05:26.600471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:26.130 [2024-07-25 19:05:26.600557] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:38:26.130 [2024-07-25 19:05:26.600575] bdev_raid.c:3680:raid_bdev_examine_sb: 
*DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:38:26.130 [2024-07-25 19:05:26.600583] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:26.130 BaseBdev1 00:38:26.130 19:05:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@789 -- # sleep 1 00:38:27.065 19:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:27.066 19:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:27.066 19:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:27.066 19:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:27.066 19:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:27.066 19:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:27.066 19:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:27.066 19:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:27.066 19:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:27.066 19:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:27.066 19:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:27.066 19:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:27.632 19:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:27.632 "name": "raid_bdev1", 00:38:27.632 "uuid": "b1e90d77-7048-40a5-bc5f-cf6cfdb2da3d", 00:38:27.632 "strip_size_kb": 0, 00:38:27.632 "state": "online", 00:38:27.632 "raid_level": "raid1", 00:38:27.632 "superblock": true, 00:38:27.632 "num_base_bdevs": 2, 00:38:27.632 "num_base_bdevs_discovered": 1, 00:38:27.632 "num_base_bdevs_operational": 1, 00:38:27.632 "base_bdevs_list": [ 00:38:27.632 { 00:38:27.632 "name": null, 00:38:27.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:27.632 "is_configured": false, 00:38:27.632 "data_offset": 256, 00:38:27.632 "data_size": 7936 00:38:27.632 }, 00:38:27.632 { 00:38:27.632 "name": "BaseBdev2", 00:38:27.632 "uuid": "cd666be7-5ece-5d91-9ac6-a290772dc993", 00:38:27.632 "is_configured": true, 00:38:27.632 "data_offset": 256, 00:38:27.632 "data_size": 7936 00:38:27.632 } 00:38:27.632 ] 00:38:27.632 }' 00:38:27.632 19:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:27.632 19:05:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:28.199 19:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:28.199 19:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:28.199 19:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:38:28.199 19:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:28.199 19:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:28.199 19:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:28.199 19:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:28.199 19:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:28.199 "name": "raid_bdev1", 00:38:28.199 "uuid": "b1e90d77-7048-40a5-bc5f-cf6cfdb2da3d", 00:38:28.199 "strip_size_kb": 0, 00:38:28.199 "state": "online", 00:38:28.199 "raid_level": "raid1", 00:38:28.199 "superblock": true, 00:38:28.199 "num_base_bdevs": 2, 00:38:28.199 "num_base_bdevs_discovered": 1, 00:38:28.199 "num_base_bdevs_operational": 1, 00:38:28.199 "base_bdevs_list": [ 00:38:28.199 { 00:38:28.199 "name": null, 00:38:28.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:28.199 "is_configured": false, 00:38:28.199 "data_offset": 256, 00:38:28.199 "data_size": 7936 00:38:28.199 }, 00:38:28.199 { 00:38:28.199 "name": "BaseBdev2", 00:38:28.199 "uuid": "cd666be7-5ece-5d91-9ac6-a290772dc993", 00:38:28.199 "is_configured": true, 00:38:28.199 "data_offset": 256, 00:38:28.199 "data_size": 7936 00:38:28.199 } 00:38:28.199 ] 00:38:28.199 }' 00:38:28.458 19:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:28.458 19:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:28.458 19:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:28.458 19:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:28.458 19:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:28.458 19:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:38:28.458 19:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:28.458 19:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:28.458 19:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:28.458 19:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:28.458 19:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:28.458 19:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:28.458 19:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:28.458 19:05:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:28.458 19:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:38:28.458 19:05:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:28.458 [2024-07-25 19:05:29.030466] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:28.458 [2024-07-25 19:05:29.030644] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:38:28.458 [2024-07-25 19:05:29.030655] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:28.458 request: 00:38:28.458 { 00:38:28.458 "base_bdev": "BaseBdev1", 00:38:28.458 "raid_bdev": "raid_bdev1", 00:38:28.458 "method": "bdev_raid_add_base_bdev", 00:38:28.458 "req_id": 1 00:38:28.458 } 00:38:28.458 Got JSON-RPC error response 00:38:28.458 response: 00:38:28.458 { 00:38:28.458 "code": -22, 00:38:28.458 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:38:28.458 } 00:38:28.716 19:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:38:28.716 19:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:28.716 19:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:28.716 19:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:28.716 19:05:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@793 -- # sleep 1 00:38:29.652 19:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:29.652 19:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:29.652 19:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:29.652 19:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:29.652 19:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:29.652 19:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:29.652 19:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:29.652 19:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:29.652 19:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:29.652 19:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:29.652 19:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:29.652 19:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:29.911 
19:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:29.911 "name": "raid_bdev1", 00:38:29.911 "uuid": "b1e90d77-7048-40a5-bc5f-cf6cfdb2da3d", 00:38:29.911 "strip_size_kb": 0, 00:38:29.911 "state": "online", 00:38:29.911 "raid_level": "raid1", 00:38:29.911 "superblock": true, 00:38:29.911 "num_base_bdevs": 2, 00:38:29.911 "num_base_bdevs_discovered": 1, 00:38:29.911 "num_base_bdevs_operational": 1, 00:38:29.911 "base_bdevs_list": [ 00:38:29.911 { 00:38:29.911 "name": null, 00:38:29.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:29.911 "is_configured": false, 00:38:29.911 "data_offset": 256, 00:38:29.911 "data_size": 7936 00:38:29.911 }, 00:38:29.911 { 00:38:29.911 "name": "BaseBdev2", 00:38:29.911 "uuid": "cd666be7-5ece-5d91-9ac6-a290772dc993", 00:38:29.911 "is_configured": true, 00:38:29.911 "data_offset": 256, 00:38:29.911 "data_size": 7936 00:38:29.911 } 00:38:29.911 ] 00:38:29.911 }' 00:38:29.911 19:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:29.911 19:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:30.478 19:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:30.478 19:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:30.478 19:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:30.478 19:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:30.478 19:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:30.478 19:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:30.478 19:05:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:30.737 19:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:30.737 "name": "raid_bdev1", 00:38:30.737 "uuid": "b1e90d77-7048-40a5-bc5f-cf6cfdb2da3d", 00:38:30.737 "strip_size_kb": 0, 00:38:30.737 "state": "online", 00:38:30.737 "raid_level": "raid1", 00:38:30.737 "superblock": true, 00:38:30.737 "num_base_bdevs": 2, 00:38:30.737 "num_base_bdevs_discovered": 1, 00:38:30.737 "num_base_bdevs_operational": 1, 00:38:30.737 "base_bdevs_list": [ 00:38:30.737 { 00:38:30.737 "name": null, 00:38:30.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:30.737 "is_configured": false, 00:38:30.737 "data_offset": 256, 00:38:30.737 "data_size": 7936 00:38:30.737 }, 00:38:30.737 { 00:38:30.737 "name": "BaseBdev2", 00:38:30.737 "uuid": "cd666be7-5ece-5d91-9ac6-a290772dc993", 00:38:30.737 "is_configured": true, 00:38:30.737 "data_offset": 256, 00:38:30.737 "data_size": 7936 00:38:30.737 } 00:38:30.737 ] 00:38:30.737 }' 00:38:30.737 19:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:30.737 19:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:30.737 19:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:30.737 19:05:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:30.737 19:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@798 -- # killprocess 162229 00:38:30.737 19:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 162229 ']' 00:38:30.737 19:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 162229 00:38:30.737 19:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:38:30.737 19:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:30.737 19:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 162229 00:38:30.737 19:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:30.737 19:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:30.737 19:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 162229' 00:38:30.737 killing process with pid 162229 00:38:30.737 19:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 162229 00:38:30.737 Received shutdown signal, test time was about 60.000000 seconds 00:38:30.737 00:38:30.737 Latency(us) 00:38:30.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:30.737 =================================================================================================================== 00:38:30.737 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:30.737 19:05:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 162229 00:38:30.737 [2024-07-25 19:05:31.302140] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:30.737 [2024-07-25 19:05:31.302277] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:30.737 [2024-07-25 19:05:31.302330] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:30.737 [2024-07-25 19:05:31.302343] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000012d80 name raid_bdev1, state offline 00:38:31.304 [2024-07-25 19:05:31.619030] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:32.682 ************************************ 00:38:32.682 END TEST raid_rebuild_test_sb_md_interleaved 00:38:32.682 ************************************ 00:38:32.682 19:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@800 -- # return 0 00:38:32.682 00:38:32.682 real 0m29.588s 00:38:32.682 user 0m45.894s 00:38:32.682 sys 0m3.604s 00:38:32.682 19:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:32.682 19:05:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:32.682 19:05:33 bdev_raid -- bdev/bdev_raid.sh@996 -- # trap - EXIT 00:38:32.682 19:05:33 bdev_raid -- bdev/bdev_raid.sh@997 -- # cleanup 00:38:32.682 19:05:33 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 162229 ']' 00:38:32.682 19:05:33 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 162229 00:38:32.682 19:05:33 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:38:32.682 00:38:32.682 
real 23m59.631s 00:38:32.682 user 38m57.487s 00:38:32.682 sys 3m59.271s 00:38:32.682 19:05:33 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:32.682 19:05:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:32.682 ************************************ 00:38:32.682 END TEST bdev_raid 00:38:32.682 ************************************ 00:38:32.682 19:05:33 -- spdk/autotest.sh@195 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:38:32.682 19:05:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:32.682 19:05:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:32.682 19:05:33 -- common/autotest_common.sh@10 -- # set +x 00:38:32.682 ************************************ 00:38:32.682 START TEST bdevperf_config 00:38:32.682 ************************************ 00:38:32.682 19:05:33 bdevperf_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:38:32.946 * Looking for test storage... 00:38:32.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:38:32.946 19:05:33 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:38:32.946 19:05:33 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:38:32.946 19:05:33 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:38:32.946 19:05:33 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:38:32.946 19:05:33 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:32.946 19:05:33 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:38:32.946 19:05:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:38:32.946 19:05:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:38:32.946 19:05:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:38:32.946 19:05:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:38:32.946 19:05:33 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:38:32.946 19:05:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:38:32.946 00:38:32.946 19:05:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:38:32.946 19:05:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:38:32.946 19:05:33 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:38:32.946 19:05:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:38:32.946 19:05:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:38:32.946 19:05:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:38:32.946 19:05:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:38:32.946 19:05:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:38:32.946 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:38:32.947 19:05:33 bdevperf_config -- 
bdevperf/common.sh@10 -- # local filename= 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:38:32.947 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:38:32.947 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:38:32.947 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:38:32.947 19:05:33 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:38:38.215 19:05:38 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-25 19:05:33.452622] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:38:38.215 [2024-07-25 19:05:33.452845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163073 ] 00:38:38.215 Using job config with 4 jobs 00:38:38.215 [2024-07-25 19:05:33.631570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:38.215 [2024-07-25 19:05:33.900499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:38.215 cpumask for '\''job0'\'' is too big 00:38:38.215 cpumask for '\''job1'\'' is too big 00:38:38.215 cpumask for '\''job2'\'' is too big 00:38:38.215 cpumask for '\''job3'\'' is too big 00:38:38.215 Running I/O for 2 seconds... 
00:38:38.215 00:38:38.215 Latency(us) 00:38:38.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:38.215 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:38.215 Malloc0 : 2.01 34881.56 34.06 0.00 0.00 7333.30 1412.14 11609.23 00:38:38.215 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:38.215 Malloc0 : 2.01 34859.60 34.04 0.00 0.00 7326.14 1341.93 10236.10 00:38:38.215 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:38.215 Malloc0 : 2.01 34838.44 34.02 0.00 0.00 7318.93 1380.94 8862.96 00:38:38.215 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:38.215 Malloc0 : 2.02 34911.56 34.09 0.00 0.00 7291.56 647.56 7770.70 00:38:38.215 =================================================================================================================== 00:38:38.215 Total : 139491.17 136.22 0.00 0.00 7317.46 647.56 11609.23' 00:38:38.215 19:05:38 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-25 19:05:33.452622] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:38:38.215 [2024-07-25 19:05:33.452845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163073 ] 00:38:38.215 Using job config with 4 jobs 00:38:38.215 [2024-07-25 19:05:33.631570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:38.215 [2024-07-25 19:05:33.900499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:38.215 cpumask for '\''job0'\'' is too big 00:38:38.215 cpumask for '\''job1'\'' is too big 00:38:38.215 cpumask for '\''job2'\'' is too big 00:38:38.215 cpumask for '\''job3'\'' is too big 00:38:38.215 Running I/O for 2 seconds... 00:38:38.215 00:38:38.215 Latency(us) 00:38:38.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:38.215 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:38.215 Malloc0 : 2.01 34881.56 34.06 0.00 0.00 7333.30 1412.14 11609.23 00:38:38.215 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:38.215 Malloc0 : 2.01 34859.60 34.04 0.00 0.00 7326.14 1341.93 10236.10 00:38:38.215 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:38.215 Malloc0 : 2.01 34838.44 34.02 0.00 0.00 7318.93 1380.94 8862.96 00:38:38.215 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:38.215 Malloc0 : 2.02 34911.56 34.09 0.00 0.00 7291.56 647.56 7770.70 00:38:38.215 =================================================================================================================== 00:38:38.215 Total : 139491.17 136.22 0.00 0.00 7317.46 647.56 11609.23' 00:38:38.215 19:05:38 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:38:38.215 19:05:38 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 19:05:33.452622] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:38:38.215 [2024-07-25 19:05:33.452845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163073 ] 00:38:38.215 Using job config with 4 jobs 00:38:38.215 [2024-07-25 19:05:33.631570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:38.215 [2024-07-25 19:05:33.900499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:38.215 cpumask for '\''job0'\'' is too big 00:38:38.215 cpumask for '\''job1'\'' is too big 00:38:38.215 cpumask for '\''job2'\'' is too big 00:38:38.215 cpumask for '\''job3'\'' is too big 00:38:38.215 Running I/O for 2 seconds... 00:38:38.215 00:38:38.215 Latency(us) 00:38:38.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:38.215 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:38.215 Malloc0 : 2.01 34881.56 34.06 0.00 0.00 7333.30 1412.14 11609.23 00:38:38.215 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:38.215 Malloc0 : 2.01 34859.60 34.04 0.00 0.00 7326.14 1341.93 10236.10 00:38:38.215 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:38.215 Malloc0 : 2.01 34838.44 34.02 0.00 0.00 7318.93 1380.94 8862.96 00:38:38.215 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:38.215 Malloc0 : 2.02 34911.56 34.09 0.00 0.00 7291.56 647.56 7770.70 00:38:38.215 =================================================================================================================== 00:38:38.215 Total : 139491.17 136.22 0.00 0.00 7317.46 647.56 11609.23' 00:38:38.215 19:05:38 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:38:38.215 19:05:38 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:38:38.215 19:05:38 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:38:38.215 [2024-07-25 19:05:38.406283] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:38:38.215 [2024-07-25 19:05:38.406515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163129 ] 00:38:38.215 [2024-07-25 19:05:38.591495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:38.474 [2024-07-25 19:05:38.867829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:39.041 cpumask for 'job0' is too big 00:38:39.041 cpumask for 'job1' is too big 00:38:39.041 cpumask for 'job2' is too big 00:38:39.041 cpumask for 'job3' is too big 00:38:43.225 19:05:43 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:38:43.225 Running I/O for 2 seconds... 
00:38:43.225 00:38:43.225 Latency(us) 00:38:43.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:43.225 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:43.225 Malloc0 : 2.01 34832.11 34.02 0.00 0.00 7343.17 1490.16 11609.23 00:38:43.225 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:43.225 Malloc0 : 2.02 34810.33 33.99 0.00 0.00 7336.35 1380.94 10236.10 00:38:43.225 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:43.225 Malloc0 : 2.02 34789.44 33.97 0.00 0.00 7328.45 1365.33 8862.96 00:38:43.225 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:38:43.225 Malloc0 : 2.02 34768.09 33.95 0.00 0.00 7321.29 1380.94 7989.15 00:38:43.225 =================================================================================================================== 00:38:43.225 Total : 139199.97 135.94 0.00 0.00 7332.32 1365.33 11609.23' 00:38:43.225 19:05:43 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:38:43.225 19:05:43 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:38:43.225 19:05:43 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:38:43.225 19:05:43 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:38:43.225 19:05:43 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:38:43.225 19:05:43 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:38:43.225 19:05:43 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:38:43.225 19:05:43 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:38:43.225 00:38:43.225 19:05:43 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:38:43.225 19:05:43 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:38:43.225 19:05:43 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:38:43.225 19:05:43 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:38:43.225 19:05:43 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:38:43.225 19:05:43 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:38:43.225 19:05:43 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:38:43.225 00:38:43.225 19:05:43 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:38:43.225 19:05:43 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:38:43.226 19:05:43 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:38:43.226 19:05:43 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:38:43.226 19:05:43 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:38:43.226 19:05:43 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:38:43.226 19:05:43 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:38:43.226 19:05:43 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:38:43.226 00:38:43.226 19:05:43 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:38:43.226 19:05:43 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:38:43.226 19:05:43 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:38:43.226 19:05:43 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 
00:38:48.494 19:05:48 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-25 19:05:43.363962] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:38:48.494 [2024-07-25 19:05:43.364194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163192 ] 00:38:48.494 Using job config with 3 jobs 00:38:48.494 [2024-07-25 19:05:43.547879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.494 [2024-07-25 19:05:43.821353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:48.494 cpumask for '\''job0'\'' is too big 00:38:48.494 cpumask for '\''job1'\'' is too big 00:38:48.494 cpumask for '\''job2'\'' is too big 00:38:48.494 Running I/O for 2 seconds... 00:38:48.494 00:38:48.494 Latency(us) 00:38:48.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:48.494 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:38:48.494 Malloc0 : 2.01 47005.85 45.90 0.00 0.00 5442.28 1443.35 8301.23 00:38:48.494 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:38:48.494 Malloc0 : 2.01 46976.42 45.88 0.00 0.00 5436.88 1318.52 6959.30 00:38:48.494 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:38:48.494 Malloc0 : 2.01 46948.01 45.85 0.00 0.00 5431.59 1341.93 5898.24 00:38:48.494 =================================================================================================================== 00:38:48.494 Total : 140930.29 137.63 0.00 0.00 5436.92 1318.52 8301.23' 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-25 19:05:43.363962] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:38:48.494 [2024-07-25 19:05:43.364194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163192 ] 00:38:48.494 Using job config with 3 jobs 00:38:48.494 [2024-07-25 19:05:43.547879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.494 [2024-07-25 19:05:43.821353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:48.494 cpumask for '\''job0'\'' is too big 00:38:48.494 cpumask for '\''job1'\'' is too big 00:38:48.494 cpumask for '\''job2'\'' is too big 00:38:48.494 Running I/O for 2 seconds... 
00:38:48.494 00:38:48.494 Latency(us) 00:38:48.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:48.494 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:38:48.494 Malloc0 : 2.01 47005.85 45.90 0.00 0.00 5442.28 1443.35 8301.23 00:38:48.494 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:38:48.494 Malloc0 : 2.01 46976.42 45.88 0.00 0.00 5436.88 1318.52 6959.30 00:38:48.494 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:38:48.494 Malloc0 : 2.01 46948.01 45.85 0.00 0.00 5431.59 1341.93 5898.24 00:38:48.494 =================================================================================================================== 00:38:48.494 Total : 140930.29 137.63 0.00 0.00 5436.92 1318.52 8301.23' 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 19:05:43.363962] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:38:48.494 [2024-07-25 19:05:43.364194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163192 ] 00:38:48.494 Using job config with 3 jobs 00:38:48.494 [2024-07-25 19:05:43.547879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.494 [2024-07-25 19:05:43.821353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:48.494 cpumask for '\''job0'\'' is too big 00:38:48.494 cpumask for '\''job1'\'' is too big 00:38:48.494 cpumask for '\''job2'\'' is too big 00:38:48.494 Running I/O for 2 seconds... 
00:38:48.494 00:38:48.494 Latency(us) 00:38:48.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:48.494 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:38:48.494 Malloc0 : 2.01 47005.85 45.90 0.00 0.00 5442.28 1443.35 8301.23 00:38:48.494 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:38:48.494 Malloc0 : 2.01 46976.42 45.88 0.00 0.00 5436.88 1318.52 6959.30 00:38:48.494 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:38:48.494 Malloc0 : 2.01 46948.01 45.85 0.00 0.00 5431.59 1341.93 5898.24 00:38:48.494 =================================================================================================================== 00:38:48.494 Total : 140930.29 137.63 0.00 0.00 5436.92 1318.52 8301.23' 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:38:48.494 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:38:48.494 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:38:48.494 00:38:48.494 19:05:48 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:38:48.495 19:05:48 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:38:48.495 19:05:48 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:38:48.495 19:05:48 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:38:48.495 19:05:48 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:38:48.495 19:05:48 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:38:48.495 19:05:48 
bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:38:48.495 19:05:48 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:38:48.495 19:05:48 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:38:48.495 19:05:48 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:38:48.495 00:38:48.495 19:05:48 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:38:48.495 19:05:48 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:38:48.495 19:05:48 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:38:48.495 19:05:48 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:38:48.495 19:05:48 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:38:48.495 19:05:48 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:38:48.495 19:05:48 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:38:48.495 19:05:48 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:38:48.495 00:38:48.495 19:05:48 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:38:48.495 19:05:48 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:38:48.495 19:05:48 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:38:53.770 19:05:53 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-25 19:05:48.347072] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:38:53.770 [2024-07-25 19:05:48.347309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163261 ] 00:38:53.770 Using job config with 4 jobs 00:38:53.770 [2024-07-25 19:05:48.534510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:53.770 [2024-07-25 19:05:48.819913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:53.770 cpumask for '\''job0'\'' is too big 00:38:53.770 cpumask for '\''job1'\'' is too big 00:38:53.770 cpumask for '\''job2'\'' is too big 00:38:53.770 cpumask for '\''job3'\'' is too big 00:38:53.770 Running I/O for 2 seconds... 
00:38:53.770 00:38:53.770 Latency(us) 00:38:53.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:53.770 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc0 : 2.03 17166.35 16.76 0.00 0.00 14900.39 3167.57 39196.77 00:38:53.770 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc1 : 2.03 17155.30 16.75 0.00 0.00 14898.41 3682.50 38947.11 00:38:53.770 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc0 : 2.04 17177.22 16.77 0.00 0.00 14733.12 2933.52 21346.01 00:38:53.770 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc1 : 2.04 17166.54 16.76 0.00 0.00 14731.84 3401.63 21221.18 00:38:53.770 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc0 : 2.04 17156.37 16.75 0.00 0.00 14701.57 3042.74 18100.42 00:38:53.770 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc1 : 2.05 17145.66 16.74 0.00 0.00 14699.98 3464.05 18100.42 00:38:53.770 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc0 : 2.05 17135.43 16.73 0.00 0.00 14669.96 2949.12 15603.81 00:38:53.770 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc1 : 2.05 17124.90 16.72 0.00 0.00 14668.79 3464.05 15603.81 00:38:53.770 =================================================================================================================== 00:38:53.770 Total : 137227.78 134.01 0.00 0.00 14750.24 2933.52 39196.77' 00:38:53.770 19:05:53 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-25 19:05:48.347072] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:38:53.770 [2024-07-25 19:05:48.347309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163261 ] 00:38:53.770 Using job config with 4 jobs 00:38:53.770 [2024-07-25 19:05:48.534510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:53.770 [2024-07-25 19:05:48.819913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:53.770 cpumask for '\''job0'\'' is too big 00:38:53.770 cpumask for '\''job1'\'' is too big 00:38:53.770 cpumask for '\''job2'\'' is too big 00:38:53.770 cpumask for '\''job3'\'' is too big 00:38:53.770 Running I/O for 2 seconds... 
00:38:53.770 00:38:53.770 Latency(us) 00:38:53.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:53.770 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc0 : 2.03 17166.35 16.76 0.00 0.00 14900.39 3167.57 39196.77 00:38:53.770 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc1 : 2.03 17155.30 16.75 0.00 0.00 14898.41 3682.50 38947.11 00:38:53.770 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc0 : 2.04 17177.22 16.77 0.00 0.00 14733.12 2933.52 21346.01 00:38:53.770 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc1 : 2.04 17166.54 16.76 0.00 0.00 14731.84 3401.63 21221.18 00:38:53.770 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc0 : 2.04 17156.37 16.75 0.00 0.00 14701.57 3042.74 18100.42 00:38:53.770 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc1 : 2.05 17145.66 16.74 0.00 0.00 14699.98 3464.05 18100.42 00:38:53.770 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc0 : 2.05 17135.43 16.73 0.00 0.00 14669.96 2949.12 15603.81 00:38:53.770 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc1 : 2.05 17124.90 16.72 0.00 0.00 14668.79 3464.05 15603.81 00:38:53.770 =================================================================================================================== 00:38:53.770 Total : 137227.78 134.01 0.00 0.00 14750.24 2933.52 39196.77' 00:38:53.770 19:05:53 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 19:05:48.347072] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:38:53.770 [2024-07-25 19:05:48.347309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163261 ] 00:38:53.770 Using job config with 4 jobs 00:38:53.770 [2024-07-25 19:05:48.534510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:53.770 [2024-07-25 19:05:48.819913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:53.770 cpumask for '\''job0'\'' is too big 00:38:53.770 cpumask for '\''job1'\'' is too big 00:38:53.770 cpumask for '\''job2'\'' is too big 00:38:53.770 cpumask for '\''job3'\'' is too big 00:38:53.770 Running I/O for 2 seconds... 
00:38:53.770 00:38:53.770 Latency(us) 00:38:53.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:53.770 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc0 : 2.03 17166.35 16.76 0.00 0.00 14900.39 3167.57 39196.77 00:38:53.770 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc1 : 2.03 17155.30 16.75 0.00 0.00 14898.41 3682.50 38947.11 00:38:53.770 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc0 : 2.04 17177.22 16.77 0.00 0.00 14733.12 2933.52 21346.01 00:38:53.770 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc1 : 2.04 17166.54 16.76 0.00 0.00 14731.84 3401.63 21221.18 00:38:53.770 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc0 : 2.04 17156.37 16.75 0.00 0.00 14701.57 3042.74 18100.42 00:38:53.770 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc1 : 2.05 17145.66 16.74 0.00 0.00 14699.98 3464.05 18100.42 00:38:53.770 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc0 : 2.05 17135.43 16.73 0.00 0.00 14669.96 2949.12 15603.81 00:38:53.770 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:38:53.770 Malloc1 : 2.05 17124.90 16.72 0.00 0.00 14668.79 3464.05 15603.81 00:38:53.770 =================================================================================================================== 00:38:53.770 Total : 137227.78 134.01 0.00 0.00 14750.24 2933.52 39196.77' 00:38:53.770 19:05:53 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:38:53.770 19:05:53 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:38:53.770 19:05:53 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:38:53.770 19:05:53 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:38:53.770 19:05:53 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:38:53.770 19:05:53 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:38:53.770 00:38:53.770 real 0m20.059s 00:38:53.771 user 0m17.716s 00:38:53.771 sys 0m1.743s 00:38:53.771 19:05:53 bdevperf_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:53.771 19:05:53 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:38:53.771 ************************************ 00:38:53.771 END TEST bdevperf_config 00:38:53.771 ************************************ 00:38:53.771 19:05:53 -- spdk/autotest.sh@196 -- # uname -s 00:38:53.771 19:05:53 -- spdk/autotest.sh@196 -- # [[ Linux == Linux ]] 00:38:53.771 19:05:53 -- spdk/autotest.sh@197 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:38:53.771 19:05:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:53.771 19:05:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:53.771 19:05:53 -- common/autotest_common.sh@10 -- # set +x 00:38:53.771 ************************************ 00:38:53.771 START TEST reactor_set_interrupt 00:38:53.771 ************************************ 00:38:53.771 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:38:53.771 * Looking for test storage... 00:38:53.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:53.771 19:05:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:38:53.771 19:05:53 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:38:53.771 19:05:53 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:53.771 19:05:53 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:38:53.771 19:05:53 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:38:53.771 19:05:53 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:38:53.771 19:05:53 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:38:53.771 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:38:53.771 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:38:53.771 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:38:53.771 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:38:53.771 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:38:53.771 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:38:53.771 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:38:53.771 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 
00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:38:53.771 19:05:53 reactor_set_interrupt -- 
common/build_config.sh@51 -- # CONFIG_XNVME=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:38:53.771 19:05:53 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:38:53.771 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:38:53.771 19:05:53 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:38:53.772 19:05:53 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 
00:38:53.772 19:05:53 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:38:53.772 19:05:53 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:38:53.772 19:05:53 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:38:53.772 19:05:53 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:38:53.772 19:05:53 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:38:53.772 19:05:53 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:38:53.772 19:05:53 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:38:53.772 19:05:53 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:38:53.772 19:05:53 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:38:53.772 19:05:53 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:38:53.772 19:05:53 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:38:53.772 19:05:53 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:38:53.772 19:05:53 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:38:53.772 #define SPDK_CONFIG_H 00:38:53.772 #define SPDK_CONFIG_APPS 1 00:38:53.772 #define SPDK_CONFIG_ARCH native 00:38:53.772 #define SPDK_CONFIG_ASAN 1 00:38:53.772 #undef SPDK_CONFIG_AVAHI 00:38:53.772 #undef SPDK_CONFIG_CET 00:38:53.772 #define SPDK_CONFIG_COVERAGE 1 00:38:53.772 #define SPDK_CONFIG_CROSS_PREFIX 00:38:53.772 #undef SPDK_CONFIG_CRYPTO 00:38:53.772 #undef SPDK_CONFIG_CRYPTO_MLX5 00:38:53.772 #undef SPDK_CONFIG_CUSTOMOCF 00:38:53.772 #undef SPDK_CONFIG_DAOS 00:38:53.772 #define SPDK_CONFIG_DAOS_DIR 00:38:53.772 #define SPDK_CONFIG_DEBUG 1 00:38:53.772 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:38:53.772 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:38:53.772 #define SPDK_CONFIG_DPDK_INC_DIR 00:38:53.772 #define SPDK_CONFIG_DPDK_LIB_DIR 00:38:53.772 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:38:53.772 #undef SPDK_CONFIG_DPDK_UADK 00:38:53.772 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:38:53.772 #define SPDK_CONFIG_EXAMPLES 1 00:38:53.772 #undef SPDK_CONFIG_FC 00:38:53.772 #define SPDK_CONFIG_FC_PATH 00:38:53.772 #define SPDK_CONFIG_FIO_PLUGIN 1 00:38:53.772 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:38:53.772 #undef SPDK_CONFIG_FUSE 00:38:53.772 #undef SPDK_CONFIG_FUZZER 00:38:53.772 #define SPDK_CONFIG_FUZZER_LIB 00:38:53.772 #undef SPDK_CONFIG_GOLANG 00:38:53.772 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:38:53.772 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:38:53.772 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:38:53.772 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:38:53.772 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:38:53.772 #undef SPDK_CONFIG_HAVE_LIBBSD 00:38:53.772 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:38:53.772 #define SPDK_CONFIG_IDXD 1 00:38:53.772 #undef SPDK_CONFIG_IDXD_KERNEL 00:38:53.772 #undef SPDK_CONFIG_IPSEC_MB 00:38:53.772 #define SPDK_CONFIG_IPSEC_MB_DIR 00:38:53.772 #define SPDK_CONFIG_ISAL 1 00:38:53.772 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:38:53.772 #define 
SPDK_CONFIG_ISCSI_INITIATOR 1 00:38:53.772 #define SPDK_CONFIG_LIBDIR 00:38:53.772 #undef SPDK_CONFIG_LTO 00:38:53.772 #define SPDK_CONFIG_MAX_LCORES 128 00:38:53.772 #define SPDK_CONFIG_NVME_CUSE 1 00:38:53.772 #undef SPDK_CONFIG_OCF 00:38:53.772 #define SPDK_CONFIG_OCF_PATH 00:38:53.772 #define SPDK_CONFIG_OPENSSL_PATH 00:38:53.772 #undef SPDK_CONFIG_PGO_CAPTURE 00:38:53.772 #define SPDK_CONFIG_PGO_DIR 00:38:53.772 #undef SPDK_CONFIG_PGO_USE 00:38:53.772 #define SPDK_CONFIG_PREFIX /usr/local 00:38:53.772 #define SPDK_CONFIG_RAID5F 1 00:38:53.772 #undef SPDK_CONFIG_RBD 00:38:53.772 #define SPDK_CONFIG_RDMA 1 00:38:53.772 #define SPDK_CONFIG_RDMA_PROV verbs 00:38:53.772 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:38:53.772 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:38:53.772 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:38:53.772 #undef SPDK_CONFIG_SHARED 00:38:53.772 #undef SPDK_CONFIG_SMA 00:38:53.772 #define SPDK_CONFIG_TESTS 1 00:38:53.772 #undef SPDK_CONFIG_TSAN 00:38:53.772 #undef SPDK_CONFIG_UBLK 00:38:53.772 #define SPDK_CONFIG_UBSAN 1 00:38:53.772 #define SPDK_CONFIG_UNIT_TESTS 1 00:38:53.772 #undef SPDK_CONFIG_URING 00:38:53.772 #define SPDK_CONFIG_URING_PATH 00:38:53.772 #undef SPDK_CONFIG_URING_ZNS 00:38:53.772 #undef SPDK_CONFIG_USDT 00:38:53.772 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:38:53.772 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:38:53.772 #undef SPDK_CONFIG_VFIO_USER 00:38:53.772 #define SPDK_CONFIG_VFIO_USER_DIR 00:38:53.772 #define SPDK_CONFIG_VHOST 1 00:38:53.772 #define SPDK_CONFIG_VIRTIO 1 00:38:53.772 #undef SPDK_CONFIG_VTUNE 00:38:53.772 #define SPDK_CONFIG_VTUNE_DIR 00:38:53.772 #define SPDK_CONFIG_WERROR 1 00:38:53.772 #define SPDK_CONFIG_WPDK_DIR 00:38:53.772 #undef SPDK_CONFIG_XNVME 00:38:53.772 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:38:53.772 19:05:53 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:38:53.772 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:53.772 19:05:53 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:53.772 19:05:53 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:53.772 19:05:53 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:53.772 19:05:53 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:53.772 19:05:53 reactor_set_interrupt -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:53.772 19:05:53 reactor_set_interrupt -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:53.772 19:05:53 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:38:53.772 19:05:53 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:53.772 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:38:53.772 19:05:53 reactor_set_interrupt -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:38:53.772 19:05:53 reactor_set_interrupt -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:38:53.772 19:05:53 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:38:53.772 19:05:53 reactor_set_interrupt -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:38:53.772 19:05:53 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:38:53.772 19:05:53 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:38:53.772 19:05:53 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:38:53.772 19:05:53 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:38:53.772 19:05:53 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:38:53.772 19:05:53 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:38:53.772 19:05:53 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:38:53.772 19:05:53 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:38:53.772 19:05:53 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:38:53.772 19:05:53 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:38:53.772 19:05:53 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:38:53.772 19:05:53 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:38:53.772 19:05:53 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:38:53.772 19:05:53 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:38:53.772 19:05:53 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:38:53.772 19:05:53 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:38:53.772 19:05:53 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:38:53.772 19:05:53 reactor_set_interrupt -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:38:53.772 19:05:53 reactor_set_interrupt -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:38:53.772 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 1 00:38:53.772 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:38:53.772 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:38:53.772 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:38:53.772 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:38:53.772 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:38:53.772 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 1 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 1 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:38:53.773 19:05:53 reactor_set_interrupt -- 
common/autotest_common.sh@96 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 1 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:38:53.773 19:05:53 
reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 1 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@158 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@166 -- # : 0 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 
00:38:53.773 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@173 -- # : 0 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@187 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@195 -- # export 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@202 -- # cat 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@255 -- # export QEMU_BIN= 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@255 -- # QEMU_BIN= 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@258 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@258 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:38:53.774 19:05:53 
reactor_set_interrupt -- common/autotest_common.sh@265 -- # export valgrind= 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@265 -- # valgrind= 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@271 -- # uname -s 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@281 -- # MAKE=make 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j10 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@301 -- # TEST_MODE= 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@320 -- # [[ -z 163365 ]] 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@320 -- # kill -0 163365 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local mount target_dir 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.oIRDmZ 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@357 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.oIRDmZ/tests/interrupt /tmp/spdk.oIRDmZ 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@329 -- # df -T 00:38:53.774 19:05:53 reactor_set_interrupt -- 
common/autotest_common.sh@329 -- # grep -v Filesystem 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=1248956416 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=1253683200 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4726784 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda1 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=ext4 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=9900318720 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=20616794112 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=10699698176 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=6263693312 00:38:53.774 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=6268403712 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4710400 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=5242880 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=5242880 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda15 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=vfat 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=103061504 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=109395968 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=6334464 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:38:53.775 19:05:53 
reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=1253675008 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=1253679104 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt/output 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=fuse.sshfs 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=97191743488 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=105088212992 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=2511036416 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:38:53.775 * Looking for test storage... 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@370 -- # local target_space new_size 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@374 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mount=/ 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@376 -- # target_space=9900318720 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ ext4 == tmpfs ]] 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ ext4 == ramfs ]] 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@383 -- # new_size=12914290688 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:53.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@391 -- # return 0 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # set -o errtrace 00:38:53.775 19:05:53 
reactor_set_interrupt -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # true 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@1689 -- # xtrace_fd 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:38:53.775 19:05:53 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:38:53.775 19:05:53 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:53.775 19:05:53 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:38:53.775 19:05:53 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:38:53.775 19:05:53 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:38:53.775 19:05:53 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:38:53.775 19:05:53 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:38:53.775 19:05:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:38:53.775 19:05:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:38:53.775 19:05:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:38:53.775 19:05:53 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:53.775 19:05:53 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:38:53.775 19:05:53 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=163408 00:38:53.775 19:05:53 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:53.775 19:05:53 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 163408 /var/tmp/spdk.sock 
00:38:53.775 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@831 -- # '[' -z 163408 ']' 00:38:53.775 19:05:53 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:38:53.776 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:53.776 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:53.776 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:53.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:53.776 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:53.776 19:05:53 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:53.776 [2024-07-25 19:05:53.721387] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:38:53.776 [2024-07-25 19:05:53.722496] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163408 ] 00:38:53.776 [2024-07-25 19:05:53.919962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:53.776 [2024-07-25 19:05:54.156481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:53.776 [2024-07-25 19:05:54.156617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:53.776 [2024-07-25 19:05:54.156619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:54.035 [2024-07-25 19:05:54.519665] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
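[editor's note] The NOTICE lines above confirm three reactors came up on cores 0-2 and that app_thread starts in interrupt mode. One way to eyeball that layout from the shell is the framework_get_reactors RPC; this is an assumption on my part (the test itself does not call it, and the exact field names can vary by SPDK version):

    # Sketch: dump reactor/thread layout from the running target (RPC and jq paths assumed).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_get_reactors \
        | jq '.reactors[] | {lcore, threads: [.lw_threads[].name]}'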
00:38:54.294 19:05:54 reactor_set_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:54.294 19:05:54 reactor_set_interrupt -- common/autotest_common.sh@864 -- # return 0 00:38:54.294 19:05:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:38:54.294 19:05:54 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:54.553 Malloc0 00:38:54.553 Malloc1 00:38:54.553 Malloc2 00:38:54.553 19:05:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:38:54.553 19:05:54 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:38:54.553 19:05:54 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:38:54.553 19:05:54 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:38:54.553 5000+0 records in 00:38:54.553 5000+0 records out 00:38:54.553 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0312779 s, 327 MB/s 00:38:54.553 19:05:55 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:38:54.812 AIO0 00:38:54.812 19:05:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 163408 00:38:54.812 19:05:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 163408 without_thd 00:38:54.812 19:05:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=163408 00:38:54.812 19:05:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:38:54.812 19:05:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:38:54.812 19:05:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:38:54.812 19:05:55 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:38:54.812 19:05:55 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:38:54.812 19:05:55 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:38:54.812 19:05:55 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:38:54.812 19:05:55 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:38:54.812 19:05:55 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:38:55.072 19:05:55 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:38:55.072 19:05:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:38:55.072 19:05:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:38:55.072 19:05:55 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:38:55.072 19:05:55 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:38:55.072 19:05:55 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:38:55.072 19:05:55 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:38:55.072 19:05:55 reactor_set_interrupt -- 
interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:38:55.072 19:05:55 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:38:55.331 19:05:55 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:38:55.332 19:05:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:38:55.332 spdk_thread ids are 1 on reactor0. 00:38:55.332 19:05:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:38:55.332 19:05:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:38:55.332 19:05:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 163408 0 00:38:55.332 19:05:55 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 163408 0 idle 00:38:55.332 19:05:55 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=163408 00:38:55.332 19:05:55 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:55.332 19:05:55 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:55.332 19:05:55 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:38:55.332 19:05:55 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:38:55.332 19:05:55 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:38:55.332 19:05:55 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:38:55.332 19:05:55 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:38:55.332 19:05:55 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:38:55.332 19:05:55 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 163408 -w 256 00:38:55.591 19:05:55 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 163408 root 20 0 20.1t 151932 31904 S 0.0 1.2 0:00.94 reactor_0' 00:38:55.591 19:05:55 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 163408 root 20 0 20.1t 151932 31904 S 0.0 1.2 0:00.94 reactor_0 00:38:55.591 19:05:55 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:38:55.591 19:05:55 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:38:55.591 19:05:55 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:38:55.591 19:05:55 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:38:55.591 19:05:55 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:38:55.591 19:05:55 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:38:55.591 19:05:55 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:38:55.591 19:05:55 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:38:55.592 19:05:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:38:55.592 19:05:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 163408 1 00:38:55.592 19:05:55 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 163408 1 idle 00:38:55.592 19:05:55 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=163408 00:38:55.592 19:05:55 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:38:55.592 19:05:55 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:55.592 19:05:55 reactor_set_interrupt -- 
interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:38:55.592 19:05:55 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:38:55.592 19:05:55 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:38:55.592 19:05:55 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:38:55.592 19:05:55 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:38:55.592 19:05:55 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 163408 -w 256 00:38:55.592 19:05:55 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:38:55.592 19:05:56 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 163421 root 20 0 20.1t 151932 31904 S 0.0 1.2 0:00.00 reactor_1' 00:38:55.592 19:05:56 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 163421 root 20 0 20.1t 151932 31904 S 0.0 1.2 0:00.00 reactor_1 00:38:55.592 19:05:56 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:38:55.592 19:05:56 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:38:55.592 19:05:56 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:38:55.592 19:05:56 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:38:55.592 19:05:56 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:38:55.592 19:05:56 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:38:55.592 19:05:56 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:38:55.592 19:05:56 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:38:55.592 19:05:56 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:38:55.592 19:05:56 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 163408 2 00:38:55.592 19:05:56 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 163408 2 idle 00:38:55.592 19:05:56 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=163408 00:38:55.592 19:05:56 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:38:55.592 19:05:56 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:55.592 19:05:56 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:38:55.592 19:05:56 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:38:55.592 19:05:56 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:38:55.592 19:05:56 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:38:55.592 19:05:56 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:38:55.592 19:05:56 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 163408 -w 256 00:38:55.592 19:05:56 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:38:55.851 19:05:56 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 163422 root 20 0 20.1t 151932 31904 S 0.0 1.2 0:00.00 reactor_2' 00:38:55.851 19:05:56 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 163422 root 20 0 20.1t 151932 31904 S 0.0 1.2 0:00.00 reactor_2 00:38:55.851 19:05:56 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:38:55.851 19:05:56 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:38:55.851 19:05:56 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:38:55.851 19:05:56 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:38:55.851 
19:05:56 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:38:55.851 19:05:56 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:38:55.851 19:05:56 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:38:55.851 19:05:56 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:38:55.851 19:05:56 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:38:55.851 19:05:56 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:38:55.852 19:05:56 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:38:56.111 [2024-07-25 19:05:56.450619] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:56.111 19:05:56 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:38:56.370 [2024-07-25 19:05:56.710310] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:38:56.370 [2024-07-25 19:05:56.711179] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:38:56.370 19:05:56 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:38:56.629 [2024-07-25 19:05:56.990125] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:38:56.629 [2024-07-25 19:05:56.990799] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:38:56.629 19:05:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:38:56.629 19:05:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 163408 0 00:38:56.629 19:05:57 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 163408 0 busy 00:38:56.629 19:05:57 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=163408 00:38:56.629 19:05:57 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:56.629 19:05:57 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:38:56.629 19:05:57 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:38:56.629 19:05:57 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:38:56.629 19:05:57 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:38:56.629 19:05:57 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:38:56.629 19:05:57 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 163408 -w 256 00:38:56.629 19:05:57 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:38:56.629 19:05:57 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 163408 root 20 0 20.1t 152040 31904 R 99.9 1.2 0:01.41 reactor_0' 00:38:56.629 19:05:57 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 163408 root 20 0 20.1t 152040 31904 R 99.9 1.2 0:01.41 reactor_0 00:38:56.629 19:05:57 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:38:56.629 19:05:57 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:38:56.629 19:05:57 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # cpu_rate=99.9 00:38:56.629 19:05:57 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:38:56.629 19:05:57 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:38:56.630 19:05:57 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:38:56.630 19:05:57 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:38:56.630 19:05:57 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:38:56.630 19:05:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:38:56.630 19:05:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 163408 2 00:38:56.630 19:05:57 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 163408 2 busy 00:38:56.630 19:05:57 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=163408 00:38:56.630 19:05:57 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:38:56.630 19:05:57 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:38:56.630 19:05:57 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:38:56.630 19:05:57 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:38:56.630 19:05:57 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:38:56.630 19:05:57 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:38:56.630 19:05:57 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 163408 -w 256 00:38:56.630 19:05:57 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:38:56.889 19:05:57 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 163422 root 20 0 20.1t 152040 31904 R 99.9 1.2 0:00.35 reactor_2' 00:38:56.889 19:05:57 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 163422 root 20 0 20.1t 152040 31904 R 99.9 1.2 0:00.35 reactor_2 00:38:56.889 19:05:57 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:38:56.889 19:05:57 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:38:56.889 19:05:57 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:38:56.889 19:05:57 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:38:56.889 19:05:57 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:38:56.889 19:05:57 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:38:56.889 19:05:57 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:38:56.889 19:05:57 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:38:56.889 19:05:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:38:57.148 [2024-07-25 19:05:57.602144] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
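[editor's note] Both reactor_is_busy checks above follow the same recipe: sample top once in batch/thread mode against the target pid, grep the reactor_N thread line, pull the %CPU column with awk, and compare against a threshold (busy means not below 70%, idle means not above 30%). A rough standalone equivalent of that check, with the thresholds copied from the trace and everything else an illustrative simplification:

    # Sketch: classify a reactor thread as busy/idle by sampling top once, as the trace does.
    reactor_cpu_rate() {               # usage: reactor_cpu_rate <pid> <reactor_index>
        local pid=$1 idx=$2 line
        line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}") || return 1
        # %CPU is field 9 in this top layout; truncate the fraction for integer comparison.
        awk '{print int($9)}' <<< "$line"
    }

    rate=$(reactor_cpu_rate 163408 2)
    if   (( rate >= 70 )); then echo "reactor_2 busy (${rate}%)"
    elif (( rate <= 30 )); then echo "reactor_2 idle (${rate}%)"
    else echo "reactor_2 in between (${rate}%)"; fi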
00:38:57.148 [2024-07-25 19:05:57.602557] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:38:57.148 19:05:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:38:57.148 19:05:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 163408 2 00:38:57.148 19:05:57 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 163408 2 idle 00:38:57.148 19:05:57 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=163408 00:38:57.148 19:05:57 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:38:57.148 19:05:57 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:57.148 19:05:57 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:38:57.148 19:05:57 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:38:57.148 19:05:57 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:38:57.148 19:05:57 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:38:57.148 19:05:57 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:38:57.148 19:05:57 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 163408 -w 256 00:38:57.148 19:05:57 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:38:57.407 19:05:57 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 163422 root 20 0 20.1t 152104 31904 S 0.0 1.2 0:00.61 reactor_2' 00:38:57.407 19:05:57 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 163422 root 20 0 20.1t 152104 31904 S 0.0 1.2 0:00.61 reactor_2 00:38:57.407 19:05:57 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:38:57.407 19:05:57 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:38:57.407 19:05:57 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:38:57.407 19:05:57 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:38:57.407 19:05:57 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:38:57.407 19:05:57 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:38:57.407 19:05:57 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:38:57.407 19:05:57 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:38:57.407 19:05:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:38:57.407 [2024-07-25 19:05:57.958115] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:38:57.407 [2024-07-25 19:05:57.958567] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:38:57.407 19:05:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:38:57.407 19:05:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:38:57.407 19:05:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:38:57.717 [2024-07-25 19:05:58.134595] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
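[editor's note] The trace above completes the without-threads round trip: move the app thread off reactor 0 with thread_set_cpumask, flip reactors 0 and 2 to poll mode with reactor_set_interrupt_mode -d, verify their threads spin near 100% CPU, then flip them back and restore the cpumask. The RPC sequence it drives, copied from the commands traced above (the thread id 1 is whatever id the earlier thread_get_stats lookup returned, not a constant):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Move app_thread off core 0, then switch reactors 0 and 2 to poll mode.
    $RPC thread_set_cpumask -i 1 -m 0x2
    $RPC --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
    $RPC --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d

    # ... reactor_0/reactor_2 threads should now show ~100% CPU in top ...

    # Re-enable interrupt mode and pin the app thread back to core 0.
    $RPC --plugin interrupt_plugin reactor_set_interrupt_mode 2
    $RPC --plugin interrupt_plugin reactor_set_interrupt_mode 0
    $RPC thread_set_cpumask -i 1 -m 0x1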
00:38:57.717 19:05:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 163408 0 00:38:57.717 19:05:58 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 163408 0 idle 00:38:57.717 19:05:58 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=163408 00:38:57.717 19:05:58 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:57.717 19:05:58 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:57.717 19:05:58 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:38:57.717 19:05:58 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:38:57.717 19:05:58 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:38:57.717 19:05:58 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:38:57.717 19:05:58 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:38:57.717 19:05:58 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 163408 -w 256 00:38:57.717 19:05:58 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:38:57.977 19:05:58 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 163408 root 20 0 20.1t 152192 31904 S 6.7 1.2 0:02.20 reactor_0' 00:38:57.977 19:05:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 163408 root 20 0 20.1t 152192 31904 S 6.7 1.2 0:02.20 reactor_0 00:38:57.977 19:05:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:38:57.977 19:05:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:38:57.977 19:05:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=6.7 00:38:57.977 19:05:58 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=6 00:38:57.977 19:05:58 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:38:57.977 19:05:58 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:38:57.977 19:05:58 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 6 -gt 30 ]] 00:38:57.977 19:05:58 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:38:57.977 19:05:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:38:57.977 19:05:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:38:57.977 19:05:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:38:57.977 19:05:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 163408 00:38:57.977 19:05:58 reactor_set_interrupt -- common/autotest_common.sh@950 -- # '[' -z 163408 ']' 00:38:57.977 19:05:58 reactor_set_interrupt -- common/autotest_common.sh@954 -- # kill -0 163408 00:38:57.977 19:05:58 reactor_set_interrupt -- common/autotest_common.sh@955 -- # uname 00:38:57.977 19:05:58 reactor_set_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:57.977 19:05:58 reactor_set_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 163408 00:38:57.978 19:05:58 reactor_set_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:57.978 19:05:58 reactor_set_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:57.978 killing process with pid 163408 00:38:57.978 19:05:58 reactor_set_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 163408' 00:38:57.978 19:05:58 reactor_set_interrupt -- 
common/autotest_common.sh@969 -- # kill 163408 00:38:57.978 19:05:58 reactor_set_interrupt -- common/autotest_common.sh@974 -- # wait 163408 00:38:59.885 19:05:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:38:59.885 19:05:59 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:38:59.885 19:06:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:38:59.885 19:06:00 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:59.885 19:06:00 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:38:59.885 19:06:00 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=163564 00:38:59.885 19:06:00 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:38:59.885 19:06:00 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:59.885 19:06:00 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 163564 /var/tmp/spdk.sock 00:38:59.885 19:06:00 reactor_set_interrupt -- common/autotest_common.sh@831 -- # '[' -z 163564 ']' 00:38:59.885 19:06:00 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:59.885 19:06:00 reactor_set_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:59.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:59.885 19:06:00 reactor_set_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:59.885 19:06:00 reactor_set_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:59.885 19:06:00 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:59.885 [2024-07-25 19:06:00.063421] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:38:59.885 [2024-07-25 19:06:00.063608] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163564 ] 00:38:59.885 [2024-07-25 19:06:00.239332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:00.144 [2024-07-25 19:06:00.482564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:00.144 [2024-07-25 19:06:00.482756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:00.144 [2024-07-25 19:06:00.482779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:00.403 [2024-07-25 19:06:00.849135] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
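[editor's note] After the first target is torn down, the restarted target (pid 163564 above) gets the same backing resources as the first pass did at 00:38:54: three Malloc bdevs plus an AIO bdev built on a 10 MB file. A standalone sketch of the aio part only, reusing the file path, block size and RPC arguments shown in the trace (the Malloc side goes through a separate rpc.py batch whose exact commands are not visible in this log):

    AIOFILE=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # 5000 x 2 KiB zeroed blocks = 10 MB backing file, exactly as in the dd above.
    dd if=/dev/zero of="$AIOFILE" bs=2048 count=5000

    # Expose the file as bdev AIO0 with a 2 KiB block size.
    $RPC bdev_aio_create "$AIOFILE" AIO0 2048

    # The test's cleanup later just removes the backing file:
    # rm -f "$AIOFILE"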
00:39:00.663 19:06:01 reactor_set_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:00.663 19:06:01 reactor_set_interrupt -- common/autotest_common.sh@864 -- # return 0 00:39:00.663 19:06:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:39:00.663 19:06:01 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:00.922 Malloc0 00:39:00.922 Malloc1 00:39:00.922 Malloc2 00:39:00.922 19:06:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:39:00.922 19:06:01 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:39:00.922 19:06:01 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:39:00.922 19:06:01 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:39:00.922 5000+0 records in 00:39:00.922 5000+0 records out 00:39:00.922 10240000 bytes (10 MB, 9.8 MiB) copied, 0.02257 s, 454 MB/s 00:39:00.922 19:06:01 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:39:01.180 AIO0 00:39:01.180 19:06:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 163564 00:39:01.180 19:06:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 163564 00:39:01.180 19:06:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=163564 00:39:01.180 19:06:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:39:01.180 19:06:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:39:01.180 19:06:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:39:01.180 19:06:01 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:39:01.180 19:06:01 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:39:01.180 19:06:01 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:39:01.180 19:06:01 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:39:01.180 19:06:01 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:39:01.180 19:06:01 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:39:01.439 19:06:01 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:39:01.439 19:06:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:39:01.439 19:06:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:39:01.439 19:06:01 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:39:01.439 19:06:01 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:39:01.439 19:06:01 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:39:01.439 19:06:01 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:39:01.439 19:06:01 reactor_set_interrupt -- interrupt/common.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:39:01.439 19:06:01 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:39:01.698 19:06:02 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:39:01.698 19:06:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:39:01.698 19:06:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:39:01.698 spdk_thread ids are 1 on reactor0. 00:39:01.698 19:06:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:39:01.698 19:06:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 163564 0 00:39:01.698 19:06:02 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 163564 0 idle 00:39:01.698 19:06:02 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=163564 00:39:01.698 19:06:02 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:01.698 19:06:02 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:01.698 19:06:02 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:39:01.698 19:06:02 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:39:01.698 19:06:02 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:39:01.698 19:06:02 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:39:01.698 19:06:02 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:39:01.698 19:06:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 163564 -w 256 00:39:01.698 19:06:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:39:01.957 19:06:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 163564 root 20 0 20.1t 151892 31928 S 0.0 1.2 0:00.89 reactor_0' 00:39:01.957 19:06:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 163564 root 20 0 20.1t 151892 31928 S 0.0 1.2 0:00.89 reactor_0 00:39:01.957 19:06:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:39:01.957 19:06:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:39:01.957 19:06:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:39:01.957 19:06:02 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:39:01.957 19:06:02 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:39:01.957 19:06:02 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:39:01.957 19:06:02 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:39:01.957 19:06:02 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:39:01.957 19:06:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:39:01.957 19:06:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 163564 1 00:39:01.957 19:06:02 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 163564 1 idle 00:39:01.957 19:06:02 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=163564 00:39:01.957 19:06:02 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:01.957 19:06:02 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:01.957 19:06:02 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle 
!= \b\u\s\y ]] 00:39:01.957 19:06:02 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:39:01.957 19:06:02 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:39:01.957 19:06:02 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:39:01.957 19:06:02 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:39:01.957 19:06:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 163564 -w 256 00:39:01.957 19:06:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 163567 root 20 0 20.1t 151892 31928 S 0.0 1.2 0:00.00 reactor_1' 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 163567 root 20 0 20.1t 151892 31928 S 0.0 1.2 0:00.00 reactor_1 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 163564 2 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 163564 2 idle 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=163564 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 163564 -w 256 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 163568 root 20 0 20.1t 151892 31928 S 0.0 1.2 0:00.00 reactor_2' 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 163568 root 20 0 20.1t 151892 31928 S 0.0 1.2 0:00.00 reactor_2 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:39:02.216 19:06:02 reactor_set_interrupt -- 
interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:39:02.216 19:06:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:39:02.475 [2024-07-25 19:06:03.000418] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:39:02.475 [2024-07-25 19:06:03.001087] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:39:02.475 [2024-07-25 19:06:03.001540] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:39:02.475 19:06:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:39:02.733 [2024-07-25 19:06:03.296169] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:39:02.733 [2024-07-25 19:06:03.296746] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:39:03.077 19:06:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:39:03.077 19:06:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 163564 0 00:39:03.077 19:06:03 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 163564 0 busy 00:39:03.077 19:06:03 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=163564 00:39:03.077 19:06:03 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:03.077 19:06:03 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:03.077 19:06:03 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:39:03.077 19:06:03 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:39:03.077 19:06:03 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:39:03.077 19:06:03 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:39:03.077 19:06:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 163564 -w 256 00:39:03.077 19:06:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:39:03.077 19:06:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 163564 root 20 0 20.1t 152004 31928 R 99.9 1.2 0:01.39 reactor_0' 00:39:03.077 19:06:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 163564 root 20 0 20.1t 152004 31928 R 99.9 1.2 0:01.39 reactor_0 00:39:03.078 19:06:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:39:03.078 19:06:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:39:03.078 19:06:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:39:03.078 19:06:03 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:39:03.078 19:06:03 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:39:03.078 19:06:03 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:39:03.078 
19:06:03 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:39:03.078 19:06:03 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:39:03.078 19:06:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:39:03.078 19:06:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 163564 2 00:39:03.078 19:06:03 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 163564 2 busy 00:39:03.078 19:06:03 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=163564 00:39:03.078 19:06:03 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:39:03.078 19:06:03 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:03.078 19:06:03 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:39:03.078 19:06:03 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:39:03.078 19:06:03 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:39:03.078 19:06:03 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:39:03.078 19:06:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 163564 -w 256 00:39:03.078 19:06:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 163568 root 20 0 20.1t 152004 31928 R 99.9 1.2 0:00.36 reactor_2' 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 163568 root 20 0 20.1t 152004 31928 R 99.9 1.2 0:00.36 reactor_2 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:39:03.337 [2024-07-25 19:06:03.876702] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
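[editor's note] The thd0_ids/thd2_ids arrays used in this with-threads pass (collected right after the restart above) come from the same jq filter as the first pass: dump thread stats over RPC and select the ids of threads whose cpumask matches the reactor of interest. A compact version of that lookup, equivalent to the filter in the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Thread ids currently pinned to reactor 0 (cpumask 0x1 appears as "1" in the RPC output).
    $RPC thread_get_stats \
        | jq --arg reactor_cpumask 1 '.threads[] | select(.cpumask == $reactor_cpumask) | .id'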
00:39:03.337 [2024-07-25 19:06:03.877062] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 163564 2 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 163564 2 idle 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=163564 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 163564 -w 256 00:39:03.337 19:06:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:39:03.596 19:06:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 163568 root 20 0 20.1t 152052 31928 S 0.0 1.2 0:00.58 reactor_2' 00:39:03.596 19:06:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 163568 root 20 0 20.1t 152052 31928 S 0.0 1.2 0:00.58 reactor_2 00:39:03.596 19:06:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:39:03.596 19:06:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:39:03.596 19:06:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:39:03.596 19:06:04 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:39:03.596 19:06:04 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:39:03.596 19:06:04 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:39:03.596 19:06:04 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:39:03.596 19:06:04 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:39:03.596 19:06:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:39:03.855 [2024-07-25 19:06:04.236734] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:39:03.855 [2024-07-25 19:06:04.237147] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
00:39:03.855 [2024-07-25 19:06:04.237192] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:39:03.855 19:06:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:39:03.855 19:06:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 163564 0 00:39:03.855 19:06:04 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 163564 0 idle 00:39:03.855 19:06:04 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=163564 00:39:03.855 19:06:04 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:03.855 19:06:04 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:03.855 19:06:04 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:39:03.855 19:06:04 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:39:03.855 19:06:04 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:39:03.855 19:06:04 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:39:03.855 19:06:04 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:39:03.856 19:06:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:39:03.856 19:06:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 163564 -w 256 00:39:03.856 19:06:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 163564 root 20 0 20.1t 152080 31928 S 0.0 1.2 0:02.15 reactor_0' 00:39:03.856 19:06:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:39:03.856 19:06:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 163564 root 20 0 20.1t 152080 31928 S 0.0 1.2 0:02.15 reactor_0 00:39:03.856 19:06:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:39:04.115 19:06:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:39:04.115 19:06:04 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:39:04.115 19:06:04 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:39:04.115 19:06:04 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:39:04.115 19:06:04 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:39:04.115 19:06:04 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:39:04.115 19:06:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:39:04.115 19:06:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:39:04.115 19:06:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:39:04.115 19:06:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 163564 00:39:04.115 19:06:04 reactor_set_interrupt -- common/autotest_common.sh@950 -- # '[' -z 163564 ']' 00:39:04.115 19:06:04 reactor_set_interrupt -- common/autotest_common.sh@954 -- # kill -0 163564 00:39:04.115 19:06:04 reactor_set_interrupt -- common/autotest_common.sh@955 -- # uname 00:39:04.115 19:06:04 reactor_set_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:04.115 19:06:04 reactor_set_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 163564 00:39:04.115 19:06:04 reactor_set_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:04.115 19:06:04 reactor_set_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
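[editor's note] The kill traced around this point is the guarded killprocess helper: confirm the pid still exists, check the command name is not something that must never be signalled (sudo), then kill and wait. A rough standalone equivalent of that pattern; the sudo check and message mirror the trace, the rest is a simplification (wait only reaps children of the current shell, which holds in the test):

    # Sketch: guarded teardown of the interrupt target, modelled on the killprocess trace.
    safe_kill() {                      # usage: safe_kill <pid>
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || return 0                 # already gone
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name == sudo ]] && { echo "refusing to kill $name"; return 1; }
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }

    safe_kill 163564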
00:39:04.115 19:06:04 reactor_set_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 163564' 00:39:04.115 killing process with pid 163564 00:39:04.115 19:06:04 reactor_set_interrupt -- common/autotest_common.sh@969 -- # kill 163564 00:39:04.115 19:06:04 reactor_set_interrupt -- common/autotest_common.sh@974 -- # wait 163564 00:39:06.025 19:06:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:39:06.025 19:06:06 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:39:06.025 00:39:06.025 real 0m12.762s 00:39:06.025 user 0m12.535s 00:39:06.025 sys 0m2.647s 00:39:06.025 19:06:06 reactor_set_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:06.025 19:06:06 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:06.025 ************************************ 00:39:06.025 END TEST reactor_set_interrupt 00:39:06.025 ************************************ 00:39:06.025 19:06:06 -- spdk/autotest.sh@198 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:39:06.025 19:06:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:06.025 19:06:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:06.025 19:06:06 -- common/autotest_common.sh@10 -- # set +x 00:39:06.025 ************************************ 00:39:06.025 START TEST reap_unregistered_poller 00:39:06.025 ************************************ 00:39:06.025 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:39:06.025 * Looking for test storage... 00:39:06.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:39:06.025 19:06:06 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:39:06.025 19:06:06 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:39:06.025 19:06:06 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:39:06.025 19:06:06 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:39:06.025 19:06:06 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
00:39:06.025 19:06:06 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:39:06.025 19:06:06 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:39:06.025 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:39:06.025 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:39:06.025 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:39:06.025 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:39:06.025 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:39:06.025 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:39:06.025 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:39:06.025 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:39:06.025 19:06:06 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:39:06.025 19:06:06 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:39:06.025 19:06:06 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:39:06.025 19:06:06 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:39:06.025 19:06:06 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:39:06.025 19:06:06 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 
00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:39:06.026 19:06:06 
reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=y 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:39:06.026 19:06:06 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:39:06.026 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:39:06.026 19:06:06 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:39:06.026 19:06:06 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:39:06.026 19:06:06 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:39:06.026 19:06:06 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:39:06.026 19:06:06 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:39:06.026 19:06:06 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:39:06.026 19:06:06 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:39:06.026 19:06:06 
reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:39:06.026 19:06:06 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:39:06.026 19:06:06 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:39:06.026 19:06:06 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:39:06.026 19:06:06 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:39:06.026 19:06:06 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:39:06.026 19:06:06 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:39:06.026 19:06:06 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:39:06.026 #define SPDK_CONFIG_H 00:39:06.026 #define SPDK_CONFIG_APPS 1 00:39:06.026 #define SPDK_CONFIG_ARCH native 00:39:06.026 #define SPDK_CONFIG_ASAN 1 00:39:06.026 #undef SPDK_CONFIG_AVAHI 00:39:06.026 #undef SPDK_CONFIG_CET 00:39:06.026 #define SPDK_CONFIG_COVERAGE 1 00:39:06.026 #define SPDK_CONFIG_CROSS_PREFIX 00:39:06.026 #undef SPDK_CONFIG_CRYPTO 00:39:06.026 #undef SPDK_CONFIG_CRYPTO_MLX5 00:39:06.026 #undef SPDK_CONFIG_CUSTOMOCF 00:39:06.026 #undef SPDK_CONFIG_DAOS 00:39:06.026 #define SPDK_CONFIG_DAOS_DIR 00:39:06.026 #define SPDK_CONFIG_DEBUG 1 00:39:06.026 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:39:06.026 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:39:06.026 #define SPDK_CONFIG_DPDK_INC_DIR 00:39:06.026 #define SPDK_CONFIG_DPDK_LIB_DIR 00:39:06.026 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:39:06.026 #undef SPDK_CONFIG_DPDK_UADK 00:39:06.026 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:39:06.026 #define SPDK_CONFIG_EXAMPLES 1 00:39:06.026 #undef SPDK_CONFIG_FC 00:39:06.026 #define SPDK_CONFIG_FC_PATH 00:39:06.026 #define SPDK_CONFIG_FIO_PLUGIN 1 00:39:06.026 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:39:06.027 #undef SPDK_CONFIG_FUSE 00:39:06.027 #undef SPDK_CONFIG_FUZZER 00:39:06.027 #define SPDK_CONFIG_FUZZER_LIB 00:39:06.027 #undef SPDK_CONFIG_GOLANG 00:39:06.027 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:39:06.027 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:39:06.027 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:39:06.027 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:39:06.027 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:39:06.027 #undef SPDK_CONFIG_HAVE_LIBBSD 00:39:06.027 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:39:06.027 #define SPDK_CONFIG_IDXD 1 00:39:06.027 #undef SPDK_CONFIG_IDXD_KERNEL 00:39:06.027 #undef SPDK_CONFIG_IPSEC_MB 00:39:06.027 #define SPDK_CONFIG_IPSEC_MB_DIR 00:39:06.027 #define SPDK_CONFIG_ISAL 1 00:39:06.027 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:39:06.027 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:39:06.027 #define SPDK_CONFIG_LIBDIR 00:39:06.027 #undef SPDK_CONFIG_LTO 00:39:06.027 #define SPDK_CONFIG_MAX_LCORES 128 00:39:06.027 #define SPDK_CONFIG_NVME_CUSE 1 00:39:06.027 #undef SPDK_CONFIG_OCF 00:39:06.027 #define SPDK_CONFIG_OCF_PATH 00:39:06.027 #define SPDK_CONFIG_OPENSSL_PATH 00:39:06.027 #undef SPDK_CONFIG_PGO_CAPTURE 00:39:06.027 #define SPDK_CONFIG_PGO_DIR 00:39:06.027 #undef SPDK_CONFIG_PGO_USE 00:39:06.027 #define SPDK_CONFIG_PREFIX /usr/local 00:39:06.027 #define SPDK_CONFIG_RAID5F 1 00:39:06.027 #undef SPDK_CONFIG_RBD 00:39:06.027 #define SPDK_CONFIG_RDMA 1 00:39:06.027 #define 
SPDK_CONFIG_RDMA_PROV verbs 00:39:06.027 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:39:06.027 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:39:06.027 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:39:06.027 #undef SPDK_CONFIG_SHARED 00:39:06.027 #undef SPDK_CONFIG_SMA 00:39:06.027 #define SPDK_CONFIG_TESTS 1 00:39:06.027 #undef SPDK_CONFIG_TSAN 00:39:06.027 #undef SPDK_CONFIG_UBLK 00:39:06.027 #define SPDK_CONFIG_UBSAN 1 00:39:06.027 #define SPDK_CONFIG_UNIT_TESTS 1 00:39:06.027 #undef SPDK_CONFIG_URING 00:39:06.027 #define SPDK_CONFIG_URING_PATH 00:39:06.027 #undef SPDK_CONFIG_URING_ZNS 00:39:06.027 #undef SPDK_CONFIG_USDT 00:39:06.027 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:39:06.027 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:39:06.027 #undef SPDK_CONFIG_VFIO_USER 00:39:06.027 #define SPDK_CONFIG_VFIO_USER_DIR 00:39:06.027 #define SPDK_CONFIG_VHOST 1 00:39:06.027 #define SPDK_CONFIG_VIRTIO 1 00:39:06.027 #undef SPDK_CONFIG_VTUNE 00:39:06.027 #define SPDK_CONFIG_VTUNE_DIR 00:39:06.027 #define SPDK_CONFIG_WERROR 1 00:39:06.027 #define SPDK_CONFIG_WPDK_DIR 00:39:06.027 #undef SPDK_CONFIG_XNVME 00:39:06.027 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:39:06.027 19:06:06 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:06.027 19:06:06 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:06.027 19:06:06 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:06.027 19:06:06 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:06.027 19:06:06 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:06.027 19:06:06 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:06.027 19:06:06 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:06.027 19:06:06 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:39:06.027 19:06:06 reap_unregistered_poller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:39:06.027 19:06:06 reap_unregistered_poller -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:39:06.027 19:06:06 reap_unregistered_poller -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:39:06.027 19:06:06 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:39:06.027 19:06:06 reap_unregistered_poller -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:39:06.027 19:06:06 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:39:06.027 19:06:06 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:39:06.027 19:06:06 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:39:06.027 19:06:06 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:39:06.027 19:06:06 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:39:06.027 19:06:06 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:39:06.027 19:06:06 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:39:06.027 19:06:06 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:39:06.027 19:06:06 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:39:06.027 19:06:06 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:39:06.027 19:06:06 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:39:06.027 19:06:06 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:39:06.027 19:06:06 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:39:06.027 19:06:06 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:39:06.027 19:06:06 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:39:06.027 19:06:06 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:39:06.027 19:06:06 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:39:06.027 19:06:06 reap_unregistered_poller -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:39:06.027 19:06:06 reap_unregistered_poller -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 1 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 1 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 0 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 1 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:39:06.027 19:06:06 reap_unregistered_poller -- 
common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:39:06.027 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 1 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:39:06.028 
19:06:06 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 1 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@166 -- # : 0 
00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@173 -- # : 0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@187 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:39:06.028 
19:06:06 reap_unregistered_poller -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@202 -- # cat 00:39:06.028 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@255 -- # export QEMU_BIN= 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@255 -- # QEMU_BIN= 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@258 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@258 -- # 
AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@265 -- # export valgrind= 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@265 -- # valgrind= 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@271 -- # uname -s 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@281 -- # MAKE=make 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j10 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@301 -- # TEST_MODE= 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@320 -- # [[ -z 163746 ]] 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@320 -- # kill -0 163746 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local mount target_dir 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.hPEoqr 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:39:06.029 19:06:06 reap_unregistered_poller -- 
common/autotest_common.sh@357 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.hPEoqr/tests/interrupt /tmp/spdk.hPEoqr 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@329 -- # df -T 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=1248956416 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=1253683200 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4726784 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda1 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=ext4 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=9900273664 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=20616794112 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=10699743232 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=6263693312 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=6268403712 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4710400 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=5242880 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=5242880 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda15 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=vfat 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@364 -- 
# avails["$mount"]=103061504 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=109395968 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=6334464 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=1253675008 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=1253679104 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt/output 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=fuse.sshfs 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=97191632896 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=105088212992 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=2511147008 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:39:06.029 * Looking for test storage... 
00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@370 -- # local target_space new_size 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@374 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mount=/ 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@376 -- # target_space=9900273664 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:39:06.029 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ ext4 == tmpfs ]] 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ ext4 == ramfs ]] 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@383 -- # new_size=12914335744 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:39:06.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@391 -- # return 0 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # set -o errtrace 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # true 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@1689 -- # xtrace_fd 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:39:06.030 19:06:06 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:39:06.030 19:06:06 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:06.030 19:06:06 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:39:06.030 19:06:06 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:39:06.030 19:06:06 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:39:06.030 19:06:06 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:39:06.030 19:06:06 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:39:06.030 19:06:06 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:39:06.030 19:06:06 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:39:06.030 19:06:06 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:39:06.030 19:06:06 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:06.030 19:06:06 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:39:06.030 19:06:06 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=163789 00:39:06.030 19:06:06 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:39:06.030 19:06:06 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 163789 /var/tmp/spdk.sock 00:39:06.030 19:06:06 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@831 -- # '[' -z 163789 ']' 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:06.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:06.030 19:06:06 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:39:06.030 [2024-07-25 19:06:06.518759] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:39:06.030 [2024-07-25 19:06:06.519004] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163789 ] 00:39:06.290 [2024-07-25 19:06:06.716078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:06.549 [2024-07-25 19:06:06.956491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:06.549 [2024-07-25 19:06:06.956687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:06.549 [2024-07-25 19:06:06.956664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:06.808 [2024-07-25 19:06:07.335322] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:07.067 19:06:07 reap_unregistered_poller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:07.067 19:06:07 reap_unregistered_poller -- common/autotest_common.sh@864 -- # return 0 00:39:07.067 19:06:07 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:39:07.067 19:06:07 reap_unregistered_poller -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:07.067 19:06:07 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:39:07.067 19:06:07 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:39:07.067 19:06:07 reap_unregistered_poller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:07.067 19:06:07 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:39:07.067 "name": "app_thread", 00:39:07.067 "id": 1, 00:39:07.067 "active_pollers": [], 00:39:07.067 "timed_pollers": [ 00:39:07.067 { 00:39:07.067 "name": "rpc_subsystem_poll_servers", 00:39:07.067 "id": 1, 00:39:07.067 "state": "waiting", 00:39:07.067 "run_count": 0, 00:39:07.067 "busy_count": 0, 00:39:07.067 "period_ticks": 8400000 00:39:07.067 } 00:39:07.067 ], 00:39:07.067 "paused_pollers": [] 00:39:07.067 }' 00:39:07.067 19:06:07 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:39:07.067 19:06:07 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:39:07.067 19:06:07 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:39:07.067 19:06:07 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:39:07.067 19:06:07 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:39:07.067 19:06:07 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:39:07.067 19:06:07 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:39:07.067 19:06:07 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:39:07.067 19:06:07 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 
count=5000 00:39:07.327 5000+0 records in 00:39:07.327 5000+0 records out 00:39:07.327 10240000 bytes (10 MB, 9.8 MiB) copied, 0.035478 s, 289 MB/s 00:39:07.327 19:06:07 reap_unregistered_poller -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:39:07.586 AIO0 00:39:07.586 19:06:07 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:07.586 19:06:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:39:07.845 19:06:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:39:07.845 19:06:08 reap_unregistered_poller -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:07.845 19:06:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:39:07.845 19:06:08 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:39:07.845 19:06:08 reap_unregistered_poller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:07.845 19:06:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:39:07.845 "name": "app_thread", 00:39:07.846 "id": 1, 00:39:07.846 "active_pollers": [], 00:39:07.846 "timed_pollers": [ 00:39:07.846 { 00:39:07.846 "name": "rpc_subsystem_poll_servers", 00:39:07.846 "id": 1, 00:39:07.846 "state": "waiting", 00:39:07.846 "run_count": 0, 00:39:07.846 "busy_count": 0, 00:39:07.846 "period_ticks": 8400000 00:39:07.846 } 00:39:07.846 ], 00:39:07.846 "paused_pollers": [] 00:39:07.846 }' 00:39:07.846 19:06:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:39:07.846 19:06:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:39:07.846 19:06:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:39:07.846 19:06:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:39:07.846 19:06:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:39:07.846 19:06:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:39:07.846 19:06:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:39:07.846 19:06:08 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 163789 00:39:07.846 19:06:08 reap_unregistered_poller -- common/autotest_common.sh@950 -- # '[' -z 163789 ']' 00:39:07.846 19:06:08 reap_unregistered_poller -- common/autotest_common.sh@954 -- # kill -0 163789 00:39:07.846 19:06:08 reap_unregistered_poller -- common/autotest_common.sh@955 -- # uname 00:39:07.846 19:06:08 reap_unregistered_poller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:07.846 19:06:08 reap_unregistered_poller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 163789 00:39:07.846 19:06:08 reap_unregistered_poller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:07.846 19:06:08 reap_unregistered_poller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:07.846 19:06:08 reap_unregistered_poller -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 163789' 00:39:07.846 killing process with pid 163789 00:39:07.846 19:06:08 reap_unregistered_poller -- common/autotest_common.sh@969 -- # kill 163789 00:39:07.846 19:06:08 reap_unregistered_poller -- common/autotest_common.sh@974 -- # wait 163789 00:39:09.753 19:06:09 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:39:09.753 19:06:09 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:39:09.753 00:39:09.753 real 0m3.641s 00:39:09.753 user 0m3.102s 00:39:09.753 sys 0m0.737s 00:39:09.753 19:06:09 reap_unregistered_poller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:09.753 19:06:09 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:39:09.753 ************************************ 00:39:09.753 END TEST reap_unregistered_poller 00:39:09.753 ************************************ 00:39:09.753 19:06:09 -- spdk/autotest.sh@202 -- # uname -s 00:39:09.753 19:06:09 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:39:09.753 19:06:09 -- spdk/autotest.sh@203 -- # [[ 1 -eq 1 ]] 00:39:09.753 19:06:09 -- spdk/autotest.sh@209 -- # [[ 0 -eq 0 ]] 00:39:09.753 19:06:09 -- spdk/autotest.sh@210 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:39:09.753 19:06:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:09.753 19:06:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:09.753 19:06:09 -- common/autotest_common.sh@10 -- # set +x 00:39:09.753 ************************************ 00:39:09.753 START TEST spdk_dd 00:39:09.753 ************************************ 00:39:09.753 19:06:09 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:39:09.753 * Looking for test storage... 
00:39:09.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:39:09.753 19:06:09 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:09.753 19:06:09 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:09.753 19:06:09 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:09.753 19:06:09 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:09.753 19:06:09 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:09.753 19:06:09 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:09.753 19:06:09 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:09.753 19:06:09 spdk_dd -- paths/export.sh@5 -- # export PATH 00:39:09.753 19:06:09 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:09.753 19:06:09 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:39:10.013 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:39:10.013 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:39:10.949 19:06:11 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:39:10.949 19:06:11 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@230 -- # local class 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@232 -- # local progif 00:39:10.949 19:06:11 spdk_dd -- 
scripts/common.sh@233 -- # printf %02x 1 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@233 -- # class=01 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@15 -- # local i 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@24 -- # return 0 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@325 -- # (( 1 )) 00:39:10.949 19:06:11 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:39:10.949 19:06:11 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@139 -- # local lib 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@143 -- # [[ libssl.so.3 == liburing.so.* ]] 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@143 -- # [[ libcrypto.so.3 == liburing.so.* ]] 
00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@143 -- # [[ libkeyutils.so.1 == liburing.so.* ]] 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:39:10.949 19:06:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:39:10.949 19:06:11 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:39:10.949 19:06:11 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:39:10.949 19:06:11 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:10.949 19:06:11 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:10.949 19:06:11 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:39:10.949 ************************************ 00:39:10.950 START TEST spdk_dd_basic_rw 00:39:10.950 ************************************ 00:39:10.950 19:06:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:39:11.209 * Looking for test storage... 
00:39:11.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:39:11.209 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:11.209 19:06:11 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:11.209 19:06:11 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:11.209 19:06:11 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:11.209 19:06:11 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:11.209 19:06:11 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:11.209 19:06:11 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:11.209 19:06:11 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:39:11.209 19:06:11 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:11.209 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:39:11.209 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:39:11.209 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:39:11.209 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:39:11.209 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:39:11.209 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' 
['traddr']='0000:00:10.0' ['trtype']='pcie') 00:39:11.209 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:39:11.209 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:11.209 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:11.209 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:39:11.209 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:39:11.209 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:39:11.209 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:39:11.470 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported 
Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational 
Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 111 Data Units Written: 7 Host Read Commands: 2374 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:39:11.470 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:39:11.471 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware 
Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset 
Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 111 Data Units Written: 7 Host Read Commands: 2374 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in 
LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:39:11.471 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:39:11.471 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:39:11.471 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:39:11.471 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:39:11.471 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:39:11.471 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:39:11.471 19:06:11 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:11.471 19:06:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:39:11.471 19:06:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:39:11.471 19:06:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:11.471 19:06:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:39:11.471 ************************************ 00:39:11.471 START TEST dd_bs_lt_native_bs 00:39:11.471 ************************************ 00:39:11.471 19:06:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:39:11.471 19:06:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:39:11.471 19:06:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 
--bs=2048 --json /dev/fd/61 00:39:11.471 19:06:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:11.471 19:06:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:11.471 19:06:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:11.471 19:06:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:11.471 19:06:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:11.471 19:06:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:11.471 19:06:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:11.471 19:06:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:11.472 19:06:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:39:11.472 { 00:39:11.472 "subsystems": [ 00:39:11.472 { 00:39:11.472 "subsystem": "bdev", 00:39:11.472 "config": [ 00:39:11.472 { 00:39:11.472 "params": { 00:39:11.472 "trtype": "pcie", 00:39:11.472 "traddr": "0000:00:10.0", 00:39:11.472 "name": "Nvme0" 00:39:11.472 }, 00:39:11.472 "method": "bdev_nvme_attach_controller" 00:39:11.472 }, 00:39:11.472 { 00:39:11.472 "method": "bdev_wait_for_examine" 00:39:11.472 } 00:39:11.472 ] 00:39:11.472 } 00:39:11.472 ] 00:39:11.472 } 00:39:11.730 [2024-07-25 19:06:12.148425] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
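The identify dump above is exactly what dd/common.sh scrapes to find the native block size. A minimal standalone sketch of that lookup, assuming the same repo path and PCIe address shown in the trace (the regexes mirror the trace; variable names are illustrative):

# Sketch only: reproduce the native-block-size lookup from the identify dump above.
SPDK=/home/vagrant/spdk_repo/spdk
id_out=$("$SPDK/build/bin/spdk_nvme_identify" -r 'trtype:pcie traddr:0000:00:10.0')
re_current='Current LBA Format: *LBA Format #([0-9]+)'
[[ $id_out =~ $re_current ]] && lbaf=${BASH_REMATCH[1]}      # "04" in this run
re_size="LBA Format #$lbaf: Data Size: *([0-9]+)"
[[ $id_out =~ $re_size ]] && native_bs=${BASH_REMATCH[1]}    # 4096 in this run
echo "native block size: $native_bs"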
00:39:11.730 [2024-07-25 19:06:12.149602] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164110 ] 00:39:11.988 [2024-07-25 19:06:12.322179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.247 [2024-07-25 19:06:12.634354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:12.813 [2024-07-25 19:06:13.097343] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:39:12.813 [2024-07-25 19:06:13.097458] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:13.747 [2024-07-25 19:06:13.959841] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:14.007 00:39:14.007 real 0m2.430s 00:39:14.007 user 0m1.998s 00:39:14.007 sys 0m0.379s 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:39:14.007 ************************************ 00:39:14.007 END TEST dd_bs_lt_native_bs 00:39:14.007 ************************************ 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:39:14.007 ************************************ 00:39:14.007 START TEST dd_rw 00:39:14.007 ************************************ 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in 
{0..2} 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:39:14.007 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:14.575 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:39:14.575 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:39:14.575 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:14.575 19:06:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:14.575 { 00:39:14.575 "subsystems": [ 00:39:14.575 { 00:39:14.575 "subsystem": "bdev", 00:39:14.575 "config": [ 00:39:14.575 { 00:39:14.575 "params": { 00:39:14.575 "trtype": "pcie", 00:39:14.575 "traddr": "0000:00:10.0", 00:39:14.575 "name": "Nvme0" 00:39:14.575 }, 00:39:14.575 "method": "bdev_nvme_attach_controller" 00:39:14.575 }, 00:39:14.575 { 00:39:14.575 "method": "bdev_wait_for_examine" 00:39:14.575 } 00:39:14.575 ] 00:39:14.575 } 00:39:14.575 ] 00:39:14.575 } 00:39:14.575 [2024-07-25 19:06:15.060018] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
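The basic_rw pass being traced here boils down to a write / read-back / compare loop over each block size (native_bs shifted by 0..2) and queue depth (1 and 64). A condensed sketch under the same assumptions, with /tmp scratch paths, a fixed count, and a hand-written config standing in for the script's gen_conf and gen_bytes helpers:

# Sketch of the basic_rw round trip: write a scratch file to the bdev, read it back, diff.
SPDK=/home/vagrant/spdk_repo/spdk
DD="$SPDK/build/bin/spdk_dd"
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
native_bs=4096
for s in 0 1 2; do
  bs=$((native_bs << s))                       # 4096, 8192, 16384
  count=15                                     # illustrative; the script picks count per block size
  head -c $((bs * count)) /dev/urandom > /tmp/dd.dump0   # stands in for gen_bytes
  for qd in 1 64; do
    "$DD" --if=/tmp/dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json /tmp/nvme0.json
    "$DD" --ib=Nvme0n1 --of=/tmp/dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json /tmp/nvme0.json
    diff -q /tmp/dd.dump0 /tmp/dd.dump1        # the round trip must be byte-identical
  done
done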
00:39:14.575 [2024-07-25 19:06:15.060792] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164168 ] 00:39:14.834 [2024-07-25 19:06:15.246755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:15.093 [2024-07-25 19:06:15.489187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:17.040  Copying: 60/60 [kB] (average 19 MBps) 00:39:17.040 00:39:17.041 19:06:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:39:17.041 19:06:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:39:17.041 19:06:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:17.041 19:06:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:17.041 { 00:39:17.041 "subsystems": [ 00:39:17.041 { 00:39:17.041 "subsystem": "bdev", 00:39:17.041 "config": [ 00:39:17.041 { 00:39:17.041 "params": { 00:39:17.041 "trtype": "pcie", 00:39:17.041 "traddr": "0000:00:10.0", 00:39:17.041 "name": "Nvme0" 00:39:17.041 }, 00:39:17.041 "method": "bdev_nvme_attach_controller" 00:39:17.041 }, 00:39:17.041 { 00:39:17.041 "method": "bdev_wait_for_examine" 00:39:17.041 } 00:39:17.041 ] 00:39:17.041 } 00:39:17.041 ] 00:39:17.041 } 00:39:17.041 [2024-07-25 19:06:17.286380] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:39:17.041 [2024-07-25 19:06:17.286602] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164204 ] 00:39:17.041 [2024-07-25 19:06:17.467102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:17.300 [2024-07-25 19:06:17.715594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:19.244  Copying: 60/60 [kB] (average 29 MBps) 00:39:19.244 00:39:19.244 19:06:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:19.244 19:06:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:39:19.244 19:06:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:39:19.244 19:06:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:39:19.244 19:06:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:39:19.244 19:06:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:39:19.244 19:06:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:39:19.244 19:06:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:39:19.244 19:06:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:39:19.244 19:06:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:19.244 19:06:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:19.244 { 00:39:19.244 "subsystems": [ 
00:39:19.244 { 00:39:19.244 "subsystem": "bdev", 00:39:19.244 "config": [ 00:39:19.244 { 00:39:19.244 "params": { 00:39:19.244 "trtype": "pcie", 00:39:19.244 "traddr": "0000:00:10.0", 00:39:19.244 "name": "Nvme0" 00:39:19.244 }, 00:39:19.244 "method": "bdev_nvme_attach_controller" 00:39:19.244 }, 00:39:19.244 { 00:39:19.244 "method": "bdev_wait_for_examine" 00:39:19.244 } 00:39:19.244 ] 00:39:19.244 } 00:39:19.244 ] 00:39:19.244 } 00:39:19.244 [2024-07-25 19:06:19.633220] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:39:19.244 [2024-07-25 19:06:19.633444] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164245 ] 00:39:19.244 [2024-07-25 19:06:19.819020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:19.503 [2024-07-25 19:06:20.062483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:21.446  Copying: 1024/1024 [kB] (average 333 MBps) 00:39:21.446 00:39:21.446 19:06:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:39:21.446 19:06:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:39:21.446 19:06:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:39:21.446 19:06:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:39:21.446 19:06:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:39:21.446 19:06:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:39:21.446 19:06:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:21.704 19:06:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:39:21.704 19:06:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:39:21.704 19:06:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:21.704 19:06:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:21.963 { 00:39:21.963 "subsystems": [ 00:39:21.963 { 00:39:21.963 "subsystem": "bdev", 00:39:21.963 "config": [ 00:39:21.963 { 00:39:21.963 "params": { 00:39:21.963 "trtype": "pcie", 00:39:21.963 "traddr": "0000:00:10.0", 00:39:21.963 "name": "Nvme0" 00:39:21.963 }, 00:39:21.963 "method": "bdev_nvme_attach_controller" 00:39:21.963 }, 00:39:21.963 { 00:39:21.963 "method": "bdev_wait_for_examine" 00:39:21.963 } 00:39:21.963 ] 00:39:21.963 } 00:39:21.963 ] 00:39:21.963 } 00:39:21.963 [2024-07-25 19:06:22.333838] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:39:21.963 [2024-07-25 19:06:22.334059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164277 ] 00:39:21.963 [2024-07-25 19:06:22.520191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:22.222 [2024-07-25 19:06:22.759790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:24.170  Copying: 60/60 [kB] (average 29 MBps) 00:39:24.170 00:39:24.170 19:06:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:39:24.170 19:06:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:39:24.170 19:06:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:24.170 19:06:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:24.170 { 00:39:24.170 "subsystems": [ 00:39:24.170 { 00:39:24.170 "subsystem": "bdev", 00:39:24.170 "config": [ 00:39:24.170 { 00:39:24.170 "params": { 00:39:24.170 "trtype": "pcie", 00:39:24.170 "traddr": "0000:00:10.0", 00:39:24.170 "name": "Nvme0" 00:39:24.170 }, 00:39:24.170 "method": "bdev_nvme_attach_controller" 00:39:24.170 }, 00:39:24.170 { 00:39:24.170 "method": "bdev_wait_for_examine" 00:39:24.170 } 00:39:24.170 ] 00:39:24.170 } 00:39:24.170 ] 00:39:24.171 } 00:39:24.171 [2024-07-25 19:06:24.663305] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:39:24.171 [2024-07-25 19:06:24.663527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164305 ] 00:39:24.429 [2024-07-25 19:06:24.843514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:24.688 [2024-07-25 19:06:25.074381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:26.326  Copying: 60/60 [kB] (average 58 MBps) 00:39:26.326 00:39:26.326 19:06:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:26.326 19:06:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:39:26.326 19:06:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:39:26.326 19:06:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:39:26.326 19:06:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:39:26.326 19:06:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:39:26.326 19:06:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:39:26.326 19:06:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:39:26.326 19:06:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:39:26.326 19:06:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:26.326 19:06:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:26.326 { 00:39:26.326 "subsystems": [ 
00:39:26.326 { 00:39:26.326 "subsystem": "bdev", 00:39:26.326 "config": [ 00:39:26.326 { 00:39:26.326 "params": { 00:39:26.326 "trtype": "pcie", 00:39:26.326 "traddr": "0000:00:10.0", 00:39:26.326 "name": "Nvme0" 00:39:26.326 }, 00:39:26.326 "method": "bdev_nvme_attach_controller" 00:39:26.326 }, 00:39:26.326 { 00:39:26.326 "method": "bdev_wait_for_examine" 00:39:26.326 } 00:39:26.326 ] 00:39:26.326 } 00:39:26.326 ] 00:39:26.326 } 00:39:26.326 [2024-07-25 19:06:26.875289] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:39:26.326 [2024-07-25 19:06:26.876012] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164345 ] 00:39:26.585 [2024-07-25 19:06:27.059059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:26.844 [2024-07-25 19:06:27.301901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:28.792  Copying: 1024/1024 [kB] (average 1000 MBps) 00:39:28.792 00:39:28.792 19:06:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:39:28.792 19:06:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:39:28.792 19:06:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:39:28.792 19:06:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:39:28.792 19:06:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:39:28.792 19:06:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:39:28.792 19:06:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:39:28.792 19:06:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:29.051 19:06:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:39:29.051 19:06:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:39:29.052 19:06:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:29.052 19:06:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:29.311 { 00:39:29.311 "subsystems": [ 00:39:29.311 { 00:39:29.311 "subsystem": "bdev", 00:39:29.311 "config": [ 00:39:29.311 { 00:39:29.311 "params": { 00:39:29.311 "trtype": "pcie", 00:39:29.311 "traddr": "0000:00:10.0", 00:39:29.311 "name": "Nvme0" 00:39:29.311 }, 00:39:29.311 "method": "bdev_nvme_attach_controller" 00:39:29.311 }, 00:39:29.311 { 00:39:29.311 "method": "bdev_wait_for_examine" 00:39:29.311 } 00:39:29.311 ] 00:39:29.311 } 00:39:29.311 ] 00:39:29.311 } 00:39:29.311 [2024-07-25 19:06:29.666276] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
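Between cases the trace calls clear_nvme, which is just spdk_dd zeroing the first MiB of the bdev so the next compare starts from a clean window; a standalone equivalent (reusing the illustrative config file from the sketch above):

SPDK=/home/vagrant/spdk_repo/spdk
# Zero 1 MiB at the start of Nvme0n1, mirroring the clear_nvme helper in the trace.
"$SPDK/build/bin/spdk_dd" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /tmp/nvme0.json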
00:39:29.311 [2024-07-25 19:06:29.666501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164379 ] 00:39:29.311 [2024-07-25 19:06:29.854415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:29.570 [2024-07-25 19:06:30.103305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:31.519  Copying: 56/56 [kB] (average 27 MBps) 00:39:31.519 00:39:31.519 19:06:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:39:31.519 19:06:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:39:31.519 19:06:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:31.519 19:06:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:31.519 { 00:39:31.519 "subsystems": [ 00:39:31.519 { 00:39:31.519 "subsystem": "bdev", 00:39:31.519 "config": [ 00:39:31.519 { 00:39:31.519 "params": { 00:39:31.519 "trtype": "pcie", 00:39:31.519 "traddr": "0000:00:10.0", 00:39:31.519 "name": "Nvme0" 00:39:31.519 }, 00:39:31.519 "method": "bdev_nvme_attach_controller" 00:39:31.519 }, 00:39:31.519 { 00:39:31.519 "method": "bdev_wait_for_examine" 00:39:31.519 } 00:39:31.519 ] 00:39:31.519 } 00:39:31.519 ] 00:39:31.519 } 00:39:31.519 [2024-07-25 19:06:31.897841] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:39:31.519 [2024-07-25 19:06:31.898065] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164411 ] 00:39:31.519 [2024-07-25 19:06:32.079296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:31.778 [2024-07-25 19:06:32.321718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:33.722  Copying: 56/56 [kB] (average 27 MBps) 00:39:33.722 00:39:33.722 19:06:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:33.722 19:06:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:39:33.722 19:06:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:39:33.722 19:06:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:39:33.722 19:06:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:39:33.722 19:06:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:39:33.722 19:06:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:39:33.722 19:06:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:39:33.722 19:06:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:39:33.722 19:06:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:33.722 19:06:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:33.722 { 00:39:33.722 "subsystems": [ 
00:39:33.722 { 00:39:33.722 "subsystem": "bdev", 00:39:33.722 "config": [ 00:39:33.722 { 00:39:33.722 "params": { 00:39:33.722 "trtype": "pcie", 00:39:33.722 "traddr": "0000:00:10.0", 00:39:33.722 "name": "Nvme0" 00:39:33.722 }, 00:39:33.722 "method": "bdev_nvme_attach_controller" 00:39:33.722 }, 00:39:33.722 { 00:39:33.722 "method": "bdev_wait_for_examine" 00:39:33.722 } 00:39:33.722 ] 00:39:33.722 } 00:39:33.722 ] 00:39:33.722 } 00:39:33.722 [2024-07-25 19:06:34.259541] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:39:33.722 [2024-07-25 19:06:34.259806] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164447 ] 00:39:33.981 [2024-07-25 19:06:34.438342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:34.241 [2024-07-25 19:06:34.670750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:36.187  Copying: 1024/1024 [kB] (average 1000 MBps) 00:39:36.187 00:39:36.187 19:06:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:39:36.187 19:06:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:39:36.187 19:06:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:39:36.187 19:06:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:39:36.187 19:06:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:39:36.187 19:06:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:39:36.187 19:06:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:36.447 19:06:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:39:36.447 19:06:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:39:36.447 19:06:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:36.447 19:06:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:36.447 { 00:39:36.447 "subsystems": [ 00:39:36.447 { 00:39:36.447 "subsystem": "bdev", 00:39:36.447 "config": [ 00:39:36.447 { 00:39:36.447 "params": { 00:39:36.447 "trtype": "pcie", 00:39:36.447 "traddr": "0000:00:10.0", 00:39:36.447 "name": "Nvme0" 00:39:36.447 }, 00:39:36.447 "method": "bdev_nvme_attach_controller" 00:39:36.447 }, 00:39:36.447 { 00:39:36.447 "method": "bdev_wait_for_examine" 00:39:36.447 } 00:39:36.447 ] 00:39:36.447 } 00:39:36.447 ] 00:39:36.447 } 00:39:36.447 [2024-07-25 19:06:36.915539] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
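Each dd_rw iteration in this log follows the same four-step cycle: write count*bs bytes from dd.dump0 to the Nvme0n1 bdev, read the same region back into dd.dump1, compare the two files, then clear the bdev by rewriting its first megabyte with zeros. A condensed sketch of one iteration (bs=8192, qd=1, count=7, i.e. 57344 bytes), reconstructed from the commands above; conf.json stands in for the /dev/fd/62 descriptor used in the actual run:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    TESTDIR=/home/vagrant/spdk_repo/spdk/test/dd

    # 1. write 7 x 8 KiB from the dump file to the NVMe bdev
    $DD --if=$TESTDIR/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json conf.json
    # 2. read the same 57344 bytes back into a second dump file
    $DD --ib=Nvme0n1 --of=$TESTDIR/dd.dump1 --bs=8192 --qd=1 --count=7 --json conf.json
    # 3. the round trip must preserve the data
    diff -q $TESTDIR/dd.dump0 $TESTDIR/dd.dump1
    # 4. clear_nvme: overwrite the first MiB of the bdev with zeros
    $DD --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json conf.json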
00:39:36.447 [2024-07-25 19:06:36.915758] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164478 ] 00:39:36.707 [2024-07-25 19:06:37.101732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:36.966 [2024-07-25 19:06:37.332977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:38.604  Copying: 56/56 [kB] (average 54 MBps) 00:39:38.604 00:39:38.604 19:06:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:39:38.604 19:06:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:39:38.604 19:06:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:38.604 19:06:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:38.863 { 00:39:38.863 "subsystems": [ 00:39:38.863 { 00:39:38.863 "subsystem": "bdev", 00:39:38.863 "config": [ 00:39:38.863 { 00:39:38.863 "params": { 00:39:38.863 "trtype": "pcie", 00:39:38.863 "traddr": "0000:00:10.0", 00:39:38.863 "name": "Nvme0" 00:39:38.863 }, 00:39:38.863 "method": "bdev_nvme_attach_controller" 00:39:38.863 }, 00:39:38.863 { 00:39:38.864 "method": "bdev_wait_for_examine" 00:39:38.864 } 00:39:38.864 ] 00:39:38.864 } 00:39:38.864 ] 00:39:38.864 } 00:39:38.864 [2024-07-25 19:06:39.246295] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:39:38.864 [2024-07-25 19:06:39.246507] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164517 ] 00:39:38.864 [2024-07-25 19:06:39.433782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:39.431 [2024-07-25 19:06:39.758286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:41.071  Copying: 56/56 [kB] (average 54 MBps) 00:39:41.071 00:39:41.071 19:06:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:41.071 19:06:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:39:41.071 19:06:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:39:41.071 19:06:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:39:41.071 19:06:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:39:41.071 19:06:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:39:41.071 19:06:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:39:41.071 19:06:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:39:41.071 19:06:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:39:41.071 19:06:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:41.071 19:06:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:41.071 { 00:39:41.071 "subsystems": [ 
00:39:41.071 { 00:39:41.071 "subsystem": "bdev", 00:39:41.071 "config": [ 00:39:41.071 { 00:39:41.071 "params": { 00:39:41.071 "trtype": "pcie", 00:39:41.071 "traddr": "0000:00:10.0", 00:39:41.071 "name": "Nvme0" 00:39:41.071 }, 00:39:41.071 "method": "bdev_nvme_attach_controller" 00:39:41.071 }, 00:39:41.071 { 00:39:41.071 "method": "bdev_wait_for_examine" 00:39:41.071 } 00:39:41.071 ] 00:39:41.071 } 00:39:41.071 ] 00:39:41.071 } 00:39:41.071 [2024-07-25 19:06:41.482975] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:39:41.071 [2024-07-25 19:06:41.483144] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164552 ] 00:39:41.071 [2024-07-25 19:06:41.642306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:41.330 [2024-07-25 19:06:41.832066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:43.280  Copying: 1024/1024 [kB] (average 500 MBps) 00:39:43.280 00:39:43.280 19:06:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:39:43.280 19:06:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:39:43.280 19:06:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:39:43.280 19:06:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:39:43.280 19:06:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:39:43.280 19:06:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:39:43.280 19:06:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:39:43.280 19:06:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:43.280 19:06:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:39:43.280 19:06:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:39:43.280 19:06:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:43.280 19:06:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:43.547 [2024-07-25 19:06:43.894347] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:39:43.547 [2024-07-25 19:06:43.894513] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164583 ] 00:39:43.547 { 00:39:43.547 "subsystems": [ 00:39:43.547 { 00:39:43.547 "subsystem": "bdev", 00:39:43.547 "config": [ 00:39:43.547 { 00:39:43.547 "params": { 00:39:43.547 "trtype": "pcie", 00:39:43.547 "traddr": "0000:00:10.0", 00:39:43.547 "name": "Nvme0" 00:39:43.547 }, 00:39:43.547 "method": "bdev_nvme_attach_controller" 00:39:43.547 }, 00:39:43.547 { 00:39:43.547 "method": "bdev_wait_for_examine" 00:39:43.547 } 00:39:43.547 ] 00:39:43.547 } 00:39:43.547 ] 00:39:43.547 } 00:39:43.547 [2024-07-25 19:06:44.053465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:43.821 [2024-07-25 19:06:44.253026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:45.507  Copying: 48/48 [kB] (average 46 MBps) 00:39:45.507 00:39:45.507 19:06:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:39:45.507 19:06:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:39:45.507 19:06:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:45.507 19:06:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:45.507 { 00:39:45.507 "subsystems": [ 00:39:45.507 { 00:39:45.507 "subsystem": "bdev", 00:39:45.507 "config": [ 00:39:45.507 { 00:39:45.507 "params": { 00:39:45.507 "trtype": "pcie", 00:39:45.507 "traddr": "0000:00:10.0", 00:39:45.507 "name": "Nvme0" 00:39:45.507 }, 00:39:45.507 "method": "bdev_nvme_attach_controller" 00:39:45.507 }, 00:39:45.507 { 00:39:45.507 "method": "bdev_wait_for_examine" 00:39:45.507 } 00:39:45.507 ] 00:39:45.507 } 00:39:45.507 ] 00:39:45.507 } 00:39:45.507 [2024-07-25 19:06:45.874222] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:39:45.507 [2024-07-25 19:06:45.874425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164611 ] 00:39:45.507 [2024-07-25 19:06:46.054636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:45.766 [2024-07-25 19:06:46.249618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:47.712  Copying: 48/48 [kB] (average 46 MBps) 00:39:47.712 00:39:47.712 19:06:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:47.712 19:06:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:39:47.712 19:06:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:39:47.712 19:06:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:39:47.712 19:06:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:39:47.712 19:06:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:39:47.712 19:06:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:39:47.712 19:06:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:39:47.712 19:06:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:39:47.712 19:06:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:47.712 19:06:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:47.712 { 00:39:47.712 "subsystems": [ 00:39:47.712 { 00:39:47.712 "subsystem": "bdev", 00:39:47.712 "config": [ 00:39:47.712 { 00:39:47.712 "params": { 00:39:47.712 "trtype": "pcie", 00:39:47.712 "traddr": "0000:00:10.0", 00:39:47.712 "name": "Nvme0" 00:39:47.712 }, 00:39:47.712 "method": "bdev_nvme_attach_controller" 00:39:47.712 }, 00:39:47.712 { 00:39:47.712 "method": "bdev_wait_for_examine" 00:39:47.712 } 00:39:47.712 ] 00:39:47.712 } 00:39:47.712 ] 00:39:47.712 } 00:39:47.712 [2024-07-25 19:06:47.970981] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:39:47.712 [2024-07-25 19:06:47.971375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164643 ] 00:39:47.712 [2024-07-25 19:06:48.150361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:47.971 [2024-07-25 19:06:48.341142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:49.609  Copying: 1024/1024 [kB] (average 1000 MBps) 00:39:49.609 00:39:49.609 19:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:39:49.609 19:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:39:49.609 19:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:39:49.609 19:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:39:49.609 19:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:39:49.609 19:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:39:49.609 19:06:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:49.869 19:06:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:39:49.869 19:06:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:39:49.869 19:06:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:49.869 19:06:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:49.869 [2024-07-25 19:06:50.281461] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:39:49.869 [2024-07-25 19:06:50.281738] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164673 ] 00:39:49.869 { 00:39:49.869 "subsystems": [ 00:39:49.869 { 00:39:49.869 "subsystem": "bdev", 00:39:49.869 "config": [ 00:39:49.869 { 00:39:49.869 "params": { 00:39:49.869 "trtype": "pcie", 00:39:49.869 "traddr": "0000:00:10.0", 00:39:49.869 "name": "Nvme0" 00:39:49.869 }, 00:39:49.869 "method": "bdev_nvme_attach_controller" 00:39:49.869 }, 00:39:49.869 { 00:39:49.869 "method": "bdev_wait_for_examine" 00:39:49.869 } 00:39:49.869 ] 00:39:49.869 } 00:39:49.869 ] 00:39:49.869 } 00:39:49.869 [2024-07-25 19:06:50.439513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:50.128 [2024-07-25 19:06:50.640103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:52.075  Copying: 48/48 [kB] (average 46 MBps) 00:39:52.075 00:39:52.075 19:06:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:39:52.075 19:06:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:39:52.075 19:06:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:52.075 19:06:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:52.075 { 00:39:52.075 "subsystems": [ 00:39:52.075 { 00:39:52.075 "subsystem": "bdev", 00:39:52.075 "config": [ 00:39:52.075 { 00:39:52.075 "params": { 00:39:52.075 "trtype": "pcie", 00:39:52.075 "traddr": "0000:00:10.0", 00:39:52.075 "name": "Nvme0" 00:39:52.075 }, 00:39:52.075 "method": "bdev_nvme_attach_controller" 00:39:52.075 }, 00:39:52.075 { 00:39:52.075 "method": "bdev_wait_for_examine" 00:39:52.075 } 00:39:52.075 ] 00:39:52.075 } 00:39:52.075 ] 00:39:52.075 } 00:39:52.075 [2024-07-25 19:06:52.469205] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:39:52.075 [2024-07-25 19:06:52.469609] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164705 ] 00:39:52.075 [2024-07-25 19:06:52.646736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:52.335 [2024-07-25 19:06:52.883088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:54.296  Copying: 48/48 [kB] (average 46 MBps) 00:39:54.296 00:39:54.296 19:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:54.296 19:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:39:54.296 19:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:39:54.296 19:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:39:54.296 19:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:39:54.296 19:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:39:54.296 19:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:39:54.296 19:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:39:54.296 19:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:39:54.296 19:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:54.296 19:06:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:54.296 { 00:39:54.296 "subsystems": [ 00:39:54.296 { 00:39:54.296 "subsystem": "bdev", 00:39:54.296 "config": [ 00:39:54.296 { 00:39:54.296 "params": { 00:39:54.296 "trtype": "pcie", 00:39:54.296 "traddr": "0000:00:10.0", 00:39:54.296 "name": "Nvme0" 00:39:54.296 }, 00:39:54.296 "method": "bdev_nvme_attach_controller" 00:39:54.296 }, 00:39:54.296 { 00:39:54.296 "method": "bdev_wait_for_examine" 00:39:54.296 } 00:39:54.296 ] 00:39:54.296 } 00:39:54.296 ] 00:39:54.296 } 00:39:54.296 [2024-07-25 19:06:54.553299] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
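The cycle above is repeated over a small matrix of block sizes and queue depths. Only the combinations that appear in this run are listed in the sketch below; the real basic_rw.sh generates the arrays itself:

    # bs/qd matrix exercised in this run (the script generates these arrays)
    bss=(8192 16384)   # with count=7 (57344 B) and count=3 (49152 B) respectively
    qds=(1 64)
    for bs in "${bss[@]}"; do
      for qd in "${qds[@]}"; do
        : # write dd.dump0 -> Nvme0n1, read back -> dd.dump1, diff, clear_nvme
      done
    done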
00:39:54.296 [2024-07-25 19:06:54.553674] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164737 ] 00:39:54.296 [2024-07-25 19:06:54.730027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:54.555 [2024-07-25 19:06:54.918972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:56.191  Copying: 1024/1024 [kB] (average 500 MBps) 00:39:56.191 00:39:56.191 ************************************ 00:39:56.191 END TEST dd_rw 00:39:56.191 ************************************ 00:39:56.191 00:39:56.191 real 0m42.037s 00:39:56.191 user 0m34.730s 00:39:56.191 sys 0m5.992s 00:39:56.191 19:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:56.191 19:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:56.191 19:06:56 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:39:56.191 19:06:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:56.191 19:06:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:56.191 19:06:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:39:56.191 ************************************ 00:39:56.191 START TEST dd_rw_offset 00:39:56.191 ************************************ 00:39:56.191 19:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:39:56.191 19:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:39:56.191 19:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:39:56.191 19:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:39:56.191 19:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:39:56.191 19:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:39:56.191 19:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=s8drwe21dgm1si49vocmrtyo94xx2dpadsgbouu783yoe43t7am18u5ypxm7tmcbg9hy7qink891jfm8sqdn5ylt9mtpt8jr25i2cgbcdtgbxntcy4ata023qaar6az8rb1ktnwux98ukg0q135xy4uonwlkyshjg61xas6aimvj86cdjno5tdqx2anaf1rotjih4vimrx4foz45c3u0gt1qebllb58atsdq3fv3p6jattt8wu2kuohxxtoeu9lhbzmo4sou19mktzw8dyixohnlk8gx39miwfrw15kmkh1xuku55v7gwoqj76wp6i9ez5a2nxi6pkkgo50vxyvqks08liduv862k0kdozu736r15fzii9xtw1zv04kkyja3gqp1kh9oq9h5z7ggi9305bfut0nv3s5ntft5wk3v660ehfifsblm1kjcps4t0fduq27ovvmodt50bz37bfpameug5xnvs5cchhui5055k53fa5z7lj8smktpaevaj6ud1jq3tz8p3j8acf20zernlbwz53jvaythu4d8gn1nl6ines7nxbsshrsyvwnmmfiw1fbw4elz8mn7siglzsxpg70q5d1jzy9uo17qa789axm27f42vemsnkbjn2sopv7v2i9u2ya21erywuenbk06keiukvlcd86w7efskmwyceuuzjpa8ycstvkt5c4cdbdwie760koteqhye8jwc862aoc2nlzt15qiem69gqlzdzdyza8umv1pvz389jb45vehwxwayzd6jozt8ym32t1pbrmjsdko7gd4m4xtewlht0i9ij28bwxtlrob9b3coca2eizew1nkbkjuwrtiw4q1marr6v0yzd1fsvr5eck7s6dk45mp5ixmpj8w5k9n6cxib8pnzovl2cljbzo2qelo4ykd5vt0m9tmdmdxdva50or6gl31ugx4i1yi99u6u8zezl2fh75fdemaxyj24dfd2j236fg82b5xwwr8k2w58kdsm5nocj3rhld0xdm970587dkeqescnj7oqvzpykn44by3trsm3ztswr0sa3igif0lwqwot9ny3stn351y4bm6bc2n6dij6rvvvi5eixjxjo1k4xwx1i2dbscmznmm6dywnec0zlji8liqw1a4cllyr7byb4m2jnt4l4o0mdayclpq55h5vu7z7skgpi95i7qqnn2komctcutaqmcf94jvnf2kg93jzlpvdtg142grracowl107dhwsmoujiqybs64ziif4fqy56ye1xpgrint6r97x2hbjoedbbo7bu834j6en89b5hsjquw38f1yh2elw32hzkl7xqmdqybx26z25rsyk57sel9056s36z5cbs7wmzdhy550rxisyd2g8ds1hcry9m73xcn0t6bhzueucqlzk8wngby4w73u87v6tbbplzeso8ksye5z7lnw3e8rqii3hlbrgou0rp0vmcevwaq7tqaagvofe7sx5r6tcc4fts3e0lp6q1mg2f7oqkhhqp2wd9tte4zkmzs7s7ru778cru0foqc9sswaimeeycxvtce39yvcgb3r4inya22yacs6pmmwkr1xqi0j2gjxqidwgpw9hk6d7wwkd7n61zizaoilhpmh031y3ath2hufm8or3oz09ef0q6ismna30i2lbhcwqhkv02ssxlwoijle946h3dzfew6v2u1uha73s9f52zoy5vxs12nhq4wfzodsu9nq54uar0yjquyqqs48gommmlkm6duqywb03606evw5xyxfez08nm77i23wm70czxz6up4js6wxy35n7wpspn3piwqe56a5ozvdg987fwseoylthh2il2k0l0p0tj6zwv1ixodx7ak2ggbpf6ycnu943lusp0rlqd1grv31bpvid3aorrw54s6qkfo00fbuobjessb6fj3c8ocfx2u4urr49sdrefna4gypzn3yrdkmdds7zypoxb9yk12x6iyx09cg4trewjoj1ol5dl32ol1c0m2pxbwwa7tjxadwfdre11t81leix79x80xh240679n9pviaysbc1udgaw8ujkaq9ju0pgu1jd8nkmvsqqh44xdhtf5n2att115ff9y28yx62odlyjv9niqogz79v4ubo3v0ff233vfiiwob9tf4tajzr0vb2z57xvt98snncda0ppvpk4xu54d4n5u6ot5pslqc8lkc6joishlzd9z3g9v4vxe6pd2syas1frpisapn13sj94tt0e2zs0do3id43oaj3w5h8c3wagk96vkalz38xjnmc9i5p8ko0rzqoaotr5sxyq3don8xh2fiwrlkd4ntlrnge74qqamut1dob673x98izjnahq15npn0969qbhch1v1ikeuazv37a395q6orro5r1xbweemw19n713kbl68gvho6gziyqqduipdvg23bixf512mntj3816q2ic3mhbq2eqt3rf6rtvouqjh84zhkk50zyk87tyfkenm3d8j5t2agy6wxk76h35s54mocd2r7m1be6ik1feelxa0vdrztvilzf08aev6ee0q8q8pskaceq8v2jwe7phhw5wr6hfines8xc6dni4re7lisbol2gk00cvtl7rztn03zq0z69corswin5463hbjo8rd81bwq95xk0fqin6i9g76j3c3mrko367z2ff99goexesbk7hl62n5q7stxkkoe4ao66wgd6tlgexiutjdmf0enxc1yj0qdqw1qnrjbpk3how76bvhcbfsjrwoclhy2w2ozuymwhtmmpprtoya02m0kzqxjkrvfqj0otbbxfwt9niccduscqlqj5uze5tbr8tmo76xcqpsfxq0uxtebh9hqmhyoonpgnv79lf1l1mj31itoevljazofhfrvj9s7xyufsjrj92w5lxst73wxfavlqs3ts1u7gqweao4ycuv384ktpq3tebve7carsm3zidv2xxmy6qef482o60xsbcukjlggzif48ddxlr71rphxeks330t7fa234xyb4a1j5nqk4obdxv8ytxikkfaxi0s3tpsmq8etqw17pa04v1ghqrlvhj569nrz7u1uc2i22gaihio0rdiujio0yrsns29anj2ok7q214o72541q9stug4909qnaspuhp9uzq2u3ffyxx2py3joemx785ywp0kcasx4y7p37iiixfwmmxdqdikbeudtrdd8p3fxdqtto9ixez3dzsz0a2e3vlp998poqemaw2ly01t762vc2byi8dgvyanrkjko1bvrfr5r80e5antciul3wiyndfv9uzo3q49o8e7oxql9ahxz14ooo93don0c3m5uov6jq3r2z12uxh00485uaeuwfq2sn2dwjchwdwtrrrt5c3xm26gnh8yv6j54xh5ivv2hsdgjeblrgf9mx0a5fh84k5azmr7puub208w44rfjdowhyjrvifn0inlo1csuxou4ys0d8y6pqg3wfz8ekhmb4309gxrp9jop29n8gcc6h9mg7xu6188hnvgpdzxkiycbvmtfu9
ujhcjifrl4g9u2pl0dxm68svo7uogg86p2jejipbf8lh6g1xahc2sjn6uzqd6c7hasf3tz86tc3e3s1z038ro2s3jzfa8g5au99jfzv2dr75ey3r1gva9gjikxkabtbyuqituy9tjc3lbiux4x453gszcyuje42ogu3dfioh98inlnjz4rclpvagy5kneut9myovl4t6vk9dwsrr78ich1xufimbnnuioic5v377hrsvxi22q4ol5e6ladc6vodzsam16mljpluvi6jyupyovdi89xva4mh7b2ciqsccbha5oc074npdwldx4lkl0vq9vp5baoo0p65up615xqoqevui80u87nmixyzypcvzpq1ek0bxj9jcxtbmpebznlr2x06wuk1whced5wtz1aby5fmjmao9ypdx9qwj5lz03al086bbl9aaz7xeggp44c7jm7hvu1s8g0b9dzztuqib48d14uginmis2lx35k1ii0as3gkq4iijm1afudqd65echlaurc7j6d9kgfw2btcl5x5epuf8fcnazk 00:39:56.191 19:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:39:56.191 19:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:39:56.191 19:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:39:56.191 19:06:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:39:56.191 { 00:39:56.191 "subsystems": [ 00:39:56.191 { 00:39:56.191 "subsystem": "bdev", 00:39:56.191 "config": [ 00:39:56.191 { 00:39:56.191 "params": { 00:39:56.191 "trtype": "pcie", 00:39:56.191 "traddr": "0000:00:10.0", 00:39:56.191 "name": "Nvme0" 00:39:56.191 }, 00:39:56.191 "method": "bdev_nvme_attach_controller" 00:39:56.191 }, 00:39:56.191 { 00:39:56.191 "method": "bdev_wait_for_examine" 00:39:56.191 } 00:39:56.191 ] 00:39:56.191 } 00:39:56.191 ] 00:39:56.191 } 00:39:56.191 [2024-07-25 19:06:56.761760] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:39:56.191 [2024-07-25 19:06:56.762245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164792 ] 00:39:56.450 [2024-07-25 19:06:56.951006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:56.709 [2024-07-25 19:06:57.165513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:58.211  Copying: 4096/4096 [B] (average 4000 kBps) 00:39:58.211 00:39:58.211 19:06:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:39:58.211 19:06:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:39:58.211 19:06:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:39:58.211 19:06:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:39:58.211 { 00:39:58.211 "subsystems": [ 00:39:58.211 { 00:39:58.211 "subsystem": "bdev", 00:39:58.211 "config": [ 00:39:58.211 { 00:39:58.211 "params": { 00:39:58.211 "trtype": "pcie", 00:39:58.211 "traddr": "0000:00:10.0", 00:39:58.211 "name": "Nvme0" 00:39:58.211 }, 00:39:58.211 "method": "bdev_nvme_attach_controller" 00:39:58.211 }, 00:39:58.211 { 00:39:58.211 "method": "bdev_wait_for_examine" 00:39:58.212 } 00:39:58.212 ] 00:39:58.212 } 00:39:58.212 ] 00:39:58.212 } 00:39:58.212 [2024-07-25 19:06:58.748680] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
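The dd_rw_offset test running here checks that --seek on the output bdev and --skip on the input bdev address the same I/O unit. A condensed sketch of the flow, reconstructed from the commands in this log (dd.dump0 is assumed to already hold the 4096 generated bytes, and conf.json again stands in for /dev/fd/62):

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    TESTDIR=/home/vagrant/spdk_repo/spdk/test/dd
    count=1; seek=1; skip=1

    # write the 4096-byte pattern one I/O unit into the bdev
    $DD --if=$TESTDIR/dd.dump0 --ob=Nvme0n1 --seek=$seek --json conf.json
    # read it back, skipping the same number of I/O units on the input side
    $DD --ib=Nvme0n1 --of=$TESTDIR/dd.dump1 --skip=$skip --count=$count --json conf.json
    # the first 4096 bytes read back must match the original pattern
    read -rn4096 data       < "$TESTDIR/dd.dump0"
    read -rn4096 data_check < "$TESTDIR/dd.dump1"
    [[ $data_check == "$data" ]]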
00:39:58.212 [2024-07-25 19:06:58.748984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164827 ] 00:39:58.470 [2024-07-25 19:06:58.908785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:58.729 [2024-07-25 19:06:59.100051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:00.367  Copying: 4096/4096 [B] (average 4000 kBps) 00:40:00.367 00:40:00.367 19:07:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:40:00.368 19:07:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ s8drwe21dgm1si49vocmrtyo94xx2dpadsgbouu783yoe43t7am18u5ypxm7tmcbg9hy7qink891jfm8sqdn5ylt9mtpt8jr25i2cgbcdtgbxntcy4ata023qaar6az8rb1ktnwux98ukg0q135xy4uonwlkyshjg61xas6aimvj86cdjno5tdqx2anaf1rotjih4vimrx4foz45c3u0gt1qebllb58atsdq3fv3p6jattt8wu2kuohxxtoeu9lhbzmo4sou19mktzw8dyixohnlk8gx39miwfrw15kmkh1xuku55v7gwoqj76wp6i9ez5a2nxi6pkkgo50vxyvqks08liduv862k0kdozu736r15fzii9xtw1zv04kkyja3gqp1kh9oq9h5z7ggi9305bfut0nv3s5ntft5wk3v660ehfifsblm1kjcps4t0fduq27ovvmodt50bz37bfpameug5xnvs5cchhui5055k53fa5z7lj8smktpaevaj6ud1jq3tz8p3j8acf20zernlbwz53jvaythu4d8gn1nl6ines7nxbsshrsyvwnmmfiw1fbw4elz8mn7siglzsxpg70q5d1jzy9uo17qa789axm27f42vemsnkbjn2sopv7v2i9u2ya21erywuenbk06keiukvlcd86w7efskmwyceuuzjpa8ycstvkt5c4cdbdwie760koteqhye8jwc862aoc2nlzt15qiem69gqlzdzdyza8umv1pvz389jb45vehwxwayzd6jozt8ym32t1pbrmjsdko7gd4m4xtewlht0i9ij28bwxtlrob9b3coca2eizew1nkbkjuwrtiw4q1marr6v0yzd1fsvr5eck7s6dk45mp5ixmpj8w5k9n6cxib8pnzovl2cljbzo2qelo4ykd5vt0m9tmdmdxdva50or6gl31ugx4i1yi99u6u8zezl2fh75fdemaxyj24dfd2j236fg82b5xwwr8k2w58kdsm5nocj3rhld0xdm970587dkeqescnj7oqvzpykn44by3trsm3ztswr0sa3igif0lwqwot9ny3stn351y4bm6bc2n6dij6rvvvi5eixjxjo1k4xwx1i2dbscmznmm6dywnec0zlji8liqw1a4cllyr7byb4m2jnt4l4o0mdayclpq55h5vu7z7skgpi95i7qqnn2komctcutaqmcf94jvnf2kg93jzlpvdtg142grracowl107dhwsmoujiqybs64ziif4fqy56ye1xpgrint6r97x2hbjoedbbo7bu834j6en89b5hsjquw38f1yh2elw32hzkl7xqmdqybx26z25rsyk57sel9056s36z5cbs7wmzdhy550rxisyd2g8ds1hcry9m73xcn0t6bhzueucqlzk8wngby4w73u87v6tbbplzeso8ksye5z7lnw3e8rqii3hlbrgou0rp0vmcevwaq7tqaagvofe7sx5r6tcc4fts3e0lp6q1mg2f7oqkhhqp2wd9tte4zkmzs7s7ru778cru0foqc9sswaimeeycxvtce39yvcgb3r4inya22yacs6pmmwkr1xqi0j2gjxqidwgpw9hk6d7wwkd7n61zizaoilhpmh031y3ath2hufm8or3oz09ef0q6ismna30i2lbhcwqhkv02ssxlwoijle946h3dzfew6v2u1uha73s9f52zoy5vxs12nhq4wfzodsu9nq54uar0yjquyqqs48gommmlkm6duqywb03606evw5xyxfez08nm77i23wm70czxz6up4js6wxy35n7wpspn3piwqe56a5ozvdg987fwseoylthh2il2k0l0p0tj6zwv1ixodx7ak2ggbpf6ycnu943lusp0rlqd1grv31bpvid3aorrw54s6qkfo00fbuobjessb6fj3c8ocfx2u4urr49sdrefna4gypzn3yrdkmdds7zypoxb9yk12x6iyx09cg4trewjoj1ol5dl32ol1c0m2pxbwwa7tjxadwfdre11t81leix79x80xh240679n9pviaysbc1udgaw8ujkaq9ju0pgu1jd8nkmvsqqh44xdhtf5n2att115ff9y28yx62odlyjv9niqogz79v4ubo3v0ff233vfiiwob9tf4tajzr0vb2z57xvt98snncda0ppvpk4xu54d4n5u6ot5pslqc8lkc6joishlzd9z3g9v4vxe6pd2syas1frpisapn13sj94tt0e2zs0do3id43oaj3w5h8c3wagk96vkalz38xjnmc9i5p8ko0rzqoaotr5sxyq3don8xh2fiwrlkd4ntlrnge74qqamut1dob673x98izjnahq15npn0969qbhch1v1ikeuazv37a395q6orro5r1xbweemw19n713kbl68gvho6gziyqqduipdvg23bixf512mntj3816q2ic3mhbq2eqt3rf6rtvouqjh84zhkk50zyk87tyfkenm3d8j5t2agy6wxk76h35s54mocd2r7m1be6ik1feelxa0vdrztvilzf08aev6ee0q8q8pskaceq8v2jwe7phhw5wr6hfines8xc6dni4re7lisbol2gk00cvtl7rztn03zq0z69corswin5463hbjo8rd81bwq95xk0fqin6i9g76j3c3mrko367z2ff99goexesbk7hl62n5q7stxkkoe4ao66wgd6tlgexiutjdmf0enxc1yj0qdqw1qnrjbpk3how76
bvhcbfsjrwoclhy2w2ozuymwhtmmpprtoya02m0kzqxjkrvfqj0otbbxfwt9niccduscqlqj5uze5tbr8tmo76xcqpsfxq0uxtebh9hqmhyoonpgnv79lf1l1mj31itoevljazofhfrvj9s7xyufsjrj92w5lxst73wxfavlqs3ts1u7gqweao4ycuv384ktpq3tebve7carsm3zidv2xxmy6qef482o60xsbcukjlggzif48ddxlr71rphxeks330t7fa234xyb4a1j5nqk4obdxv8ytxikkfaxi0s3tpsmq8etqw17pa04v1ghqrlvhj569nrz7u1uc2i22gaihio0rdiujio0yrsns29anj2ok7q214o72541q9stug4909qnaspuhp9uzq2u3ffyxx2py3joemx785ywp0kcasx4y7p37iiixfwmmxdqdikbeudtrdd8p3fxdqtto9ixez3dzsz0a2e3vlp998poqemaw2ly01t762vc2byi8dgvyanrkjko1bvrfr5r80e5antciul3wiyndfv9uzo3q49o8e7oxql9ahxz14ooo93don0c3m5uov6jq3r2z12uxh00485uaeuwfq2sn2dwjchwdwtrrrt5c3xm26gnh8yv6j54xh5ivv2hsdgjeblrgf9mx0a5fh84k5azmr7puub208w44rfjdowhyjrvifn0inlo1csuxou4ys0d8y6pqg3wfz8ekhmb4309gxrp9jop29n8gcc6h9mg7xu6188hnvgpdzxkiycbvmtfu9ujhcjifrl4g9u2pl0dxm68svo7uogg86p2jejipbf8lh6g1xahc2sjn6uzqd6c7hasf3tz86tc3e3s1z038ro2s3jzfa8g5au99jfzv2dr75ey3r1gva9gjikxkabtbyuqituy9tjc3lbiux4x453gszcyuje42ogu3dfioh98inlnjz4rclpvagy5kneut9myovl4t6vk9dwsrr78ich1xufimbnnuioic5v377hrsvxi22q4ol5e6ladc6vodzsam16mljpluvi6jyupyovdi89xva4mh7b2ciqsccbha5oc074npdwldx4lkl0vq9vp5baoo0p65up615xqoqevui80u87nmixyzypcvzpq1ek0bxj9jcxtbmpebznlr2x06wuk1whced5wtz1aby5fmjmao9ypdx9qwj5lz03al086bbl9aaz7xeggp44c7jm7hvu1s8g0b9dzztuqib48d14uginmis2lx35k1ii0as3gkq4iijm1afudqd65echlaurc7j6d9kgfw2btcl5x5epuf8fcnazk == \s\8\d\r\w\e\2\1\d\g\m\1\s\i\4\9\v\o\c\m\r\t\y\o\9\4\x\x\2\d\p\a\d\s\g\b\o\u\u\7\8\3\y\o\e\4\3\t\7\a\m\1\8\u\5\y\p\x\m\7\t\m\c\b\g\9\h\y\7\q\i\n\k\8\9\1\j\f\m\8\s\q\d\n\5\y\l\t\9\m\t\p\t\8\j\r\2\5\i\2\c\g\b\c\d\t\g\b\x\n\t\c\y\4\a\t\a\0\2\3\q\a\a\r\6\a\z\8\r\b\1\k\t\n\w\u\x\9\8\u\k\g\0\q\1\3\5\x\y\4\u\o\n\w\l\k\y\s\h\j\g\6\1\x\a\s\6\a\i\m\v\j\8\6\c\d\j\n\o\5\t\d\q\x\2\a\n\a\f\1\r\o\t\j\i\h\4\v\i\m\r\x\4\f\o\z\4\5\c\3\u\0\g\t\1\q\e\b\l\l\b\5\8\a\t\s\d\q\3\f\v\3\p\6\j\a\t\t\t\8\w\u\2\k\u\o\h\x\x\t\o\e\u\9\l\h\b\z\m\o\4\s\o\u\1\9\m\k\t\z\w\8\d\y\i\x\o\h\n\l\k\8\g\x\3\9\m\i\w\f\r\w\1\5\k\m\k\h\1\x\u\k\u\5\5\v\7\g\w\o\q\j\7\6\w\p\6\i\9\e\z\5\a\2\n\x\i\6\p\k\k\g\o\5\0\v\x\y\v\q\k\s\0\8\l\i\d\u\v\8\6\2\k\0\k\d\o\z\u\7\3\6\r\1\5\f\z\i\i\9\x\t\w\1\z\v\0\4\k\k\y\j\a\3\g\q\p\1\k\h\9\o\q\9\h\5\z\7\g\g\i\9\3\0\5\b\f\u\t\0\n\v\3\s\5\n\t\f\t\5\w\k\3\v\6\6\0\e\h\f\i\f\s\b\l\m\1\k\j\c\p\s\4\t\0\f\d\u\q\2\7\o\v\v\m\o\d\t\5\0\b\z\3\7\b\f\p\a\m\e\u\g\5\x\n\v\s\5\c\c\h\h\u\i\5\0\5\5\k\5\3\f\a\5\z\7\l\j\8\s\m\k\t\p\a\e\v\a\j\6\u\d\1\j\q\3\t\z\8\p\3\j\8\a\c\f\2\0\z\e\r\n\l\b\w\z\5\3\j\v\a\y\t\h\u\4\d\8\g\n\1\n\l\6\i\n\e\s\7\n\x\b\s\s\h\r\s\y\v\w\n\m\m\f\i\w\1\f\b\w\4\e\l\z\8\m\n\7\s\i\g\l\z\s\x\p\g\7\0\q\5\d\1\j\z\y\9\u\o\1\7\q\a\7\8\9\a\x\m\2\7\f\4\2\v\e\m\s\n\k\b\j\n\2\s\o\p\v\7\v\2\i\9\u\2\y\a\2\1\e\r\y\w\u\e\n\b\k\0\6\k\e\i\u\k\v\l\c\d\8\6\w\7\e\f\s\k\m\w\y\c\e\u\u\z\j\p\a\8\y\c\s\t\v\k\t\5\c\4\c\d\b\d\w\i\e\7\6\0\k\o\t\e\q\h\y\e\8\j\w\c\8\6\2\a\o\c\2\n\l\z\t\1\5\q\i\e\m\6\9\g\q\l\z\d\z\d\y\z\a\8\u\m\v\1\p\v\z\3\8\9\j\b\4\5\v\e\h\w\x\w\a\y\z\d\6\j\o\z\t\8\y\m\3\2\t\1\p\b\r\m\j\s\d\k\o\7\g\d\4\m\4\x\t\e\w\l\h\t\0\i\9\i\j\2\8\b\w\x\t\l\r\o\b\9\b\3\c\o\c\a\2\e\i\z\e\w\1\n\k\b\k\j\u\w\r\t\i\w\4\q\1\m\a\r\r\6\v\0\y\z\d\1\f\s\v\r\5\e\c\k\7\s\6\d\k\4\5\m\p\5\i\x\m\p\j\8\w\5\k\9\n\6\c\x\i\b\8\p\n\z\o\v\l\2\c\l\j\b\z\o\2\q\e\l\o\4\y\k\d\5\v\t\0\m\9\t\m\d\m\d\x\d\v\a\5\0\o\r\6\g\l\3\1\u\g\x\4\i\1\y\i\9\9\u\6\u\8\z\e\z\l\2\f\h\7\5\f\d\e\m\a\x\y\j\2\4\d\f\d\2\j\2\3\6\f\g\8\2\b\5\x\w\w\r\8\k\2\w\5\8\k\d\s\m\5\n\o\c\j\3\r\h\l\d\0\x\d\m\9\7\0\5\8\7\d\k\e\q\e\s\c\n\j\7\o\q\v\z\p\y\k\n\4\4\b\y\3\t\r\s\m\3\z\t\s\w\r\0\s\a\3\i\g\i\f\0\l\w\q\w\o\t\9\n\y\3\s\t\n\3\5\1\y\4\b\m\6\b\c\2\n\6\d\i\j\6\r\v\v\v\i\5\e\i\x\j\x\j\
o\1\k\4\x\w\x\1\i\2\d\b\s\c\m\z\n\m\m\6\d\y\w\n\e\c\0\z\l\j\i\8\l\i\q\w\1\a\4\c\l\l\y\r\7\b\y\b\4\m\2\j\n\t\4\l\4\o\0\m\d\a\y\c\l\p\q\5\5\h\5\v\u\7\z\7\s\k\g\p\i\9\5\i\7\q\q\n\n\2\k\o\m\c\t\c\u\t\a\q\m\c\f\9\4\j\v\n\f\2\k\g\9\3\j\z\l\p\v\d\t\g\1\4\2\g\r\r\a\c\o\w\l\1\0\7\d\h\w\s\m\o\u\j\i\q\y\b\s\6\4\z\i\i\f\4\f\q\y\5\6\y\e\1\x\p\g\r\i\n\t\6\r\9\7\x\2\h\b\j\o\e\d\b\b\o\7\b\u\8\3\4\j\6\e\n\8\9\b\5\h\s\j\q\u\w\3\8\f\1\y\h\2\e\l\w\3\2\h\z\k\l\7\x\q\m\d\q\y\b\x\2\6\z\2\5\r\s\y\k\5\7\s\e\l\9\0\5\6\s\3\6\z\5\c\b\s\7\w\m\z\d\h\y\5\5\0\r\x\i\s\y\d\2\g\8\d\s\1\h\c\r\y\9\m\7\3\x\c\n\0\t\6\b\h\z\u\e\u\c\q\l\z\k\8\w\n\g\b\y\4\w\7\3\u\8\7\v\6\t\b\b\p\l\z\e\s\o\8\k\s\y\e\5\z\7\l\n\w\3\e\8\r\q\i\i\3\h\l\b\r\g\o\u\0\r\p\0\v\m\c\e\v\w\a\q\7\t\q\a\a\g\v\o\f\e\7\s\x\5\r\6\t\c\c\4\f\t\s\3\e\0\l\p\6\q\1\m\g\2\f\7\o\q\k\h\h\q\p\2\w\d\9\t\t\e\4\z\k\m\z\s\7\s\7\r\u\7\7\8\c\r\u\0\f\o\q\c\9\s\s\w\a\i\m\e\e\y\c\x\v\t\c\e\3\9\y\v\c\g\b\3\r\4\i\n\y\a\2\2\y\a\c\s\6\p\m\m\w\k\r\1\x\q\i\0\j\2\g\j\x\q\i\d\w\g\p\w\9\h\k\6\d\7\w\w\k\d\7\n\6\1\z\i\z\a\o\i\l\h\p\m\h\0\3\1\y\3\a\t\h\2\h\u\f\m\8\o\r\3\o\z\0\9\e\f\0\q\6\i\s\m\n\a\3\0\i\2\l\b\h\c\w\q\h\k\v\0\2\s\s\x\l\w\o\i\j\l\e\9\4\6\h\3\d\z\f\e\w\6\v\2\u\1\u\h\a\7\3\s\9\f\5\2\z\o\y\5\v\x\s\1\2\n\h\q\4\w\f\z\o\d\s\u\9\n\q\5\4\u\a\r\0\y\j\q\u\y\q\q\s\4\8\g\o\m\m\m\l\k\m\6\d\u\q\y\w\b\0\3\6\0\6\e\v\w\5\x\y\x\f\e\z\0\8\n\m\7\7\i\2\3\w\m\7\0\c\z\x\z\6\u\p\4\j\s\6\w\x\y\3\5\n\7\w\p\s\p\n\3\p\i\w\q\e\5\6\a\5\o\z\v\d\g\9\8\7\f\w\s\e\o\y\l\t\h\h\2\i\l\2\k\0\l\0\p\0\t\j\6\z\w\v\1\i\x\o\d\x\7\a\k\2\g\g\b\p\f\6\y\c\n\u\9\4\3\l\u\s\p\0\r\l\q\d\1\g\r\v\3\1\b\p\v\i\d\3\a\o\r\r\w\5\4\s\6\q\k\f\o\0\0\f\b\u\o\b\j\e\s\s\b\6\f\j\3\c\8\o\c\f\x\2\u\4\u\r\r\4\9\s\d\r\e\f\n\a\4\g\y\p\z\n\3\y\r\d\k\m\d\d\s\7\z\y\p\o\x\b\9\y\k\1\2\x\6\i\y\x\0\9\c\g\4\t\r\e\w\j\o\j\1\o\l\5\d\l\3\2\o\l\1\c\0\m\2\p\x\b\w\w\a\7\t\j\x\a\d\w\f\d\r\e\1\1\t\8\1\l\e\i\x\7\9\x\8\0\x\h\2\4\0\6\7\9\n\9\p\v\i\a\y\s\b\c\1\u\d\g\a\w\8\u\j\k\a\q\9\j\u\0\p\g\u\1\j\d\8\n\k\m\v\s\q\q\h\4\4\x\d\h\t\f\5\n\2\a\t\t\1\1\5\f\f\9\y\2\8\y\x\6\2\o\d\l\y\j\v\9\n\i\q\o\g\z\7\9\v\4\u\b\o\3\v\0\f\f\2\3\3\v\f\i\i\w\o\b\9\t\f\4\t\a\j\z\r\0\v\b\2\z\5\7\x\v\t\9\8\s\n\n\c\d\a\0\p\p\v\p\k\4\x\u\5\4\d\4\n\5\u\6\o\t\5\p\s\l\q\c\8\l\k\c\6\j\o\i\s\h\l\z\d\9\z\3\g\9\v\4\v\x\e\6\p\d\2\s\y\a\s\1\f\r\p\i\s\a\p\n\1\3\s\j\9\4\t\t\0\e\2\z\s\0\d\o\3\i\d\4\3\o\a\j\3\w\5\h\8\c\3\w\a\g\k\9\6\v\k\a\l\z\3\8\x\j\n\m\c\9\i\5\p\8\k\o\0\r\z\q\o\a\o\t\r\5\s\x\y\q\3\d\o\n\8\x\h\2\f\i\w\r\l\k\d\4\n\t\l\r\n\g\e\7\4\q\q\a\m\u\t\1\d\o\b\6\7\3\x\9\8\i\z\j\n\a\h\q\1\5\n\p\n\0\9\6\9\q\b\h\c\h\1\v\1\i\k\e\u\a\z\v\3\7\a\3\9\5\q\6\o\r\r\o\5\r\1\x\b\w\e\e\m\w\1\9\n\7\1\3\k\b\l\6\8\g\v\h\o\6\g\z\i\y\q\q\d\u\i\p\d\v\g\2\3\b\i\x\f\5\1\2\m\n\t\j\3\8\1\6\q\2\i\c\3\m\h\b\q\2\e\q\t\3\r\f\6\r\t\v\o\u\q\j\h\8\4\z\h\k\k\5\0\z\y\k\8\7\t\y\f\k\e\n\m\3\d\8\j\5\t\2\a\g\y\6\w\x\k\7\6\h\3\5\s\5\4\m\o\c\d\2\r\7\m\1\b\e\6\i\k\1\f\e\e\l\x\a\0\v\d\r\z\t\v\i\l\z\f\0\8\a\e\v\6\e\e\0\q\8\q\8\p\s\k\a\c\e\q\8\v\2\j\w\e\7\p\h\h\w\5\w\r\6\h\f\i\n\e\s\8\x\c\6\d\n\i\4\r\e\7\l\i\s\b\o\l\2\g\k\0\0\c\v\t\l\7\r\z\t\n\0\3\z\q\0\z\6\9\c\o\r\s\w\i\n\5\4\6\3\h\b\j\o\8\r\d\8\1\b\w\q\9\5\x\k\0\f\q\i\n\6\i\9\g\7\6\j\3\c\3\m\r\k\o\3\6\7\z\2\f\f\9\9\g\o\e\x\e\s\b\k\7\h\l\6\2\n\5\q\7\s\t\x\k\k\o\e\4\a\o\6\6\w\g\d\6\t\l\g\e\x\i\u\t\j\d\m\f\0\e\n\x\c\1\y\j\0\q\d\q\w\1\q\n\r\j\b\p\k\3\h\o\w\7\6\b\v\h\c\b\f\s\j\r\w\o\c\l\h\y\2\w\2\o\z\u\y\m\w\h\t\m\m\p\p\r\t\o\y\a\0\2\m\0\k\z\q\x\j\k\r\v\f\q\j\0\o\t\b\b\x\f\w\t\9\n\i\c\c\d\u\s\c\q\l\q\j\5\u\z\e\5\t\b\r\8\t\m\o\7\6\x\c\q\p\s\f\x\q\0\u\x\t\e\b\h\9\h\q\m\h\y\o\o\n\p\g\n\v\7\9\l\f\1\l\1\m\j
\3\1\i\t\o\e\v\l\j\a\z\o\f\h\f\r\v\j\9\s\7\x\y\u\f\s\j\r\j\9\2\w\5\l\x\s\t\7\3\w\x\f\a\v\l\q\s\3\t\s\1\u\7\g\q\w\e\a\o\4\y\c\u\v\3\8\4\k\t\p\q\3\t\e\b\v\e\7\c\a\r\s\m\3\z\i\d\v\2\x\x\m\y\6\q\e\f\4\8\2\o\6\0\x\s\b\c\u\k\j\l\g\g\z\i\f\4\8\d\d\x\l\r\7\1\r\p\h\x\e\k\s\3\3\0\t\7\f\a\2\3\4\x\y\b\4\a\1\j\5\n\q\k\4\o\b\d\x\v\8\y\t\x\i\k\k\f\a\x\i\0\s\3\t\p\s\m\q\8\e\t\q\w\1\7\p\a\0\4\v\1\g\h\q\r\l\v\h\j\5\6\9\n\r\z\7\u\1\u\c\2\i\2\2\g\a\i\h\i\o\0\r\d\i\u\j\i\o\0\y\r\s\n\s\2\9\a\n\j\2\o\k\7\q\2\1\4\o\7\2\5\4\1\q\9\s\t\u\g\4\9\0\9\q\n\a\s\p\u\h\p\9\u\z\q\2\u\3\f\f\y\x\x\2\p\y\3\j\o\e\m\x\7\8\5\y\w\p\0\k\c\a\s\x\4\y\7\p\3\7\i\i\i\x\f\w\m\m\x\d\q\d\i\k\b\e\u\d\t\r\d\d\8\p\3\f\x\d\q\t\t\o\9\i\x\e\z\3\d\z\s\z\0\a\2\e\3\v\l\p\9\9\8\p\o\q\e\m\a\w\2\l\y\0\1\t\7\6\2\v\c\2\b\y\i\8\d\g\v\y\a\n\r\k\j\k\o\1\b\v\r\f\r\5\r\8\0\e\5\a\n\t\c\i\u\l\3\w\i\y\n\d\f\v\9\u\z\o\3\q\4\9\o\8\e\7\o\x\q\l\9\a\h\x\z\1\4\o\o\o\9\3\d\o\n\0\c\3\m\5\u\o\v\6\j\q\3\r\2\z\1\2\u\x\h\0\0\4\8\5\u\a\e\u\w\f\q\2\s\n\2\d\w\j\c\h\w\d\w\t\r\r\r\t\5\c\3\x\m\2\6\g\n\h\8\y\v\6\j\5\4\x\h\5\i\v\v\2\h\s\d\g\j\e\b\l\r\g\f\9\m\x\0\a\5\f\h\8\4\k\5\a\z\m\r\7\p\u\u\b\2\0\8\w\4\4\r\f\j\d\o\w\h\y\j\r\v\i\f\n\0\i\n\l\o\1\c\s\u\x\o\u\4\y\s\0\d\8\y\6\p\q\g\3\w\f\z\8\e\k\h\m\b\4\3\0\9\g\x\r\p\9\j\o\p\2\9\n\8\g\c\c\6\h\9\m\g\7\x\u\6\1\8\8\h\n\v\g\p\d\z\x\k\i\y\c\b\v\m\t\f\u\9\u\j\h\c\j\i\f\r\l\4\g\9\u\2\p\l\0\d\x\m\6\8\s\v\o\7\u\o\g\g\8\6\p\2\j\e\j\i\p\b\f\8\l\h\6\g\1\x\a\h\c\2\s\j\n\6\u\z\q\d\6\c\7\h\a\s\f\3\t\z\8\6\t\c\3\e\3\s\1\z\0\3\8\r\o\2\s\3\j\z\f\a\8\g\5\a\u\9\9\j\f\z\v\2\d\r\7\5\e\y\3\r\1\g\v\a\9\g\j\i\k\x\k\a\b\t\b\y\u\q\i\t\u\y\9\t\j\c\3\l\b\i\u\x\4\x\4\5\3\g\s\z\c\y\u\j\e\4\2\o\g\u\3\d\f\i\o\h\9\8\i\n\l\n\j\z\4\r\c\l\p\v\a\g\y\5\k\n\e\u\t\9\m\y\o\v\l\4\t\6\v\k\9\d\w\s\r\r\7\8\i\c\h\1\x\u\f\i\m\b\n\n\u\i\o\i\c\5\v\3\7\7\h\r\s\v\x\i\2\2\q\4\o\l\5\e\6\l\a\d\c\6\v\o\d\z\s\a\m\1\6\m\l\j\p\l\u\v\i\6\j\y\u\p\y\o\v\d\i\8\9\x\v\a\4\m\h\7\b\2\c\i\q\s\c\c\b\h\a\5\o\c\0\7\4\n\p\d\w\l\d\x\4\l\k\l\0\v\q\9\v\p\5\b\a\o\o\0\p\6\5\u\p\6\1\5\x\q\o\q\e\v\u\i\8\0\u\8\7\n\m\i\x\y\z\y\p\c\v\z\p\q\1\e\k\0\b\x\j\9\j\c\x\t\b\m\p\e\b\z\n\l\r\2\x\0\6\w\u\k\1\w\h\c\e\d\5\w\t\z\1\a\b\y\5\f\m\j\m\a\o\9\y\p\d\x\9\q\w\j\5\l\z\0\3\a\l\0\8\6\b\b\l\9\a\a\z\7\x\e\g\g\p\4\4\c\7\j\m\7\h\v\u\1\s\8\g\0\b\9\d\z\z\t\u\q\i\b\4\8\d\1\4\u\g\i\n\m\i\s\2\l\x\3\5\k\1\i\i\0\a\s\3\g\k\q\4\i\i\j\m\1\a\f\u\d\q\d\6\5\e\c\h\l\a\u\r\c\7\j\6\d\9\k\g\f\w\2\b\t\c\l\5\x\5\e\p\u\f\8\f\c\n\a\z\k ]] 00:40:00.368 00:40:00.368 real 0m4.138s 00:40:00.368 user 0m3.480s 00:40:00.368 sys 0m0.503s 00:40:00.368 19:07:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:00.368 19:07:00 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:40:00.368 ************************************ 00:40:00.368 END TEST dd_rw_offset 00:40:00.368 ************************************ 00:40:00.368 19:07:00 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:40:00.368 19:07:00 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:40:00.368 19:07:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:00.368 19:07:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:00.368 19:07:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:40:00.368 19:07:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:00.368 19:07:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:40:00.368 19:07:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:40:00.368 19:07:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:40:00.368 19:07:00 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:00.368 19:07:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:40:00.368 { 00:40:00.368 "subsystems": [ 00:40:00.368 { 00:40:00.368 "subsystem": "bdev", 00:40:00.368 "config": [ 00:40:00.368 { 00:40:00.368 "params": { 00:40:00.368 "trtype": "pcie", 00:40:00.368 "traddr": "0000:00:10.0", 00:40:00.368 "name": "Nvme0" 00:40:00.368 }, 00:40:00.368 "method": "bdev_nvme_attach_controller" 00:40:00.368 }, 00:40:00.368 { 00:40:00.368 "method": "bdev_wait_for_examine" 00:40:00.368 } 00:40:00.368 ] 00:40:00.368 } 00:40:00.368 ] 00:40:00.368 } 00:40:00.368 [2024-07-25 19:07:00.902590] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:40:00.368 [2024-07-25 19:07:00.902970] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164872 ] 00:40:00.626 [2024-07-25 19:07:01.085872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:00.884 [2024-07-25 19:07:01.288729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:02.516  Copying: 1024/1024 [kB] (average 1000 MBps) 00:40:02.516 00:40:02.516 19:07:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:02.516 ************************************ 00:40:02.516 END TEST spdk_dd_basic_rw 00:40:02.516 ************************************ 00:40:02.516 00:40:02.516 real 0m51.406s 00:40:02.516 user 0m42.240s 00:40:02.516 sys 0m7.499s 00:40:02.516 19:07:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:02.516 19:07:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:40:02.517 19:07:02 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:40:02.517 19:07:02 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:02.517 19:07:02 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:02.517 19:07:02 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:40:02.517 ************************************ 00:40:02.517 START TEST spdk_dd_posix 00:40:02.517 ************************************ 00:40:02.517 19:07:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:40:02.776 * Looking for test storage... 
00:40:02.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:40:02.776 * First test run, using AIO 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:02.776 ************************************ 00:40:02.776 START TEST dd_flag_append 00:40:02.776 ************************************ 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:40:02.776 19:07:03 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:40:02.777 19:07:03 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=ma6do40hmw2o8qbsoxm57063is7gq0o2 00:40:02.777 19:07:03 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:40:02.777 19:07:03 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:40:02.777 19:07:03 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:40:02.777 19:07:03 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=96q8v8n19a6ehnlahesnnkkhjqx2av1f 00:40:02.777 19:07:03 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s ma6do40hmw2o8qbsoxm57063is7gq0o2 00:40:02.777 19:07:03 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 96q8v8n19a6ehnlahesnnkkhjqx2av1f 00:40:02.777 19:07:03 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:40:02.777 [2024-07-25 19:07:03.218949] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
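The dd_flag_append test above writes one 32-byte string into each dump file and then copies dd.dump0 onto dd.dump1 with --oflag=append; it passes if dd.dump1 ends up holding the second string immediately followed by the first. A sketch using the two strings generated in this run:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    TESTDIR=/home/vagrant/spdk_repo/spdk/test/dd

    dump0=ma6do40hmw2o8qbsoxm57063is7gq0o2   # written to dd.dump0 in this run
    dump1=96q8v8n19a6ehnlahesnnkkhjqx2av1f   # written to dd.dump1 in this run
    printf %s "$dump0" > "$TESTDIR/dd.dump0"
    printf %s "$dump1" > "$TESTDIR/dd.dump1"

    # append dd.dump0 onto dd.dump1 instead of overwriting it
    $DD --if="$TESTDIR/dd.dump0" --of="$TESTDIR/dd.dump1" --oflag=append

    # dd.dump1 must now be the original dump1 string followed by the dump0 string
    [[ $(cat "$TESTDIR/dd.dump1") == "$dump1$dump0" ]]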
00:40:02.777 [2024-07-25 19:07:03.219192] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164955 ] 00:40:03.034 [2024-07-25 19:07:03.402136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:03.291 [2024-07-25 19:07:03.695981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:04.923  Copying: 32/32 [B] (average 31 kBps) 00:40:04.923 00:40:04.923 19:07:05 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 96q8v8n19a6ehnlahesnnkkhjqx2av1fma6do40hmw2o8qbsoxm57063is7gq0o2 == \9\6\q\8\v\8\n\1\9\a\6\e\h\n\l\a\h\e\s\n\n\k\k\h\j\q\x\2\a\v\1\f\m\a\6\d\o\4\0\h\m\w\2\o\8\q\b\s\o\x\m\5\7\0\6\3\i\s\7\g\q\0\o\2 ]] 00:40:04.923 00:40:04.923 real 0m2.334s 00:40:04.923 user 0m1.897s 00:40:04.923 sys 0m0.308s 00:40:04.923 19:07:05 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:04.923 ************************************ 00:40:04.923 END TEST dd_flag_append 00:40:04.923 ************************************ 00:40:04.923 19:07:05 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:40:05.183 19:07:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:40:05.183 19:07:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:05.183 19:07:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:05.183 19:07:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:05.183 ************************************ 00:40:05.183 START TEST dd_flag_directory 00:40:05.183 ************************************ 00:40:05.183 19:07:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:40:05.183 19:07:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:05.183 19:07:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:40:05.183 19:07:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:05.183 19:07:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:05.183 19:07:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:05.183 19:07:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:05.183 19:07:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:05.183 19:07:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:05.183 19:07:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:05.183 19:07:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 
-- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:05.183 19:07:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:05.183 19:07:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:05.183 [2024-07-25 19:07:05.619409] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:40:05.183 [2024-07-25 19:07:05.619668] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165009 ] 00:40:05.455 [2024-07-25 19:07:05.802862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:05.755 [2024-07-25 19:07:06.101914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:06.040 [2024-07-25 19:07:06.437395] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:06.040 [2024-07-25 19:07:06.437488] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:06.040 [2024-07-25 19:07:06.437523] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:06.979 [2024-07-25 19:07:07.212314] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:40:07.238 19:07:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:40:07.238 19:07:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:07.238 19:07:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:40:07.238 19:07:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:40:07.238 19:07:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:40:07.238 19:07:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:07.238 19:07:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:40:07.238 19:07:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:40:07.238 19:07:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:40:07.238 19:07:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:07.238 19:07:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:07.238 19:07:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:07.238 19:07:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:07.238 19:07:07 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:07.238 19:07:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:07.238 19:07:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:07.238 19:07:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:07.238 19:07:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:40:07.238 [2024-07-25 19:07:07.673144] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:40:07.238 [2024-07-25 19:07:07.673294] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165036 ] 00:40:07.498 [2024-07-25 19:07:07.828158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:07.498 [2024-07-25 19:07:08.023518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:08.067 [2024-07-25 19:07:08.342156] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:08.067 [2024-07-25 19:07:08.342240] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:08.067 [2024-07-25 19:07:08.342272] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:08.635 [2024-07-25 19:07:09.115277] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:40:09.204 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:40:09.204 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:09.204 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:40:09.204 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:40:09.204 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:40:09.204 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:09.204 00:40:09.204 real 0m3.998s 00:40:09.204 user 0m3.301s 00:40:09.204 sys 0m0.495s 00:40:09.204 ************************************ 00:40:09.204 END TEST dd_flag_directory 00:40:09.204 ************************************ 00:40:09.204 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:09.204 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:40:09.204 19:07:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:40:09.204 19:07:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:09.204 19:07:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:09.204 19:07:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:09.204 ************************************ 
00:40:09.204 START TEST dd_flag_nofollow 00:40:09.204 ************************************ 00:40:09.204 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:40:09.204 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:40:09.204 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:40:09.204 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:40:09.204 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:40:09.204 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:09.204 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:40:09.205 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:09.205 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:09.205 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:09.205 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:09.205 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:09.205 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:09.205 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:09.205 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:09.205 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:09.205 19:07:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:09.205 [2024-07-25 19:07:09.689180] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:40:09.205 [2024-07-25 19:07:09.689419] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165081 ] 00:40:09.464 [2024-07-25 19:07:09.870197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:09.723 [2024-07-25 19:07:10.077150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:09.982 [2024-07-25 19:07:10.395500] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:40:09.982 [2024-07-25 19:07:10.395585] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:40:09.982 [2024-07-25 19:07:10.395621] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:10.919 [2024-07-25 19:07:11.170523] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:40:11.178 19:07:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:40:11.178 19:07:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:11.178 19:07:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:40:11.179 19:07:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:40:11.179 19:07:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:40:11.179 19:07:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:11.179 19:07:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:40:11.179 19:07:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:40:11.179 19:07:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:40:11.179 19:07:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:11.179 19:07:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:11.179 19:07:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:11.179 19:07:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:11.179 19:07:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:11.179 19:07:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:11.179 19:07:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:11.179 19:07:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:40:11.179 19:07:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:40:11.179 [2024-07-25 19:07:11.732897] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:40:11.179 [2024-07-25 19:07:11.733124] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165117 ] 00:40:11.438 [2024-07-25 19:07:11.914551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:11.697 [2024-07-25 19:07:12.159081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:12.265 [2024-07-25 19:07:12.538042] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:40:12.265 [2024-07-25 19:07:12.538140] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:40:12.265 [2024-07-25 19:07:12.538178] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:12.833 [2024-07-25 19:07:13.372236] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:40:13.400 19:07:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:40:13.400 19:07:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:13.400 19:07:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:40:13.400 19:07:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:40:13.400 19:07:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:40:13.400 19:07:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:13.400 19:07:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:40:13.400 19:07:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:40:13.400 19:07:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:40:13.400 19:07:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:13.400 [2024-07-25 19:07:13.939043] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:40:13.400 [2024-07-25 19:07:13.939722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165143 ] 00:40:13.658 [2024-07-25 19:07:14.118354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:13.917 [2024-07-25 19:07:14.350615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:15.551  Copying: 512/512 [B] (average 500 kBps) 00:40:15.551 00:40:15.551 19:07:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 4ik6jqg51yakkc29i1nejv7mz4x6ew6qwyxhalhp0nf54j4of5am13ywnyg8g4xg0075ksvxz26qr3ifkej59bugcn5jmivisiw29013ivw7mvr4sx45w6t1vf4ji18r5gvuvtoiqdipmi0uyw14hx1k744e24gfux5kmnddg8vfouzk9u7b91makbn518q0mtdohh68vgy8pulpjv8aydew2v4wumvo93c4vc4ij46g8wio833zcd31vnq0ruhsw1b1mnz9mwki1y2csa4yu79jkg1ts5278w93viu32u4yqs2977ony056u1kc810nvvpqodlxs4d7cfs1vaowqpyaksh27rxg3da5t94pb1up73c9ffszfgojjt7qw5yj3z6lzn4v7bbn37bf5jfp899bbu0mwshhy59mlpg3hnjy74v9lx8cd75b5jgkqo38aw9qfa40003uwziajw69op6r8haqa6i3umwb2ms0munn4ombmwhnkb6aljc87iqx == \4\i\k\6\j\q\g\5\1\y\a\k\k\c\2\9\i\1\n\e\j\v\7\m\z\4\x\6\e\w\6\q\w\y\x\h\a\l\h\p\0\n\f\5\4\j\4\o\f\5\a\m\1\3\y\w\n\y\g\8\g\4\x\g\0\0\7\5\k\s\v\x\z\2\6\q\r\3\i\f\k\e\j\5\9\b\u\g\c\n\5\j\m\i\v\i\s\i\w\2\9\0\1\3\i\v\w\7\m\v\r\4\s\x\4\5\w\6\t\1\v\f\4\j\i\1\8\r\5\g\v\u\v\t\o\i\q\d\i\p\m\i\0\u\y\w\1\4\h\x\1\k\7\4\4\e\2\4\g\f\u\x\5\k\m\n\d\d\g\8\v\f\o\u\z\k\9\u\7\b\9\1\m\a\k\b\n\5\1\8\q\0\m\t\d\o\h\h\6\8\v\g\y\8\p\u\l\p\j\v\8\a\y\d\e\w\2\v\4\w\u\m\v\o\9\3\c\4\v\c\4\i\j\4\6\g\8\w\i\o\8\3\3\z\c\d\3\1\v\n\q\0\r\u\h\s\w\1\b\1\m\n\z\9\m\w\k\i\1\y\2\c\s\a\4\y\u\7\9\j\k\g\1\t\s\5\2\7\8\w\9\3\v\i\u\3\2\u\4\y\q\s\2\9\7\7\o\n\y\0\5\6\u\1\k\c\8\1\0\n\v\v\p\q\o\d\l\x\s\4\d\7\c\f\s\1\v\a\o\w\q\p\y\a\k\s\h\2\7\r\x\g\3\d\a\5\t\9\4\p\b\1\u\p\7\3\c\9\f\f\s\z\f\g\o\j\j\t\7\q\w\5\y\j\3\z\6\l\z\n\4\v\7\b\b\n\3\7\b\f\5\j\f\p\8\9\9\b\b\u\0\m\w\s\h\h\y\5\9\m\l\p\g\3\h\n\j\y\7\4\v\9\l\x\8\c\d\7\5\b\5\j\g\k\q\o\3\8\a\w\9\q\f\a\4\0\0\0\3\u\w\z\i\a\j\w\6\9\o\p\6\r\8\h\a\q\a\6\i\3\u\m\w\b\2\m\s\0\m\u\n\n\4\o\m\b\m\w\h\n\k\b\6\a\l\j\c\8\7\i\q\x ]] 00:40:15.551 00:40:15.551 real 0m6.491s 00:40:15.551 user 0m5.275s 00:40:15.551 sys 0m0.883s 00:40:15.551 19:07:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:15.551 19:07:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:40:15.551 ************************************ 00:40:15.551 END TEST dd_flag_nofollow 00:40:15.551 ************************************ 00:40:15.810 19:07:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:40:15.810 19:07:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:15.810 19:07:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:15.810 19:07:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:15.810 ************************************ 00:40:15.810 START TEST dd_flag_noatime 00:40:15.810 ************************************ 00:40:15.810 19:07:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:40:15.810 19:07:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:40:15.810 19:07:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:40:15.810 19:07:16 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:40:15.810 19:07:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:40:15.810 19:07:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:40:15.810 19:07:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:15.810 19:07:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721934434 00:40:15.810 19:07:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:15.810 19:07:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721934436 00:40:15.810 19:07:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:40:16.747 19:07:17 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:16.747 [2024-07-25 19:07:17.267992] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:40:16.747 [2024-07-25 19:07:17.269032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165215 ] 00:40:17.005 [2024-07-25 19:07:17.451285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:17.264 [2024-07-25 19:07:17.696562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:18.899  Copying: 512/512 [B] (average 500 kBps) 00:40:18.899 00:40:18.899 19:07:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:18.899 19:07:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721934434 )) 00:40:18.899 19:07:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:18.899 19:07:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721934436 )) 00:40:18.899 19:07:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:19.158 [2024-07-25 19:07:19.540100] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:40:19.158 [2024-07-25 19:07:19.540901] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165246 ] 00:40:19.158 [2024-07-25 19:07:19.722845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:19.417 [2024-07-25 19:07:19.967490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:21.364  Copying: 512/512 [B] (average 500 kBps) 00:40:21.364 00:40:21.364 19:07:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:21.364 19:07:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721934440 )) 00:40:21.364 00:40:21.364 real 0m5.572s 00:40:21.364 user 0m3.599s 00:40:21.364 sys 0m0.702s 00:40:21.364 19:07:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:21.364 ************************************ 00:40:21.364 END TEST dd_flag_noatime 00:40:21.364 ************************************ 00:40:21.364 19:07:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:40:21.364 19:07:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:40:21.364 19:07:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:21.364 19:07:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:21.364 19:07:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:21.364 ************************************ 00:40:21.364 START TEST dd_flags_misc 00:40:21.364 ************************************ 00:40:21.364 19:07:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:40:21.364 19:07:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:40:21.364 19:07:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:40:21.364 19:07:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:40:21.364 19:07:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:40:21.364 19:07:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:40:21.364 19:07:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:40:21.364 19:07:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:40:21.364 19:07:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:21.364 19:07:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:40:21.364 [2024-07-25 19:07:21.892840] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:40:21.364 [2024-07-25 19:07:21.893597] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165293 ] 00:40:21.623 [2024-07-25 19:07:22.075326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:21.883 [2024-07-25 19:07:22.333492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:23.518  Copying: 512/512 [B] (average 500 kBps) 00:40:23.518 00:40:23.777 19:07:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ tx47r41j42yga2739bbp66g8o2kpbk5xcbv3qx792j7osje6q3twr41kcwj6o831g0cfu1xi151kd9hnn42u1u5g665hz6tpns5gvk412l1n7p4cpgy3utfc4tzzt9mdgi6k3hjzfxdrgw33y35j0ry115dsbugaig9b4p6gknamg9b4rf5ecaakrpofbykngzgx9diru4otw61zbx5yjtfxuqiplzfvv4rrv59wea077grxdpqkefuzb1dnw9r9xtgo81dv1emuh2ltcw5hy7t0qr9y3p1y74ewol9gws5ryfsh9sr4pn3q14ny4iqm93vf253ytgzwrwsf55q84wmedrfgdnehwjod0pjqi19voq6b6p4ktz543xte0ylru063jhtdijlhrvnbmjwzmzcdi37jk63ai1k05m78jna5rixiu5b5nylgya36dkh7cu2iwfvngj0t9ok2qejberkzzeljpzm81x644g2zjha19qw8he5bq5wjvn0zd6vj == \t\x\4\7\r\4\1\j\4\2\y\g\a\2\7\3\9\b\b\p\6\6\g\8\o\2\k\p\b\k\5\x\c\b\v\3\q\x\7\9\2\j\7\o\s\j\e\6\q\3\t\w\r\4\1\k\c\w\j\6\o\8\3\1\g\0\c\f\u\1\x\i\1\5\1\k\d\9\h\n\n\4\2\u\1\u\5\g\6\6\5\h\z\6\t\p\n\s\5\g\v\k\4\1\2\l\1\n\7\p\4\c\p\g\y\3\u\t\f\c\4\t\z\z\t\9\m\d\g\i\6\k\3\h\j\z\f\x\d\r\g\w\3\3\y\3\5\j\0\r\y\1\1\5\d\s\b\u\g\a\i\g\9\b\4\p\6\g\k\n\a\m\g\9\b\4\r\f\5\e\c\a\a\k\r\p\o\f\b\y\k\n\g\z\g\x\9\d\i\r\u\4\o\t\w\6\1\z\b\x\5\y\j\t\f\x\u\q\i\p\l\z\f\v\v\4\r\r\v\5\9\w\e\a\0\7\7\g\r\x\d\p\q\k\e\f\u\z\b\1\d\n\w\9\r\9\x\t\g\o\8\1\d\v\1\e\m\u\h\2\l\t\c\w\5\h\y\7\t\0\q\r\9\y\3\p\1\y\7\4\e\w\o\l\9\g\w\s\5\r\y\f\s\h\9\s\r\4\p\n\3\q\1\4\n\y\4\i\q\m\9\3\v\f\2\5\3\y\t\g\z\w\r\w\s\f\5\5\q\8\4\w\m\e\d\r\f\g\d\n\e\h\w\j\o\d\0\p\j\q\i\1\9\v\o\q\6\b\6\p\4\k\t\z\5\4\3\x\t\e\0\y\l\r\u\0\6\3\j\h\t\d\i\j\l\h\r\v\n\b\m\j\w\z\m\z\c\d\i\3\7\j\k\6\3\a\i\1\k\0\5\m\7\8\j\n\a\5\r\i\x\i\u\5\b\5\n\y\l\g\y\a\3\6\d\k\h\7\c\u\2\i\w\f\v\n\g\j\0\t\9\o\k\2\q\e\j\b\e\r\k\z\z\e\l\j\p\z\m\8\1\x\6\4\4\g\2\z\j\h\a\1\9\q\w\8\h\e\5\b\q\5\w\j\v\n\0\z\d\6\v\j ]] 00:40:23.777 19:07:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:23.777 19:07:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:40:23.777 [2024-07-25 19:07:24.182060] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:40:23.777 [2024-07-25 19:07:24.182311] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165325 ] 00:40:24.035 [2024-07-25 19:07:24.365723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:24.294 [2024-07-25 19:07:24.623294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:25.930  Copying: 512/512 [B] (average 500 kBps) 00:40:25.930 00:40:25.930 19:07:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ tx47r41j42yga2739bbp66g8o2kpbk5xcbv3qx792j7osje6q3twr41kcwj6o831g0cfu1xi151kd9hnn42u1u5g665hz6tpns5gvk412l1n7p4cpgy3utfc4tzzt9mdgi6k3hjzfxdrgw33y35j0ry115dsbugaig9b4p6gknamg9b4rf5ecaakrpofbykngzgx9diru4otw61zbx5yjtfxuqiplzfvv4rrv59wea077grxdpqkefuzb1dnw9r9xtgo81dv1emuh2ltcw5hy7t0qr9y3p1y74ewol9gws5ryfsh9sr4pn3q14ny4iqm93vf253ytgzwrwsf55q84wmedrfgdnehwjod0pjqi19voq6b6p4ktz543xte0ylru063jhtdijlhrvnbmjwzmzcdi37jk63ai1k05m78jna5rixiu5b5nylgya36dkh7cu2iwfvngj0t9ok2qejberkzzeljpzm81x644g2zjha19qw8he5bq5wjvn0zd6vj == \t\x\4\7\r\4\1\j\4\2\y\g\a\2\7\3\9\b\b\p\6\6\g\8\o\2\k\p\b\k\5\x\c\b\v\3\q\x\7\9\2\j\7\o\s\j\e\6\q\3\t\w\r\4\1\k\c\w\j\6\o\8\3\1\g\0\c\f\u\1\x\i\1\5\1\k\d\9\h\n\n\4\2\u\1\u\5\g\6\6\5\h\z\6\t\p\n\s\5\g\v\k\4\1\2\l\1\n\7\p\4\c\p\g\y\3\u\t\f\c\4\t\z\z\t\9\m\d\g\i\6\k\3\h\j\z\f\x\d\r\g\w\3\3\y\3\5\j\0\r\y\1\1\5\d\s\b\u\g\a\i\g\9\b\4\p\6\g\k\n\a\m\g\9\b\4\r\f\5\e\c\a\a\k\r\p\o\f\b\y\k\n\g\z\g\x\9\d\i\r\u\4\o\t\w\6\1\z\b\x\5\y\j\t\f\x\u\q\i\p\l\z\f\v\v\4\r\r\v\5\9\w\e\a\0\7\7\g\r\x\d\p\q\k\e\f\u\z\b\1\d\n\w\9\r\9\x\t\g\o\8\1\d\v\1\e\m\u\h\2\l\t\c\w\5\h\y\7\t\0\q\r\9\y\3\p\1\y\7\4\e\w\o\l\9\g\w\s\5\r\y\f\s\h\9\s\r\4\p\n\3\q\1\4\n\y\4\i\q\m\9\3\v\f\2\5\3\y\t\g\z\w\r\w\s\f\5\5\q\8\4\w\m\e\d\r\f\g\d\n\e\h\w\j\o\d\0\p\j\q\i\1\9\v\o\q\6\b\6\p\4\k\t\z\5\4\3\x\t\e\0\y\l\r\u\0\6\3\j\h\t\d\i\j\l\h\r\v\n\b\m\j\w\z\m\z\c\d\i\3\7\j\k\6\3\a\i\1\k\0\5\m\7\8\j\n\a\5\r\i\x\i\u\5\b\5\n\y\l\g\y\a\3\6\d\k\h\7\c\u\2\i\w\f\v\n\g\j\0\t\9\o\k\2\q\e\j\b\e\r\k\z\z\e\l\j\p\z\m\8\1\x\6\4\4\g\2\z\j\h\a\1\9\q\w\8\h\e\5\b\q\5\w\j\v\n\0\z\d\6\v\j ]] 00:40:25.930 19:07:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:25.930 19:07:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:40:25.930 [2024-07-25 19:07:26.448598] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:40:25.930 [2024-07-25 19:07:26.448804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165357 ] 00:40:26.190 [2024-07-25 19:07:26.629600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:26.449 [2024-07-25 19:07:26.823768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:28.088  Copying: 512/512 [B] (average 250 kBps) 00:40:28.088 00:40:28.088 19:07:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ tx47r41j42yga2739bbp66g8o2kpbk5xcbv3qx792j7osje6q3twr41kcwj6o831g0cfu1xi151kd9hnn42u1u5g665hz6tpns5gvk412l1n7p4cpgy3utfc4tzzt9mdgi6k3hjzfxdrgw33y35j0ry115dsbugaig9b4p6gknamg9b4rf5ecaakrpofbykngzgx9diru4otw61zbx5yjtfxuqiplzfvv4rrv59wea077grxdpqkefuzb1dnw9r9xtgo81dv1emuh2ltcw5hy7t0qr9y3p1y74ewol9gws5ryfsh9sr4pn3q14ny4iqm93vf253ytgzwrwsf55q84wmedrfgdnehwjod0pjqi19voq6b6p4ktz543xte0ylru063jhtdijlhrvnbmjwzmzcdi37jk63ai1k05m78jna5rixiu5b5nylgya36dkh7cu2iwfvngj0t9ok2qejberkzzeljpzm81x644g2zjha19qw8he5bq5wjvn0zd6vj == \t\x\4\7\r\4\1\j\4\2\y\g\a\2\7\3\9\b\b\p\6\6\g\8\o\2\k\p\b\k\5\x\c\b\v\3\q\x\7\9\2\j\7\o\s\j\e\6\q\3\t\w\r\4\1\k\c\w\j\6\o\8\3\1\g\0\c\f\u\1\x\i\1\5\1\k\d\9\h\n\n\4\2\u\1\u\5\g\6\6\5\h\z\6\t\p\n\s\5\g\v\k\4\1\2\l\1\n\7\p\4\c\p\g\y\3\u\t\f\c\4\t\z\z\t\9\m\d\g\i\6\k\3\h\j\z\f\x\d\r\g\w\3\3\y\3\5\j\0\r\y\1\1\5\d\s\b\u\g\a\i\g\9\b\4\p\6\g\k\n\a\m\g\9\b\4\r\f\5\e\c\a\a\k\r\p\o\f\b\y\k\n\g\z\g\x\9\d\i\r\u\4\o\t\w\6\1\z\b\x\5\y\j\t\f\x\u\q\i\p\l\z\f\v\v\4\r\r\v\5\9\w\e\a\0\7\7\g\r\x\d\p\q\k\e\f\u\z\b\1\d\n\w\9\r\9\x\t\g\o\8\1\d\v\1\e\m\u\h\2\l\t\c\w\5\h\y\7\t\0\q\r\9\y\3\p\1\y\7\4\e\w\o\l\9\g\w\s\5\r\y\f\s\h\9\s\r\4\p\n\3\q\1\4\n\y\4\i\q\m\9\3\v\f\2\5\3\y\t\g\z\w\r\w\s\f\5\5\q\8\4\w\m\e\d\r\f\g\d\n\e\h\w\j\o\d\0\p\j\q\i\1\9\v\o\q\6\b\6\p\4\k\t\z\5\4\3\x\t\e\0\y\l\r\u\0\6\3\j\h\t\d\i\j\l\h\r\v\n\b\m\j\w\z\m\z\c\d\i\3\7\j\k\6\3\a\i\1\k\0\5\m\7\8\j\n\a\5\r\i\x\i\u\5\b\5\n\y\l\g\y\a\3\6\d\k\h\7\c\u\2\i\w\f\v\n\g\j\0\t\9\o\k\2\q\e\j\b\e\r\k\z\z\e\l\j\p\z\m\8\1\x\6\4\4\g\2\z\j\h\a\1\9\q\w\8\h\e\5\b\q\5\w\j\v\n\0\z\d\6\v\j ]] 00:40:28.088 19:07:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:28.088 19:07:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:40:28.088 [2024-07-25 19:07:28.462492] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:40:28.088 [2024-07-25 19:07:28.463368] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165378 ] 00:40:28.088 [2024-07-25 19:07:28.641901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:28.347 [2024-07-25 19:07:28.837841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:29.986  Copying: 512/512 [B] (average 166 kBps) 00:40:29.986 00:40:29.986 19:07:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ tx47r41j42yga2739bbp66g8o2kpbk5xcbv3qx792j7osje6q3twr41kcwj6o831g0cfu1xi151kd9hnn42u1u5g665hz6tpns5gvk412l1n7p4cpgy3utfc4tzzt9mdgi6k3hjzfxdrgw33y35j0ry115dsbugaig9b4p6gknamg9b4rf5ecaakrpofbykngzgx9diru4otw61zbx5yjtfxuqiplzfvv4rrv59wea077grxdpqkefuzb1dnw9r9xtgo81dv1emuh2ltcw5hy7t0qr9y3p1y74ewol9gws5ryfsh9sr4pn3q14ny4iqm93vf253ytgzwrwsf55q84wmedrfgdnehwjod0pjqi19voq6b6p4ktz543xte0ylru063jhtdijlhrvnbmjwzmzcdi37jk63ai1k05m78jna5rixiu5b5nylgya36dkh7cu2iwfvngj0t9ok2qejberkzzeljpzm81x644g2zjha19qw8he5bq5wjvn0zd6vj == \t\x\4\7\r\4\1\j\4\2\y\g\a\2\7\3\9\b\b\p\6\6\g\8\o\2\k\p\b\k\5\x\c\b\v\3\q\x\7\9\2\j\7\o\s\j\e\6\q\3\t\w\r\4\1\k\c\w\j\6\o\8\3\1\g\0\c\f\u\1\x\i\1\5\1\k\d\9\h\n\n\4\2\u\1\u\5\g\6\6\5\h\z\6\t\p\n\s\5\g\v\k\4\1\2\l\1\n\7\p\4\c\p\g\y\3\u\t\f\c\4\t\z\z\t\9\m\d\g\i\6\k\3\h\j\z\f\x\d\r\g\w\3\3\y\3\5\j\0\r\y\1\1\5\d\s\b\u\g\a\i\g\9\b\4\p\6\g\k\n\a\m\g\9\b\4\r\f\5\e\c\a\a\k\r\p\o\f\b\y\k\n\g\z\g\x\9\d\i\r\u\4\o\t\w\6\1\z\b\x\5\y\j\t\f\x\u\q\i\p\l\z\f\v\v\4\r\r\v\5\9\w\e\a\0\7\7\g\r\x\d\p\q\k\e\f\u\z\b\1\d\n\w\9\r\9\x\t\g\o\8\1\d\v\1\e\m\u\h\2\l\t\c\w\5\h\y\7\t\0\q\r\9\y\3\p\1\y\7\4\e\w\o\l\9\g\w\s\5\r\y\f\s\h\9\s\r\4\p\n\3\q\1\4\n\y\4\i\q\m\9\3\v\f\2\5\3\y\t\g\z\w\r\w\s\f\5\5\q\8\4\w\m\e\d\r\f\g\d\n\e\h\w\j\o\d\0\p\j\q\i\1\9\v\o\q\6\b\6\p\4\k\t\z\5\4\3\x\t\e\0\y\l\r\u\0\6\3\j\h\t\d\i\j\l\h\r\v\n\b\m\j\w\z\m\z\c\d\i\3\7\j\k\6\3\a\i\1\k\0\5\m\7\8\j\n\a\5\r\i\x\i\u\5\b\5\n\y\l\g\y\a\3\6\d\k\h\7\c\u\2\i\w\f\v\n\g\j\0\t\9\o\k\2\q\e\j\b\e\r\k\z\z\e\l\j\p\z\m\8\1\x\6\4\4\g\2\z\j\h\a\1\9\q\w\8\h\e\5\b\q\5\w\j\v\n\0\z\d\6\v\j ]] 00:40:29.986 19:07:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:40:29.986 19:07:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:40:29.986 19:07:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:40:29.986 19:07:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:40:29.986 19:07:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:29.986 19:07:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:40:29.986 [2024-07-25 19:07:30.506096] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:40:29.986 [2024-07-25 19:07:30.506857] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165407 ] 00:40:30.245 [2024-07-25 19:07:30.689991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:30.504 [2024-07-25 19:07:30.889652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:32.142  Copying: 512/512 [B] (average 500 kBps) 00:40:32.142 00:40:32.142 19:07:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ey2khcizzld40428pya7rnoe73o5biicgds2o6rsmpsi3j0y62xtycbcrosvramgkdrbk3u0tez2q33ou49ohe5koimay4gm4ljp4xdt33kklhfypo46dlzg3ogmja9etc3z9twjoypbhykf33k9ou6fuygpv5d6344jrjlrvyi6c0d0wvzzzc8als2rjbwbk1e47zauewpbixu3m692ud5r9loqq20m3jbqenaid9eiszilldmjsrc0qy742q0zrhiuubpxkbe3x3f26qw6lz0d8bp2vzc15eyruhhnd4na1tqhpx9219gqenzppd0j1731s9r0xm33b4rvs3rnjowgxv7n7tsfbh9ixfjvulup8vs3ilpyqxsw4fq5eomc1sun86pi3fb8rnlwtb6qepdrtfi59cgmxikk1wrfjvbsug9wuv7sibvy6lg0vsfdvo9haw5ubku3kuwntgb76ireh4gly7hg7dk9o8yo7uj78iifm9k8xv6f9ze0fb3y == \e\y\2\k\h\c\i\z\z\l\d\4\0\4\2\8\p\y\a\7\r\n\o\e\7\3\o\5\b\i\i\c\g\d\s\2\o\6\r\s\m\p\s\i\3\j\0\y\6\2\x\t\y\c\b\c\r\o\s\v\r\a\m\g\k\d\r\b\k\3\u\0\t\e\z\2\q\3\3\o\u\4\9\o\h\e\5\k\o\i\m\a\y\4\g\m\4\l\j\p\4\x\d\t\3\3\k\k\l\h\f\y\p\o\4\6\d\l\z\g\3\o\g\m\j\a\9\e\t\c\3\z\9\t\w\j\o\y\p\b\h\y\k\f\3\3\k\9\o\u\6\f\u\y\g\p\v\5\d\6\3\4\4\j\r\j\l\r\v\y\i\6\c\0\d\0\w\v\z\z\z\c\8\a\l\s\2\r\j\b\w\b\k\1\e\4\7\z\a\u\e\w\p\b\i\x\u\3\m\6\9\2\u\d\5\r\9\l\o\q\q\2\0\m\3\j\b\q\e\n\a\i\d\9\e\i\s\z\i\l\l\d\m\j\s\r\c\0\q\y\7\4\2\q\0\z\r\h\i\u\u\b\p\x\k\b\e\3\x\3\f\2\6\q\w\6\l\z\0\d\8\b\p\2\v\z\c\1\5\e\y\r\u\h\h\n\d\4\n\a\1\t\q\h\p\x\9\2\1\9\g\q\e\n\z\p\p\d\0\j\1\7\3\1\s\9\r\0\x\m\3\3\b\4\r\v\s\3\r\n\j\o\w\g\x\v\7\n\7\t\s\f\b\h\9\i\x\f\j\v\u\l\u\p\8\v\s\3\i\l\p\y\q\x\s\w\4\f\q\5\e\o\m\c\1\s\u\n\8\6\p\i\3\f\b\8\r\n\l\w\t\b\6\q\e\p\d\r\t\f\i\5\9\c\g\m\x\i\k\k\1\w\r\f\j\v\b\s\u\g\9\w\u\v\7\s\i\b\v\y\6\l\g\0\v\s\f\d\v\o\9\h\a\w\5\u\b\k\u\3\k\u\w\n\t\g\b\7\6\i\r\e\h\4\g\l\y\7\h\g\7\d\k\9\o\8\y\o\7\u\j\7\8\i\i\f\m\9\k\8\x\v\6\f\9\z\e\0\f\b\3\y ]] 00:40:32.142 19:07:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:32.142 19:07:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:40:32.142 [2024-07-25 19:07:32.503578] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:40:32.142 [2024-07-25 19:07:32.503730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165434 ] 00:40:32.142 [2024-07-25 19:07:32.662188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:32.401 [2024-07-25 19:07:32.855212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:34.065  Copying: 512/512 [B] (average 500 kBps) 00:40:34.065 00:40:34.066 19:07:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ey2khcizzld40428pya7rnoe73o5biicgds2o6rsmpsi3j0y62xtycbcrosvramgkdrbk3u0tez2q33ou49ohe5koimay4gm4ljp4xdt33kklhfypo46dlzg3ogmja9etc3z9twjoypbhykf33k9ou6fuygpv5d6344jrjlrvyi6c0d0wvzzzc8als2rjbwbk1e47zauewpbixu3m692ud5r9loqq20m3jbqenaid9eiszilldmjsrc0qy742q0zrhiuubpxkbe3x3f26qw6lz0d8bp2vzc15eyruhhnd4na1tqhpx9219gqenzppd0j1731s9r0xm33b4rvs3rnjowgxv7n7tsfbh9ixfjvulup8vs3ilpyqxsw4fq5eomc1sun86pi3fb8rnlwtb6qepdrtfi59cgmxikk1wrfjvbsug9wuv7sibvy6lg0vsfdvo9haw5ubku3kuwntgb76ireh4gly7hg7dk9o8yo7uj78iifm9k8xv6f9ze0fb3y == \e\y\2\k\h\c\i\z\z\l\d\4\0\4\2\8\p\y\a\7\r\n\o\e\7\3\o\5\b\i\i\c\g\d\s\2\o\6\r\s\m\p\s\i\3\j\0\y\6\2\x\t\y\c\b\c\r\o\s\v\r\a\m\g\k\d\r\b\k\3\u\0\t\e\z\2\q\3\3\o\u\4\9\o\h\e\5\k\o\i\m\a\y\4\g\m\4\l\j\p\4\x\d\t\3\3\k\k\l\h\f\y\p\o\4\6\d\l\z\g\3\o\g\m\j\a\9\e\t\c\3\z\9\t\w\j\o\y\p\b\h\y\k\f\3\3\k\9\o\u\6\f\u\y\g\p\v\5\d\6\3\4\4\j\r\j\l\r\v\y\i\6\c\0\d\0\w\v\z\z\z\c\8\a\l\s\2\r\j\b\w\b\k\1\e\4\7\z\a\u\e\w\p\b\i\x\u\3\m\6\9\2\u\d\5\r\9\l\o\q\q\2\0\m\3\j\b\q\e\n\a\i\d\9\e\i\s\z\i\l\l\d\m\j\s\r\c\0\q\y\7\4\2\q\0\z\r\h\i\u\u\b\p\x\k\b\e\3\x\3\f\2\6\q\w\6\l\z\0\d\8\b\p\2\v\z\c\1\5\e\y\r\u\h\h\n\d\4\n\a\1\t\q\h\p\x\9\2\1\9\g\q\e\n\z\p\p\d\0\j\1\7\3\1\s\9\r\0\x\m\3\3\b\4\r\v\s\3\r\n\j\o\w\g\x\v\7\n\7\t\s\f\b\h\9\i\x\f\j\v\u\l\u\p\8\v\s\3\i\l\p\y\q\x\s\w\4\f\q\5\e\o\m\c\1\s\u\n\8\6\p\i\3\f\b\8\r\n\l\w\t\b\6\q\e\p\d\r\t\f\i\5\9\c\g\m\x\i\k\k\1\w\r\f\j\v\b\s\u\g\9\w\u\v\7\s\i\b\v\y\6\l\g\0\v\s\f\d\v\o\9\h\a\w\5\u\b\k\u\3\k\u\w\n\t\g\b\7\6\i\r\e\h\4\g\l\y\7\h\g\7\d\k\9\o\8\y\o\7\u\j\7\8\i\i\f\m\9\k\8\x\v\6\f\9\z\e\0\f\b\3\y ]] 00:40:34.066 19:07:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:34.066 19:07:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:40:34.066 [2024-07-25 19:07:34.502771] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:40:34.066 [2024-07-25 19:07:34.502997] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165463 ] 00:40:34.345 [2024-07-25 19:07:34.685436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:34.345 [2024-07-25 19:07:34.879154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:35.858  Copying: 512/512 [B] (average 250 kBps) 00:40:35.858 00:40:36.117 19:07:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ey2khcizzld40428pya7rnoe73o5biicgds2o6rsmpsi3j0y62xtycbcrosvramgkdrbk3u0tez2q33ou49ohe5koimay4gm4ljp4xdt33kklhfypo46dlzg3ogmja9etc3z9twjoypbhykf33k9ou6fuygpv5d6344jrjlrvyi6c0d0wvzzzc8als2rjbwbk1e47zauewpbixu3m692ud5r9loqq20m3jbqenaid9eiszilldmjsrc0qy742q0zrhiuubpxkbe3x3f26qw6lz0d8bp2vzc15eyruhhnd4na1tqhpx9219gqenzppd0j1731s9r0xm33b4rvs3rnjowgxv7n7tsfbh9ixfjvulup8vs3ilpyqxsw4fq5eomc1sun86pi3fb8rnlwtb6qepdrtfi59cgmxikk1wrfjvbsug9wuv7sibvy6lg0vsfdvo9haw5ubku3kuwntgb76ireh4gly7hg7dk9o8yo7uj78iifm9k8xv6f9ze0fb3y == \e\y\2\k\h\c\i\z\z\l\d\4\0\4\2\8\p\y\a\7\r\n\o\e\7\3\o\5\b\i\i\c\g\d\s\2\o\6\r\s\m\p\s\i\3\j\0\y\6\2\x\t\y\c\b\c\r\o\s\v\r\a\m\g\k\d\r\b\k\3\u\0\t\e\z\2\q\3\3\o\u\4\9\o\h\e\5\k\o\i\m\a\y\4\g\m\4\l\j\p\4\x\d\t\3\3\k\k\l\h\f\y\p\o\4\6\d\l\z\g\3\o\g\m\j\a\9\e\t\c\3\z\9\t\w\j\o\y\p\b\h\y\k\f\3\3\k\9\o\u\6\f\u\y\g\p\v\5\d\6\3\4\4\j\r\j\l\r\v\y\i\6\c\0\d\0\w\v\z\z\z\c\8\a\l\s\2\r\j\b\w\b\k\1\e\4\7\z\a\u\e\w\p\b\i\x\u\3\m\6\9\2\u\d\5\r\9\l\o\q\q\2\0\m\3\j\b\q\e\n\a\i\d\9\e\i\s\z\i\l\l\d\m\j\s\r\c\0\q\y\7\4\2\q\0\z\r\h\i\u\u\b\p\x\k\b\e\3\x\3\f\2\6\q\w\6\l\z\0\d\8\b\p\2\v\z\c\1\5\e\y\r\u\h\h\n\d\4\n\a\1\t\q\h\p\x\9\2\1\9\g\q\e\n\z\p\p\d\0\j\1\7\3\1\s\9\r\0\x\m\3\3\b\4\r\v\s\3\r\n\j\o\w\g\x\v\7\n\7\t\s\f\b\h\9\i\x\f\j\v\u\l\u\p\8\v\s\3\i\l\p\y\q\x\s\w\4\f\q\5\e\o\m\c\1\s\u\n\8\6\p\i\3\f\b\8\r\n\l\w\t\b\6\q\e\p\d\r\t\f\i\5\9\c\g\m\x\i\k\k\1\w\r\f\j\v\b\s\u\g\9\w\u\v\7\s\i\b\v\y\6\l\g\0\v\s\f\d\v\o\9\h\a\w\5\u\b\k\u\3\k\u\w\n\t\g\b\7\6\i\r\e\h\4\g\l\y\7\h\g\7\d\k\9\o\8\y\o\7\u\j\7\8\i\i\f\m\9\k\8\x\v\6\f\9\z\e\0\f\b\3\y ]] 00:40:36.117 19:07:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:36.117 19:07:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:40:36.117 [2024-07-25 19:07:36.519163] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:40:36.117 [2024-07-25 19:07:36.519384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165487 ] 00:40:36.376 [2024-07-25 19:07:36.702228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:36.376 [2024-07-25 19:07:36.897120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:37.881  Copying: 512/512 [B] (average 166 kBps) 00:40:37.881 00:40:38.141 ************************************ 00:40:38.141 END TEST dd_flags_misc 00:40:38.141 ************************************ 00:40:38.141 19:07:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ey2khcizzld40428pya7rnoe73o5biicgds2o6rsmpsi3j0y62xtycbcrosvramgkdrbk3u0tez2q33ou49ohe5koimay4gm4ljp4xdt33kklhfypo46dlzg3ogmja9etc3z9twjoypbhykf33k9ou6fuygpv5d6344jrjlrvyi6c0d0wvzzzc8als2rjbwbk1e47zauewpbixu3m692ud5r9loqq20m3jbqenaid9eiszilldmjsrc0qy742q0zrhiuubpxkbe3x3f26qw6lz0d8bp2vzc15eyruhhnd4na1tqhpx9219gqenzppd0j1731s9r0xm33b4rvs3rnjowgxv7n7tsfbh9ixfjvulup8vs3ilpyqxsw4fq5eomc1sun86pi3fb8rnlwtb6qepdrtfi59cgmxikk1wrfjvbsug9wuv7sibvy6lg0vsfdvo9haw5ubku3kuwntgb76ireh4gly7hg7dk9o8yo7uj78iifm9k8xv6f9ze0fb3y == \e\y\2\k\h\c\i\z\z\l\d\4\0\4\2\8\p\y\a\7\r\n\o\e\7\3\o\5\b\i\i\c\g\d\s\2\o\6\r\s\m\p\s\i\3\j\0\y\6\2\x\t\y\c\b\c\r\o\s\v\r\a\m\g\k\d\r\b\k\3\u\0\t\e\z\2\q\3\3\o\u\4\9\o\h\e\5\k\o\i\m\a\y\4\g\m\4\l\j\p\4\x\d\t\3\3\k\k\l\h\f\y\p\o\4\6\d\l\z\g\3\o\g\m\j\a\9\e\t\c\3\z\9\t\w\j\o\y\p\b\h\y\k\f\3\3\k\9\o\u\6\f\u\y\g\p\v\5\d\6\3\4\4\j\r\j\l\r\v\y\i\6\c\0\d\0\w\v\z\z\z\c\8\a\l\s\2\r\j\b\w\b\k\1\e\4\7\z\a\u\e\w\p\b\i\x\u\3\m\6\9\2\u\d\5\r\9\l\o\q\q\2\0\m\3\j\b\q\e\n\a\i\d\9\e\i\s\z\i\l\l\d\m\j\s\r\c\0\q\y\7\4\2\q\0\z\r\h\i\u\u\b\p\x\k\b\e\3\x\3\f\2\6\q\w\6\l\z\0\d\8\b\p\2\v\z\c\1\5\e\y\r\u\h\h\n\d\4\n\a\1\t\q\h\p\x\9\2\1\9\g\q\e\n\z\p\p\d\0\j\1\7\3\1\s\9\r\0\x\m\3\3\b\4\r\v\s\3\r\n\j\o\w\g\x\v\7\n\7\t\s\f\b\h\9\i\x\f\j\v\u\l\u\p\8\v\s\3\i\l\p\y\q\x\s\w\4\f\q\5\e\o\m\c\1\s\u\n\8\6\p\i\3\f\b\8\r\n\l\w\t\b\6\q\e\p\d\r\t\f\i\5\9\c\g\m\x\i\k\k\1\w\r\f\j\v\b\s\u\g\9\w\u\v\7\s\i\b\v\y\6\l\g\0\v\s\f\d\v\o\9\h\a\w\5\u\b\k\u\3\k\u\w\n\t\g\b\7\6\i\r\e\h\4\g\l\y\7\h\g\7\d\k\9\o\8\y\o\7\u\j\7\8\i\i\f\m\9\k\8\x\v\6\f\9\z\e\0\f\b\3\y ]] 00:40:38.141 00:40:38.141 real 0m16.678s 00:40:38.141 user 0m13.490s 00:40:38.141 sys 0m2.047s 00:40:38.141 19:07:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:38.141 19:07:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:40:38.141 19:07:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:40:38.141 19:07:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:40:38.141 * Second test run, using AIO 00:40:38.141 19:07:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:40:38.141 19:07:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:40:38.141 19:07:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:38.141 19:07:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:38.141 19:07:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:38.141 ************************************ 00:40:38.141 START TEST dd_flag_append_forced_aio 00:40:38.141 ************************************ 00:40:38.141 19:07:38 
spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:40:38.141 19:07:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:40:38.141 19:07:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:40:38.141 19:07:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:40:38.141 19:07:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:40:38.141 19:07:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:40:38.141 19:07:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=jzkcn4cw5o6vmenh4wf09aw9nnuo4v0f 00:40:38.141 19:07:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:40:38.141 19:07:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:40:38.141 19:07:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:40:38.141 19:07:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=m2va5eg1gzxsvgyzhc6c7i1tbdwie6pp 00:40:38.141 19:07:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s jzkcn4cw5o6vmenh4wf09aw9nnuo4v0f 00:40:38.141 19:07:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s m2va5eg1gzxsvgyzhc6c7i1tbdwie6pp 00:40:38.141 19:07:38 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:40:38.141 [2024-07-25 19:07:38.643655] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:40:38.141 [2024-07-25 19:07:38.644249] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165537 ] 00:40:38.400 [2024-07-25 19:07:38.829376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:38.659 [2024-07-25 19:07:39.027743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:40.297  Copying: 32/32 [B] (average 31 kBps) 00:40:40.297 00:40:40.297 19:07:40 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ m2va5eg1gzxsvgyzhc6c7i1tbdwie6ppjzkcn4cw5o6vmenh4wf09aw9nnuo4v0f == \m\2\v\a\5\e\g\1\g\z\x\s\v\g\y\z\h\c\6\c\7\i\1\t\b\d\w\i\e\6\p\p\j\z\k\c\n\4\c\w\5\o\6\v\m\e\n\h\4\w\f\0\9\a\w\9\n\n\u\o\4\v\0\f ]] 00:40:40.297 00:40:40.297 real 0m2.028s 00:40:40.297 user 0m1.682s 00:40:40.297 sys 0m0.212s 00:40:40.297 19:07:40 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:40.297 19:07:40 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:40:40.297 ************************************ 00:40:40.297 END TEST dd_flag_append_forced_aio 00:40:40.297 ************************************ 00:40:40.297 19:07:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:40:40.297 19:07:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:40.297 19:07:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:40.297 19:07:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:40.297 ************************************ 00:40:40.297 START TEST dd_flag_directory_forced_aio 00:40:40.297 ************************************ 00:40:40.297 19:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:40:40.297 19:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:40.297 19:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:40:40.297 19:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:40.297 19:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:40.297 19:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:40.297 19:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:40.297 19:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:40.297 19:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:40.297 19:07:40 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:40.297 19:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:40.297 19:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:40.297 19:07:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:40.297 [2024-07-25 19:07:40.740603] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:40:40.297 [2024-07-25 19:07:40.741025] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165587 ] 00:40:40.556 [2024-07-25 19:07:40.921133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:40.556 [2024-07-25 19:07:41.112202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:41.122 [2024-07-25 19:07:41.434817] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:41.122 [2024-07-25 19:07:41.435186] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:41.122 [2024-07-25 19:07:41.435257] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:41.687 [2024-07-25 19:07:42.207637] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:40:42.255 19:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:40:42.255 19:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:42.255 19:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:40:42.255 19:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:40:42.255 19:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:40:42.255 19:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:42.255 19:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:40:42.255 19:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:40:42.255 19:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:40:42.255 19:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:42.255 19:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:42.255 19:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:42.255 19:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:42.255 19:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:42.255 19:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:42.255 19:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:42.255 19:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:42.255 19:07:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:40:42.255 [2024-07-25 19:07:42.696271] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:40:42.255 [2024-07-25 19:07:42.696672] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165620 ] 00:40:42.514 [2024-07-25 19:07:42.854579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:42.514 [2024-07-25 19:07:43.047640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:43.081 [2024-07-25 19:07:43.367540] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:43.081 [2024-07-25 19:07:43.367864] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:43.081 [2024-07-25 19:07:43.367934] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:43.645 [2024-07-25 19:07:44.133612] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:40:44.212 ************************************ 00:40:44.212 END TEST dd_flag_directory_forced_aio 00:40:44.212 ************************************ 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:44.212 00:40:44.212 real 0m3.904s 00:40:44.212 user 0m3.267s 00:40:44.212 sys 0m0.431s 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:44.212 ************************************ 00:40:44.212 START TEST dd_flag_nofollow_forced_aio 00:40:44.212 ************************************ 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:44.212 19:07:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:44.212 [2024-07-25 19:07:44.734319] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:40:44.212 [2024-07-25 19:07:44.734848] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165665 ] 00:40:44.470 [2024-07-25 19:07:44.915285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:44.728 [2024-07-25 19:07:45.109837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:44.987 [2024-07-25 19:07:45.427120] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:40:44.987 [2024-07-25 19:07:45.427454] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:40:44.987 [2024-07-25 19:07:45.427533] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:45.922 [2024-07-25 19:07:46.199122] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:40:46.180 19:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:40:46.180 19:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:46.180 19:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:40:46.180 19:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:40:46.180 19:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:40:46.180 19:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:46.180 19:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:40:46.180 19:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:40:46.180 19:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:40:46.180 19:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:46.180 19:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:46.180 19:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:46.180 19:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:46.180 19:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # 
type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:46.180 19:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:46.180 19:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:46.180 19:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:46.180 19:07:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:40:46.180 [2024-07-25 19:07:46.712586] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:40:46.180 [2024-07-25 19:07:46.713621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165698 ] 00:40:46.439 [2024-07-25 19:07:46.897026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:46.696 [2024-07-25 19:07:47.100043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:46.953 [2024-07-25 19:07:47.403498] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:40:46.953 [2024-07-25 19:07:47.403578] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:40:46.953 [2024-07-25 19:07:47.403611] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:47.887 [2024-07-25 19:07:48.177233] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:40:48.146 19:07:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:40:48.146 19:07:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:48.146 19:07:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:40:48.146 19:07:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:40:48.146 19:07:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:40:48.146 19:07:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:48.146 19:07:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:40:48.146 19:07:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:40:48.146 19:07:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:40:48.146 19:07:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:48.146 [2024-07-25 19:07:48.686801] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:40:48.146 [2024-07-25 19:07:48.687014] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165720 ] 00:40:48.405 [2024-07-25 19:07:48.869135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:48.664 [2024-07-25 19:07:49.073068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:50.299  Copying: 512/512 [B] (average 500 kBps) 00:40:50.299 00:40:50.299 19:07:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ j4blzcrsu0asrc7kh8s3r0l5m4eo22iefhzezwuv40yilmnv5b5ljzti39m74e9hw93kbd6lmjhl41v1oav0u7h72cjymwd18gci8k6c25b3k93uz0ftnh8fbafezeqgbu9i6g12w5zqhsekcv3at08yay5e7eibyusr71f56d36qs19iv7mevjo1k1pswu8jd7ppr428arj5himvj02r700n67o1ii6z18c9spmdndo9n3vg177ultqgi08x357ztdalmk5gy1pihmxbhscr05lp3y238gr5a2y6kem024ixk6fe5boxifzimogdyfbr4r16ybk5y6ifk92jeei8o18bw6vqf4z6cwohe4jjdcrme0awtnsi39ota5ui08dw6ji482thd7c8te5uvymltav3oep8ll5i9tg2ip5bbarsz3mbas844jr32nzumpzvvq5kxaj4uhygfb73tp5kg2w2pmda5cqshtq1m4yss1wer3bz8x1d0lo6x8ur5oz == \j\4\b\l\z\c\r\s\u\0\a\s\r\c\7\k\h\8\s\3\r\0\l\5\m\4\e\o\2\2\i\e\f\h\z\e\z\w\u\v\4\0\y\i\l\m\n\v\5\b\5\l\j\z\t\i\3\9\m\7\4\e\9\h\w\9\3\k\b\d\6\l\m\j\h\l\4\1\v\1\o\a\v\0\u\7\h\7\2\c\j\y\m\w\d\1\8\g\c\i\8\k\6\c\2\5\b\3\k\9\3\u\z\0\f\t\n\h\8\f\b\a\f\e\z\e\q\g\b\u\9\i\6\g\1\2\w\5\z\q\h\s\e\k\c\v\3\a\t\0\8\y\a\y\5\e\7\e\i\b\y\u\s\r\7\1\f\5\6\d\3\6\q\s\1\9\i\v\7\m\e\v\j\o\1\k\1\p\s\w\u\8\j\d\7\p\p\r\4\2\8\a\r\j\5\h\i\m\v\j\0\2\r\7\0\0\n\6\7\o\1\i\i\6\z\1\8\c\9\s\p\m\d\n\d\o\9\n\3\v\g\1\7\7\u\l\t\q\g\i\0\8\x\3\5\7\z\t\d\a\l\m\k\5\g\y\1\p\i\h\m\x\b\h\s\c\r\0\5\l\p\3\y\2\3\8\g\r\5\a\2\y\6\k\e\m\0\2\4\i\x\k\6\f\e\5\b\o\x\i\f\z\i\m\o\g\d\y\f\b\r\4\r\1\6\y\b\k\5\y\6\i\f\k\9\2\j\e\e\i\8\o\1\8\b\w\6\v\q\f\4\z\6\c\w\o\h\e\4\j\j\d\c\r\m\e\0\a\w\t\n\s\i\3\9\o\t\a\5\u\i\0\8\d\w\6\j\i\4\8\2\t\h\d\7\c\8\t\e\5\u\v\y\m\l\t\a\v\3\o\e\p\8\l\l\5\i\9\t\g\2\i\p\5\b\b\a\r\s\z\3\m\b\a\s\8\4\4\j\r\3\2\n\z\u\m\p\z\v\v\q\5\k\x\a\j\4\u\h\y\g\f\b\7\3\t\p\5\k\g\2\w\2\p\m\d\a\5\c\q\s\h\t\q\1\m\4\y\s\s\1\w\e\r\3\b\z\8\x\1\d\0\l\o\6\x\8\u\r\5\o\z ]] 00:40:50.299 00:40:50.299 real 0m5.996s 00:40:50.299 user 0m4.920s 00:40:50.299 sys 0m0.735s 00:40:50.299 19:07:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:50.299 19:07:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:40:50.299 ************************************ 00:40:50.299 END TEST dd_flag_nofollow_forced_aio 00:40:50.299 ************************************ 00:40:50.299 19:07:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:40:50.299 19:07:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:50.299 19:07:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:50.299 19:07:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:50.299 ************************************ 00:40:50.299 START TEST dd_flag_noatime_forced_aio 00:40:50.299 ************************************ 00:40:50.299 19:07:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:40:50.299 19:07:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:40:50.299 19:07:50 
spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:40:50.299 19:07:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:40:50.299 19:07:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:40:50.299 19:07:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:40:50.299 19:07:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:50.299 19:07:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721934469 00:40:50.299 19:07:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:50.299 19:07:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721934470 00:40:50.299 19:07:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:40:51.233 19:07:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:51.234 [2024-07-25 19:07:51.810404] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:40:51.234 [2024-07-25 19:07:51.810623] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165787 ] 00:40:51.492 [2024-07-25 19:07:51.992398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:51.751 [2024-07-25 19:07:52.188486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:53.386  Copying: 512/512 [B] (average 500 kBps) 00:40:53.386 00:40:53.386 19:07:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:53.386 19:07:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721934469 )) 00:40:53.386 19:07:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:53.386 19:07:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721934470 )) 00:40:53.386 19:07:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:53.386 [2024-07-25 19:07:53.823976] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:40:53.386 [2024-07-25 19:07:53.824190] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165817 ] 00:40:53.645 [2024-07-25 19:07:54.005811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:53.645 [2024-07-25 19:07:54.206449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:55.589  Copying: 512/512 [B] (average 500 kBps) 00:40:55.589 00:40:55.589 19:07:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:55.589 19:07:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721934474 )) 00:40:55.589 00:40:55.589 real 0m5.064s 00:40:55.589 user 0m3.272s 00:40:55.589 sys 0m0.533s 00:40:55.589 19:07:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:55.589 ************************************ 00:40:55.589 END TEST dd_flag_noatime_forced_aio 00:40:55.589 19:07:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:40:55.589 ************************************ 00:40:55.589 19:07:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:40:55.589 19:07:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:55.589 19:07:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:55.589 19:07:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:55.589 ************************************ 00:40:55.589 START TEST dd_flags_misc_forced_aio 00:40:55.589 ************************************ 00:40:55.589 19:07:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:40:55.589 19:07:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:40:55.589 19:07:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:40:55.589 19:07:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:40:55.589 19:07:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:40:55.589 19:07:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:40:55.589 19:07:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:40:55.589 19:07:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:40:55.589 19:07:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:55.589 19:07:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:40:55.589 [2024-07-25 19:07:55.916442] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:40:55.589 [2024-07-25 19:07:55.916664] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165864 ] 00:40:55.589 [2024-07-25 19:07:56.098116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:55.847 [2024-07-25 19:07:56.294183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:57.482  Copying: 512/512 [B] (average 500 kBps) 00:40:57.482 00:40:57.482 19:07:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ rbcl395peh2x3naya3wywnt6g464nfn47o7tm3wl0g7uva67dtkhxmu9y8afp1k47wt00ermhgi1akdvsk60de5gvssempeuzww6nb1mfz7f1ri3nrkv2297r1eunrjs7efyndrtjmz6znxcm82nvtqkgruy7vx8kxjbaue1utwud9gdrd3bw7newh3m45wf7f11kf55qtnlaswqgnmnr9nrwe7d1ph71nncthm4nm71w83d88igpnm7kuy39a1j6wobjau34fele97sjzfeg428j3e8zbjtpw4wfz1i42m47k4308bf7o6vj37app59x4vvz7i7ve1m5dgooq51m2p7egfebmmmugsi84xendncbrdtnz631sm9203zbrig60oy1mfs6ceptnto1m72mexijgnibyjgof96jch9nmm7muyjzh5e27ouqw1jj8eyepuymaqn8zp9t5sm4hhfse154wx4kmqf0cj0jxqqku4s5bp0hl8tn7e6g268y4s0 == \r\b\c\l\3\9\5\p\e\h\2\x\3\n\a\y\a\3\w\y\w\n\t\6\g\4\6\4\n\f\n\4\7\o\7\t\m\3\w\l\0\g\7\u\v\a\6\7\d\t\k\h\x\m\u\9\y\8\a\f\p\1\k\4\7\w\t\0\0\e\r\m\h\g\i\1\a\k\d\v\s\k\6\0\d\e\5\g\v\s\s\e\m\p\e\u\z\w\w\6\n\b\1\m\f\z\7\f\1\r\i\3\n\r\k\v\2\2\9\7\r\1\e\u\n\r\j\s\7\e\f\y\n\d\r\t\j\m\z\6\z\n\x\c\m\8\2\n\v\t\q\k\g\r\u\y\7\v\x\8\k\x\j\b\a\u\e\1\u\t\w\u\d\9\g\d\r\d\3\b\w\7\n\e\w\h\3\m\4\5\w\f\7\f\1\1\k\f\5\5\q\t\n\l\a\s\w\q\g\n\m\n\r\9\n\r\w\e\7\d\1\p\h\7\1\n\n\c\t\h\m\4\n\m\7\1\w\8\3\d\8\8\i\g\p\n\m\7\k\u\y\3\9\a\1\j\6\w\o\b\j\a\u\3\4\f\e\l\e\9\7\s\j\z\f\e\g\4\2\8\j\3\e\8\z\b\j\t\p\w\4\w\f\z\1\i\4\2\m\4\7\k\4\3\0\8\b\f\7\o\6\v\j\3\7\a\p\p\5\9\x\4\v\v\z\7\i\7\v\e\1\m\5\d\g\o\o\q\5\1\m\2\p\7\e\g\f\e\b\m\m\m\u\g\s\i\8\4\x\e\n\d\n\c\b\r\d\t\n\z\6\3\1\s\m\9\2\0\3\z\b\r\i\g\6\0\o\y\1\m\f\s\6\c\e\p\t\n\t\o\1\m\7\2\m\e\x\i\j\g\n\i\b\y\j\g\o\f\9\6\j\c\h\9\n\m\m\7\m\u\y\j\z\h\5\e\2\7\o\u\q\w\1\j\j\8\e\y\e\p\u\y\m\a\q\n\8\z\p\9\t\5\s\m\4\h\h\f\s\e\1\5\4\w\x\4\k\m\q\f\0\c\j\0\j\x\q\q\k\u\4\s\5\b\p\0\h\l\8\t\n\7\e\6\g\2\6\8\y\4\s\0 ]] 00:40:57.482 19:07:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:57.482 19:07:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:40:57.482 [2024-07-25 19:07:57.935752] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:40:57.482 [2024-07-25 19:07:57.935957] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165893 ] 00:40:57.740 [2024-07-25 19:07:58.109842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:57.740 [2024-07-25 19:07:58.304429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:59.754  Copying: 512/512 [B] (average 500 kBps) 00:40:59.754 00:40:59.754 19:07:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ rbcl395peh2x3naya3wywnt6g464nfn47o7tm3wl0g7uva67dtkhxmu9y8afp1k47wt00ermhgi1akdvsk60de5gvssempeuzww6nb1mfz7f1ri3nrkv2297r1eunrjs7efyndrtjmz6znxcm82nvtqkgruy7vx8kxjbaue1utwud9gdrd3bw7newh3m45wf7f11kf55qtnlaswqgnmnr9nrwe7d1ph71nncthm4nm71w83d88igpnm7kuy39a1j6wobjau34fele97sjzfeg428j3e8zbjtpw4wfz1i42m47k4308bf7o6vj37app59x4vvz7i7ve1m5dgooq51m2p7egfebmmmugsi84xendncbrdtnz631sm9203zbrig60oy1mfs6ceptnto1m72mexijgnibyjgof96jch9nmm7muyjzh5e27ouqw1jj8eyepuymaqn8zp9t5sm4hhfse154wx4kmqf0cj0jxqqku4s5bp0hl8tn7e6g268y4s0 == \r\b\c\l\3\9\5\p\e\h\2\x\3\n\a\y\a\3\w\y\w\n\t\6\g\4\6\4\n\f\n\4\7\o\7\t\m\3\w\l\0\g\7\u\v\a\6\7\d\t\k\h\x\m\u\9\y\8\a\f\p\1\k\4\7\w\t\0\0\e\r\m\h\g\i\1\a\k\d\v\s\k\6\0\d\e\5\g\v\s\s\e\m\p\e\u\z\w\w\6\n\b\1\m\f\z\7\f\1\r\i\3\n\r\k\v\2\2\9\7\r\1\e\u\n\r\j\s\7\e\f\y\n\d\r\t\j\m\z\6\z\n\x\c\m\8\2\n\v\t\q\k\g\r\u\y\7\v\x\8\k\x\j\b\a\u\e\1\u\t\w\u\d\9\g\d\r\d\3\b\w\7\n\e\w\h\3\m\4\5\w\f\7\f\1\1\k\f\5\5\q\t\n\l\a\s\w\q\g\n\m\n\r\9\n\r\w\e\7\d\1\p\h\7\1\n\n\c\t\h\m\4\n\m\7\1\w\8\3\d\8\8\i\g\p\n\m\7\k\u\y\3\9\a\1\j\6\w\o\b\j\a\u\3\4\f\e\l\e\9\7\s\j\z\f\e\g\4\2\8\j\3\e\8\z\b\j\t\p\w\4\w\f\z\1\i\4\2\m\4\7\k\4\3\0\8\b\f\7\o\6\v\j\3\7\a\p\p\5\9\x\4\v\v\z\7\i\7\v\e\1\m\5\d\g\o\o\q\5\1\m\2\p\7\e\g\f\e\b\m\m\m\u\g\s\i\8\4\x\e\n\d\n\c\b\r\d\t\n\z\6\3\1\s\m\9\2\0\3\z\b\r\i\g\6\0\o\y\1\m\f\s\6\c\e\p\t\n\t\o\1\m\7\2\m\e\x\i\j\g\n\i\b\y\j\g\o\f\9\6\j\c\h\9\n\m\m\7\m\u\y\j\z\h\5\e\2\7\o\u\q\w\1\j\j\8\e\y\e\p\u\y\m\a\q\n\8\z\p\9\t\5\s\m\4\h\h\f\s\e\1\5\4\w\x\4\k\m\q\f\0\c\j\0\j\x\q\q\k\u\4\s\5\b\p\0\h\l\8\t\n\7\e\6\g\2\6\8\y\4\s\0 ]] 00:40:59.754 19:07:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:59.754 19:07:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:40:59.755 [2024-07-25 19:07:59.939332] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:40:59.755 [2024-07-25 19:07:59.939533] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165917 ] 00:40:59.755 [2024-07-25 19:08:00.118481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:00.055 [2024-07-25 19:08:00.309388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:01.432  Copying: 512/512 [B] (average 166 kBps) 00:41:01.432 00:41:01.432 19:08:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ rbcl395peh2x3naya3wywnt6g464nfn47o7tm3wl0g7uva67dtkhxmu9y8afp1k47wt00ermhgi1akdvsk60de5gvssempeuzww6nb1mfz7f1ri3nrkv2297r1eunrjs7efyndrtjmz6znxcm82nvtqkgruy7vx8kxjbaue1utwud9gdrd3bw7newh3m45wf7f11kf55qtnlaswqgnmnr9nrwe7d1ph71nncthm4nm71w83d88igpnm7kuy39a1j6wobjau34fele97sjzfeg428j3e8zbjtpw4wfz1i42m47k4308bf7o6vj37app59x4vvz7i7ve1m5dgooq51m2p7egfebmmmugsi84xendncbrdtnz631sm9203zbrig60oy1mfs6ceptnto1m72mexijgnibyjgof96jch9nmm7muyjzh5e27ouqw1jj8eyepuymaqn8zp9t5sm4hhfse154wx4kmqf0cj0jxqqku4s5bp0hl8tn7e6g268y4s0 == \r\b\c\l\3\9\5\p\e\h\2\x\3\n\a\y\a\3\w\y\w\n\t\6\g\4\6\4\n\f\n\4\7\o\7\t\m\3\w\l\0\g\7\u\v\a\6\7\d\t\k\h\x\m\u\9\y\8\a\f\p\1\k\4\7\w\t\0\0\e\r\m\h\g\i\1\a\k\d\v\s\k\6\0\d\e\5\g\v\s\s\e\m\p\e\u\z\w\w\6\n\b\1\m\f\z\7\f\1\r\i\3\n\r\k\v\2\2\9\7\r\1\e\u\n\r\j\s\7\e\f\y\n\d\r\t\j\m\z\6\z\n\x\c\m\8\2\n\v\t\q\k\g\r\u\y\7\v\x\8\k\x\j\b\a\u\e\1\u\t\w\u\d\9\g\d\r\d\3\b\w\7\n\e\w\h\3\m\4\5\w\f\7\f\1\1\k\f\5\5\q\t\n\l\a\s\w\q\g\n\m\n\r\9\n\r\w\e\7\d\1\p\h\7\1\n\n\c\t\h\m\4\n\m\7\1\w\8\3\d\8\8\i\g\p\n\m\7\k\u\y\3\9\a\1\j\6\w\o\b\j\a\u\3\4\f\e\l\e\9\7\s\j\z\f\e\g\4\2\8\j\3\e\8\z\b\j\t\p\w\4\w\f\z\1\i\4\2\m\4\7\k\4\3\0\8\b\f\7\o\6\v\j\3\7\a\p\p\5\9\x\4\v\v\z\7\i\7\v\e\1\m\5\d\g\o\o\q\5\1\m\2\p\7\e\g\f\e\b\m\m\m\u\g\s\i\8\4\x\e\n\d\n\c\b\r\d\t\n\z\6\3\1\s\m\9\2\0\3\z\b\r\i\g\6\0\o\y\1\m\f\s\6\c\e\p\t\n\t\o\1\m\7\2\m\e\x\i\j\g\n\i\b\y\j\g\o\f\9\6\j\c\h\9\n\m\m\7\m\u\y\j\z\h\5\e\2\7\o\u\q\w\1\j\j\8\e\y\e\p\u\y\m\a\q\n\8\z\p\9\t\5\s\m\4\h\h\f\s\e\1\5\4\w\x\4\k\m\q\f\0\c\j\0\j\x\q\q\k\u\4\s\5\b\p\0\h\l\8\t\n\7\e\6\g\2\6\8\y\4\s\0 ]] 00:41:01.432 19:08:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:01.433 19:08:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:41:01.433 [2024-07-25 19:08:01.956332] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:41:01.433 [2024-07-25 19:08:01.956547] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165948 ] 00:41:01.691 [2024-07-25 19:08:02.138225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:01.949 [2024-07-25 19:08:02.331884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:03.582  Copying: 512/512 [B] (average 125 kBps) 00:41:03.582 00:41:03.582 19:08:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ rbcl395peh2x3naya3wywnt6g464nfn47o7tm3wl0g7uva67dtkhxmu9y8afp1k47wt00ermhgi1akdvsk60de5gvssempeuzww6nb1mfz7f1ri3nrkv2297r1eunrjs7efyndrtjmz6znxcm82nvtqkgruy7vx8kxjbaue1utwud9gdrd3bw7newh3m45wf7f11kf55qtnlaswqgnmnr9nrwe7d1ph71nncthm4nm71w83d88igpnm7kuy39a1j6wobjau34fele97sjzfeg428j3e8zbjtpw4wfz1i42m47k4308bf7o6vj37app59x4vvz7i7ve1m5dgooq51m2p7egfebmmmugsi84xendncbrdtnz631sm9203zbrig60oy1mfs6ceptnto1m72mexijgnibyjgof96jch9nmm7muyjzh5e27ouqw1jj8eyepuymaqn8zp9t5sm4hhfse154wx4kmqf0cj0jxqqku4s5bp0hl8tn7e6g268y4s0 == \r\b\c\l\3\9\5\p\e\h\2\x\3\n\a\y\a\3\w\y\w\n\t\6\g\4\6\4\n\f\n\4\7\o\7\t\m\3\w\l\0\g\7\u\v\a\6\7\d\t\k\h\x\m\u\9\y\8\a\f\p\1\k\4\7\w\t\0\0\e\r\m\h\g\i\1\a\k\d\v\s\k\6\0\d\e\5\g\v\s\s\e\m\p\e\u\z\w\w\6\n\b\1\m\f\z\7\f\1\r\i\3\n\r\k\v\2\2\9\7\r\1\e\u\n\r\j\s\7\e\f\y\n\d\r\t\j\m\z\6\z\n\x\c\m\8\2\n\v\t\q\k\g\r\u\y\7\v\x\8\k\x\j\b\a\u\e\1\u\t\w\u\d\9\g\d\r\d\3\b\w\7\n\e\w\h\3\m\4\5\w\f\7\f\1\1\k\f\5\5\q\t\n\l\a\s\w\q\g\n\m\n\r\9\n\r\w\e\7\d\1\p\h\7\1\n\n\c\t\h\m\4\n\m\7\1\w\8\3\d\8\8\i\g\p\n\m\7\k\u\y\3\9\a\1\j\6\w\o\b\j\a\u\3\4\f\e\l\e\9\7\s\j\z\f\e\g\4\2\8\j\3\e\8\z\b\j\t\p\w\4\w\f\z\1\i\4\2\m\4\7\k\4\3\0\8\b\f\7\o\6\v\j\3\7\a\p\p\5\9\x\4\v\v\z\7\i\7\v\e\1\m\5\d\g\o\o\q\5\1\m\2\p\7\e\g\f\e\b\m\m\m\u\g\s\i\8\4\x\e\n\d\n\c\b\r\d\t\n\z\6\3\1\s\m\9\2\0\3\z\b\r\i\g\6\0\o\y\1\m\f\s\6\c\e\p\t\n\t\o\1\m\7\2\m\e\x\i\j\g\n\i\b\y\j\g\o\f\9\6\j\c\h\9\n\m\m\7\m\u\y\j\z\h\5\e\2\7\o\u\q\w\1\j\j\8\e\y\e\p\u\y\m\a\q\n\8\z\p\9\t\5\s\m\4\h\h\f\s\e\1\5\4\w\x\4\k\m\q\f\0\c\j\0\j\x\q\q\k\u\4\s\5\b\p\0\h\l\8\t\n\7\e\6\g\2\6\8\y\4\s\0 ]] 00:41:03.582 19:08:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:41:03.582 19:08:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:41:03.582 19:08:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:41:03.582 19:08:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:03.582 19:08:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:03.582 19:08:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:41:03.582 [2024-07-25 19:08:03.971940] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:41:03.582 [2024-07-25 19:08:03.972090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165972 ] 00:41:03.582 [2024-07-25 19:08:04.130140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:03.841 [2024-07-25 19:08:04.322278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:05.477  Copying: 512/512 [B] (average 500 kBps) 00:41:05.477 00:41:05.477 19:08:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 75bfdhdrhtnr4ln84rs8ydl89hxbgxwxrkk4ngakbw9ujk426eax7rtvhsunidl0t1lpzq8vsqa7m0jyc20xluzwdws303b0d2kjx8t2na8vjh8v60qax8b80eek4spwxge8s224gxjsmp66jtg24mva4i95eu8r6zg3tm44hyes26rxqit6mouvyr2neg8dbhibnfo24aft14xcz7ou4rbxpchg6jud6dmawzjkzo3n8cndvuaqwjyd0hexudcd515r65sfvqmsd32kisdmuxflbfp1pvtub7354a5dum1eornxgv7jz3dvay82kygnnfsyee1srpr9zj2f6kwqlrmwx40p9hvv4uqh8u5sfgf9frrt20e65mkg4ztkb5wwc3t5cwp3q5pct80irek8viejbs8ptqnj2ejstibc6ly5wb6zsinkhfyrkuel4os32y1ac1gf9bhj176c1g2fdy850dfrxgznyk3burm6bi8gtb05czuknotoyzc6f850 == \7\5\b\f\d\h\d\r\h\t\n\r\4\l\n\8\4\r\s\8\y\d\l\8\9\h\x\b\g\x\w\x\r\k\k\4\n\g\a\k\b\w\9\u\j\k\4\2\6\e\a\x\7\r\t\v\h\s\u\n\i\d\l\0\t\1\l\p\z\q\8\v\s\q\a\7\m\0\j\y\c\2\0\x\l\u\z\w\d\w\s\3\0\3\b\0\d\2\k\j\x\8\t\2\n\a\8\v\j\h\8\v\6\0\q\a\x\8\b\8\0\e\e\k\4\s\p\w\x\g\e\8\s\2\2\4\g\x\j\s\m\p\6\6\j\t\g\2\4\m\v\a\4\i\9\5\e\u\8\r\6\z\g\3\t\m\4\4\h\y\e\s\2\6\r\x\q\i\t\6\m\o\u\v\y\r\2\n\e\g\8\d\b\h\i\b\n\f\o\2\4\a\f\t\1\4\x\c\z\7\o\u\4\r\b\x\p\c\h\g\6\j\u\d\6\d\m\a\w\z\j\k\z\o\3\n\8\c\n\d\v\u\a\q\w\j\y\d\0\h\e\x\u\d\c\d\5\1\5\r\6\5\s\f\v\q\m\s\d\3\2\k\i\s\d\m\u\x\f\l\b\f\p\1\p\v\t\u\b\7\3\5\4\a\5\d\u\m\1\e\o\r\n\x\g\v\7\j\z\3\d\v\a\y\8\2\k\y\g\n\n\f\s\y\e\e\1\s\r\p\r\9\z\j\2\f\6\k\w\q\l\r\m\w\x\4\0\p\9\h\v\v\4\u\q\h\8\u\5\s\f\g\f\9\f\r\r\t\2\0\e\6\5\m\k\g\4\z\t\k\b\5\w\w\c\3\t\5\c\w\p\3\q\5\p\c\t\8\0\i\r\e\k\8\v\i\e\j\b\s\8\p\t\q\n\j\2\e\j\s\t\i\b\c\6\l\y\5\w\b\6\z\s\i\n\k\h\f\y\r\k\u\e\l\4\o\s\3\2\y\1\a\c\1\g\f\9\b\h\j\1\7\6\c\1\g\2\f\d\y\8\5\0\d\f\r\x\g\z\n\y\k\3\b\u\r\m\6\b\i\8\g\t\b\0\5\c\z\u\k\n\o\t\o\y\z\c\6\f\8\5\0 ]] 00:41:05.477 19:08:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:05.477 19:08:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:41:05.477 [2024-07-25 19:08:05.975064] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:41:05.477 [2024-07-25 19:08:05.975296] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166001 ] 00:41:05.736 [2024-07-25 19:08:06.158391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:05.995 [2024-07-25 19:08:06.354968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:07.630  Copying: 512/512 [B] (average 500 kBps) 00:41:07.630 00:41:07.630 19:08:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 75bfdhdrhtnr4ln84rs8ydl89hxbgxwxrkk4ngakbw9ujk426eax7rtvhsunidl0t1lpzq8vsqa7m0jyc20xluzwdws303b0d2kjx8t2na8vjh8v60qax8b80eek4spwxge8s224gxjsmp66jtg24mva4i95eu8r6zg3tm44hyes26rxqit6mouvyr2neg8dbhibnfo24aft14xcz7ou4rbxpchg6jud6dmawzjkzo3n8cndvuaqwjyd0hexudcd515r65sfvqmsd32kisdmuxflbfp1pvtub7354a5dum1eornxgv7jz3dvay82kygnnfsyee1srpr9zj2f6kwqlrmwx40p9hvv4uqh8u5sfgf9frrt20e65mkg4ztkb5wwc3t5cwp3q5pct80irek8viejbs8ptqnj2ejstibc6ly5wb6zsinkhfyrkuel4os32y1ac1gf9bhj176c1g2fdy850dfrxgznyk3burm6bi8gtb05czuknotoyzc6f850 == \7\5\b\f\d\h\d\r\h\t\n\r\4\l\n\8\4\r\s\8\y\d\l\8\9\h\x\b\g\x\w\x\r\k\k\4\n\g\a\k\b\w\9\u\j\k\4\2\6\e\a\x\7\r\t\v\h\s\u\n\i\d\l\0\t\1\l\p\z\q\8\v\s\q\a\7\m\0\j\y\c\2\0\x\l\u\z\w\d\w\s\3\0\3\b\0\d\2\k\j\x\8\t\2\n\a\8\v\j\h\8\v\6\0\q\a\x\8\b\8\0\e\e\k\4\s\p\w\x\g\e\8\s\2\2\4\g\x\j\s\m\p\6\6\j\t\g\2\4\m\v\a\4\i\9\5\e\u\8\r\6\z\g\3\t\m\4\4\h\y\e\s\2\6\r\x\q\i\t\6\m\o\u\v\y\r\2\n\e\g\8\d\b\h\i\b\n\f\o\2\4\a\f\t\1\4\x\c\z\7\o\u\4\r\b\x\p\c\h\g\6\j\u\d\6\d\m\a\w\z\j\k\z\o\3\n\8\c\n\d\v\u\a\q\w\j\y\d\0\h\e\x\u\d\c\d\5\1\5\r\6\5\s\f\v\q\m\s\d\3\2\k\i\s\d\m\u\x\f\l\b\f\p\1\p\v\t\u\b\7\3\5\4\a\5\d\u\m\1\e\o\r\n\x\g\v\7\j\z\3\d\v\a\y\8\2\k\y\g\n\n\f\s\y\e\e\1\s\r\p\r\9\z\j\2\f\6\k\w\q\l\r\m\w\x\4\0\p\9\h\v\v\4\u\q\h\8\u\5\s\f\g\f\9\f\r\r\t\2\0\e\6\5\m\k\g\4\z\t\k\b\5\w\w\c\3\t\5\c\w\p\3\q\5\p\c\t\8\0\i\r\e\k\8\v\i\e\j\b\s\8\p\t\q\n\j\2\e\j\s\t\i\b\c\6\l\y\5\w\b\6\z\s\i\n\k\h\f\y\r\k\u\e\l\4\o\s\3\2\y\1\a\c\1\g\f\9\b\h\j\1\7\6\c\1\g\2\f\d\y\8\5\0\d\f\r\x\g\z\n\y\k\3\b\u\r\m\6\b\i\8\g\t\b\0\5\c\z\u\k\n\o\t\o\y\z\c\6\f\8\5\0 ]] 00:41:07.630 19:08:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:07.630 19:08:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:41:07.630 [2024-07-25 19:08:07.973109] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:41:07.630 [2024-07-25 19:08:07.973252] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166025 ] 00:41:07.630 [2024-07-25 19:08:08.129642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:07.888 [2024-07-25 19:08:08.322312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:09.523  Copying: 512/512 [B] (average 250 kBps) 00:41:09.523 00:41:09.523 19:08:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 75bfdhdrhtnr4ln84rs8ydl89hxbgxwxrkk4ngakbw9ujk426eax7rtvhsunidl0t1lpzq8vsqa7m0jyc20xluzwdws303b0d2kjx8t2na8vjh8v60qax8b80eek4spwxge8s224gxjsmp66jtg24mva4i95eu8r6zg3tm44hyes26rxqit6mouvyr2neg8dbhibnfo24aft14xcz7ou4rbxpchg6jud6dmawzjkzo3n8cndvuaqwjyd0hexudcd515r65sfvqmsd32kisdmuxflbfp1pvtub7354a5dum1eornxgv7jz3dvay82kygnnfsyee1srpr9zj2f6kwqlrmwx40p9hvv4uqh8u5sfgf9frrt20e65mkg4ztkb5wwc3t5cwp3q5pct80irek8viejbs8ptqnj2ejstibc6ly5wb6zsinkhfyrkuel4os32y1ac1gf9bhj176c1g2fdy850dfrxgznyk3burm6bi8gtb05czuknotoyzc6f850 == \7\5\b\f\d\h\d\r\h\t\n\r\4\l\n\8\4\r\s\8\y\d\l\8\9\h\x\b\g\x\w\x\r\k\k\4\n\g\a\k\b\w\9\u\j\k\4\2\6\e\a\x\7\r\t\v\h\s\u\n\i\d\l\0\t\1\l\p\z\q\8\v\s\q\a\7\m\0\j\y\c\2\0\x\l\u\z\w\d\w\s\3\0\3\b\0\d\2\k\j\x\8\t\2\n\a\8\v\j\h\8\v\6\0\q\a\x\8\b\8\0\e\e\k\4\s\p\w\x\g\e\8\s\2\2\4\g\x\j\s\m\p\6\6\j\t\g\2\4\m\v\a\4\i\9\5\e\u\8\r\6\z\g\3\t\m\4\4\h\y\e\s\2\6\r\x\q\i\t\6\m\o\u\v\y\r\2\n\e\g\8\d\b\h\i\b\n\f\o\2\4\a\f\t\1\4\x\c\z\7\o\u\4\r\b\x\p\c\h\g\6\j\u\d\6\d\m\a\w\z\j\k\z\o\3\n\8\c\n\d\v\u\a\q\w\j\y\d\0\h\e\x\u\d\c\d\5\1\5\r\6\5\s\f\v\q\m\s\d\3\2\k\i\s\d\m\u\x\f\l\b\f\p\1\p\v\t\u\b\7\3\5\4\a\5\d\u\m\1\e\o\r\n\x\g\v\7\j\z\3\d\v\a\y\8\2\k\y\g\n\n\f\s\y\e\e\1\s\r\p\r\9\z\j\2\f\6\k\w\q\l\r\m\w\x\4\0\p\9\h\v\v\4\u\q\h\8\u\5\s\f\g\f\9\f\r\r\t\2\0\e\6\5\m\k\g\4\z\t\k\b\5\w\w\c\3\t\5\c\w\p\3\q\5\p\c\t\8\0\i\r\e\k\8\v\i\e\j\b\s\8\p\t\q\n\j\2\e\j\s\t\i\b\c\6\l\y\5\w\b\6\z\s\i\n\k\h\f\y\r\k\u\e\l\4\o\s\3\2\y\1\a\c\1\g\f\9\b\h\j\1\7\6\c\1\g\2\f\d\y\8\5\0\d\f\r\x\g\z\n\y\k\3\b\u\r\m\6\b\i\8\g\t\b\0\5\c\z\u\k\n\o\t\o\y\z\c\6\f\8\5\0 ]] 00:41:09.523 19:08:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:09.523 19:08:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:41:09.523 [2024-07-25 19:08:09.974107] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:41:09.523 [2024-07-25 19:08:09.974347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166054 ] 00:41:09.782 [2024-07-25 19:08:10.161033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:09.782 [2024-07-25 19:08:10.358047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:11.731  Copying: 512/512 [B] (average 250 kBps) 00:41:11.731 00:41:11.731 ************************************ 00:41:11.731 END TEST dd_flags_misc_forced_aio 00:41:11.731 ************************************ 00:41:11.732 19:08:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 75bfdhdrhtnr4ln84rs8ydl89hxbgxwxrkk4ngakbw9ujk426eax7rtvhsunidl0t1lpzq8vsqa7m0jyc20xluzwdws303b0d2kjx8t2na8vjh8v60qax8b80eek4spwxge8s224gxjsmp66jtg24mva4i95eu8r6zg3tm44hyes26rxqit6mouvyr2neg8dbhibnfo24aft14xcz7ou4rbxpchg6jud6dmawzjkzo3n8cndvuaqwjyd0hexudcd515r65sfvqmsd32kisdmuxflbfp1pvtub7354a5dum1eornxgv7jz3dvay82kygnnfsyee1srpr9zj2f6kwqlrmwx40p9hvv4uqh8u5sfgf9frrt20e65mkg4ztkb5wwc3t5cwp3q5pct80irek8viejbs8ptqnj2ejstibc6ly5wb6zsinkhfyrkuel4os32y1ac1gf9bhj176c1g2fdy850dfrxgznyk3burm6bi8gtb05czuknotoyzc6f850 == \7\5\b\f\d\h\d\r\h\t\n\r\4\l\n\8\4\r\s\8\y\d\l\8\9\h\x\b\g\x\w\x\r\k\k\4\n\g\a\k\b\w\9\u\j\k\4\2\6\e\a\x\7\r\t\v\h\s\u\n\i\d\l\0\t\1\l\p\z\q\8\v\s\q\a\7\m\0\j\y\c\2\0\x\l\u\z\w\d\w\s\3\0\3\b\0\d\2\k\j\x\8\t\2\n\a\8\v\j\h\8\v\6\0\q\a\x\8\b\8\0\e\e\k\4\s\p\w\x\g\e\8\s\2\2\4\g\x\j\s\m\p\6\6\j\t\g\2\4\m\v\a\4\i\9\5\e\u\8\r\6\z\g\3\t\m\4\4\h\y\e\s\2\6\r\x\q\i\t\6\m\o\u\v\y\r\2\n\e\g\8\d\b\h\i\b\n\f\o\2\4\a\f\t\1\4\x\c\z\7\o\u\4\r\b\x\p\c\h\g\6\j\u\d\6\d\m\a\w\z\j\k\z\o\3\n\8\c\n\d\v\u\a\q\w\j\y\d\0\h\e\x\u\d\c\d\5\1\5\r\6\5\s\f\v\q\m\s\d\3\2\k\i\s\d\m\u\x\f\l\b\f\p\1\p\v\t\u\b\7\3\5\4\a\5\d\u\m\1\e\o\r\n\x\g\v\7\j\z\3\d\v\a\y\8\2\k\y\g\n\n\f\s\y\e\e\1\s\r\p\r\9\z\j\2\f\6\k\w\q\l\r\m\w\x\4\0\p\9\h\v\v\4\u\q\h\8\u\5\s\f\g\f\9\f\r\r\t\2\0\e\6\5\m\k\g\4\z\t\k\b\5\w\w\c\3\t\5\c\w\p\3\q\5\p\c\t\8\0\i\r\e\k\8\v\i\e\j\b\s\8\p\t\q\n\j\2\e\j\s\t\i\b\c\6\l\y\5\w\b\6\z\s\i\n\k\h\f\y\r\k\u\e\l\4\o\s\3\2\y\1\a\c\1\g\f\9\b\h\j\1\7\6\c\1\g\2\f\d\y\8\5\0\d\f\r\x\g\z\n\y\k\3\b\u\r\m\6\b\i\8\g\t\b\0\5\c\z\u\k\n\o\t\o\y\z\c\6\f\8\5\0 ]] 00:41:11.732 00:41:11.732 real 0m16.096s 00:41:11.732 user 0m13.006s 00:41:11.732 sys 0m1.976s 00:41:11.732 19:08:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:11.732 19:08:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:11.732 19:08:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:41:11.732 19:08:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:41:11.732 19:08:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:41:11.732 00:41:11.732 real 1m9.000s 00:41:11.732 user 0m54.075s 00:41:11.732 sys 0m8.796s 00:41:11.732 19:08:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:11.732 19:08:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:41:11.732 ************************************ 00:41:11.732 END TEST spdk_dd_posix 00:41:11.732 ************************************ 00:41:11.732 
19:08:12 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:41:11.732 19:08:12 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:11.732 19:08:12 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:11.732 19:08:12 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:41:11.732 ************************************ 00:41:11.732 START TEST spdk_dd_malloc 00:41:11.732 ************************************ 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:41:11.732 * Looking for test storage... 00:41:11.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:41:11.732 ************************************ 00:41:11.732 START TEST dd_malloc_copy 00:41:11.732 ************************************ 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:41:11.732 19:08:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:41:11.732 { 00:41:11.732 "subsystems": [ 00:41:11.732 { 00:41:11.732 "subsystem": "bdev", 00:41:11.732 "config": [ 00:41:11.732 { 00:41:11.732 "params": { 00:41:11.732 "block_size": 512, 00:41:11.732 "num_blocks": 1048576, 00:41:11.732 "name": "malloc0" 00:41:11.732 }, 00:41:11.732 "method": "bdev_malloc_create" 00:41:11.732 }, 00:41:11.732 { 00:41:11.732 "params": { 00:41:11.732 "block_size": 512, 00:41:11.732 "num_blocks": 1048576, 00:41:11.732 "name": "malloc1" 00:41:11.732 }, 00:41:11.732 "method": "bdev_malloc_create" 00:41:11.732 }, 00:41:11.732 { 00:41:11.732 "method": "bdev_wait_for_examine" 00:41:11.732 } 00:41:11.732 ] 00:41:11.732 } 00:41:11.732 ] 00:41:11.732 } 00:41:11.732 [2024-07-25 19:08:12.259685] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:41:11.732 [2024-07-25 19:08:12.259893] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166155 ] 00:41:11.992 [2024-07-25 19:08:12.441928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:12.251 [2024-07-25 19:08:12.632439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:19.910  Copying: 242/512 [MB] (242 MBps) Copying: 485/512 [MB] (243 MBps) Copying: 512/512 [MB] (average 242 MBps) 00:41:19.910 00:41:19.910 19:08:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:41:19.910 19:08:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:41:19.910 19:08:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:41:19.910 19:08:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:41:19.910 { 00:41:19.910 "subsystems": [ 00:41:19.910 { 00:41:19.910 "subsystem": "bdev", 00:41:19.910 "config": [ 00:41:19.910 { 00:41:19.910 "params": { 00:41:19.910 "block_size": 512, 00:41:19.910 "num_blocks": 1048576, 00:41:19.910 "name": "malloc0" 00:41:19.910 }, 00:41:19.910 "method": "bdev_malloc_create" 00:41:19.910 }, 00:41:19.910 { 00:41:19.910 "params": { 00:41:19.910 "block_size": 512, 00:41:19.910 "num_blocks": 1048576, 00:41:19.910 "name": "malloc1" 00:41:19.910 }, 00:41:19.910 "method": "bdev_malloc_create" 00:41:19.910 }, 00:41:19.910 { 00:41:19.910 "method": "bdev_wait_for_examine" 00:41:19.910 } 00:41:19.910 ] 00:41:19.910 } 00:41:19.910 ] 00:41:19.910 } 00:41:19.910 [2024-07-25 19:08:20.196812] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:41:19.910 [2024-07-25 19:08:20.197043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166249 ] 00:41:19.910 [2024-07-25 19:08:20.373395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:20.169 [2024-07-25 19:08:20.566522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:27.826  Copying: 241/512 [MB] (241 MBps) Copying: 484/512 [MB] (242 MBps) Copying: 512/512 [MB] (average 242 MBps) 00:41:27.826 00:41:27.826 00:41:27.826 real 0m15.855s 00:41:27.826 user 0m14.584s 00:41:27.826 sys 0m1.100s 00:41:27.826 19:08:28 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:27.826 19:08:28 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:41:27.826 ************************************ 00:41:27.826 END TEST dd_malloc_copy 00:41:27.826 ************************************ 00:41:27.826 00:41:27.826 real 0m16.031s 00:41:27.826 user 0m14.678s 00:41:27.826 sys 0m1.195s 00:41:27.826 19:08:28 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:27.826 19:08:28 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:41:27.826 ************************************ 00:41:27.826 END TEST spdk_dd_malloc 00:41:27.826 ************************************ 00:41:27.826 19:08:28 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:41:27.826 19:08:28 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:41:27.826 19:08:28 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:27.826 19:08:28 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:41:27.826 ************************************ 00:41:27.826 START TEST spdk_dd_bdev_to_bdev 00:41:27.826 ************************************ 00:41:27.826 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:41:27.826 * Looking for test storage... 
00:41:27.826 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:41:27.826 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:27.826 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:27.826 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:27.826 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:27.826 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:27.826 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:27.826 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:27.826 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:41:27.827 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:27.827 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:41:27.827 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:41:27.827 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:41:27.827 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:41:27.827 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:41:27.827 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 
-- # bdev0=Nvme0n1 00:41:27.827 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:10.0 00:41:27.827 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:41:27.827 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:41:27.827 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:41:27.827 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:41:27.827 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:41:27.827 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:41:27.827 19:08:28 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:41:27.827 [2024-07-25 19:08:28.361955] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:41:27.827 [2024-07-25 19:08:28.362180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166406 ] 00:41:28.086 [2024-07-25 19:08:28.546077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:28.346 [2024-07-25 19:08:28.742465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:30.295  Copying: 256/256 [MB] (average 1248 MBps) 00:41:30.295 00:41:30.295 19:08:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:30.295 19:08:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:30.295 19:08:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:41:30.295 19:08:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:41:30.295 19:08:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:41:30.295 19:08:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:41:30.295 19:08:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:30.295 19:08:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:41:30.295 ************************************ 00:41:30.295 START TEST dd_inflate_file 00:41:30.295 ************************************ 00:41:30.295 19:08:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:41:30.295 [2024-07-25 19:08:30.580711] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:41:30.295 [2024-07-25 19:08:30.580921] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166441 ] 00:41:30.295 [2024-07-25 19:08:30.761849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:30.555 [2024-07-25 19:08:30.957913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:32.195  Copying: 64/64 [MB] (average 1280 MBps) 00:41:32.195 00:41:32.195 00:41:32.195 real 0m2.061s 00:41:32.195 user 0m1.636s 00:41:32.195 sys 0m0.292s 00:41:32.195 19:08:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:32.195 19:08:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:41:32.195 ************************************ 00:41:32.195 END TEST dd_inflate_file 00:41:32.195 ************************************ 00:41:32.195 19:08:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:41:32.195 19:08:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:41:32.195 19:08:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:41:32.195 19:08:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:41:32.195 19:08:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:41:32.195 19:08:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:32.195 19:08:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:41:32.195 19:08:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:41:32.195 19:08:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:41:32.195 ************************************ 00:41:32.195 START TEST dd_copy_to_out_bdev 00:41:32.195 ************************************ 00:41:32.195 19:08:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:41:32.195 { 00:41:32.195 "subsystems": [ 00:41:32.195 { 00:41:32.195 "subsystem": "bdev", 00:41:32.195 "config": [ 00:41:32.195 { 00:41:32.195 "params": { 00:41:32.195 "block_size": 4096, 00:41:32.195 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:41:32.195 "name": "aio1" 00:41:32.195 }, 00:41:32.195 "method": "bdev_aio_create" 00:41:32.195 }, 00:41:32.195 { 00:41:32.195 "params": { 00:41:32.195 "trtype": "pcie", 00:41:32.195 "traddr": "0000:00:10.0", 00:41:32.195 "name": "Nvme0" 00:41:32.195 }, 00:41:32.195 "method": "bdev_nvme_attach_controller" 00:41:32.195 }, 00:41:32.195 { 00:41:32.195 "method": "bdev_wait_for_examine" 00:41:32.195 } 00:41:32.195 ] 00:41:32.195 } 00:41:32.195 ] 00:41:32.195 } 00:41:32.195 [2024-07-25 19:08:32.697608] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:41:32.195 [2024-07-25 19:08:32.697847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166502 ] 00:41:32.455 [2024-07-25 19:08:32.877240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:32.715 [2024-07-25 19:08:33.066476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:35.095  Copying: 64/64 [MB] (average 81 MBps) 00:41:35.096 00:41:35.096 00:41:35.096 real 0m2.924s 00:41:35.096 user 0m2.501s 00:41:35.096 sys 0m0.312s 00:41:35.096 19:08:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:35.096 19:08:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:41:35.096 ************************************ 00:41:35.096 END TEST dd_copy_to_out_bdev 00:41:35.096 ************************************ 00:41:35.096 19:08:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:41:35.096 19:08:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:41:35.096 19:08:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:35.096 19:08:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:35.096 19:08:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:41:35.096 ************************************ 00:41:35.096 START TEST dd_offset_magic 00:41:35.096 ************************************ 00:41:35.096 19:08:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:41:35.096 19:08:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:41:35.096 19:08:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:41:35.096 19:08:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:41:35.096 19:08:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:41:35.096 19:08:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:41:35.096 19:08:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:41:35.096 19:08:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:41:35.096 19:08:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:41:35.355 [2024-07-25 19:08:35.676369] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
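Note: every bdev-to-bdev copy in this section is driven the same way: the test script builds a JSON bdev configuration (an AIO bdev over a local file plus the NVMe controller at 0000:00:10.0) and hands it to spdk_dd on an anonymous descriptor as --json /dev/fd/62. A minimal standalone sketch of the same pattern follows; the config path, heredoc and input file are illustrative and not taken from this run.

    # hypothetical paths; the JSON mirrors the config shown in the log above
    cat > /tmp/dd_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_aio_create",
              "params": { "name": "aio1", "filename": "/tmp/aio1", "block_size": 4096 } },
            { "method": "bdev_nvme_attach_controller",
              "params": { "name": "Nvme0", "trtype": "pcie", "traddr": "0000:00:10.0" } },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    spdk_dd --if=/tmp/dd.dump0 --ob=Nvme0n1 --bs=1048576 --json /tmp/dd_bdev.json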
00:41:35.355 [2024-07-25 19:08:35.676642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166562 ] 00:41:35.355 { 00:41:35.355 "subsystems": [ 00:41:35.355 { 00:41:35.355 "subsystem": "bdev", 00:41:35.355 "config": [ 00:41:35.355 { 00:41:35.355 "params": { 00:41:35.355 "block_size": 4096, 00:41:35.355 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:41:35.355 "name": "aio1" 00:41:35.355 }, 00:41:35.355 "method": "bdev_aio_create" 00:41:35.355 }, 00:41:35.355 { 00:41:35.355 "params": { 00:41:35.355 "trtype": "pcie", 00:41:35.355 "traddr": "0000:00:10.0", 00:41:35.355 "name": "Nvme0" 00:41:35.355 }, 00:41:35.355 "method": "bdev_nvme_attach_controller" 00:41:35.355 }, 00:41:35.355 { 00:41:35.355 "method": "bdev_wait_for_examine" 00:41:35.355 } 00:41:35.355 ] 00:41:35.355 } 00:41:35.355 ] 00:41:35.355 } 00:41:35.355 [2024-07-25 19:08:35.834417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:35.615 [2024-07-25 19:08:36.022315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:37.935  Copying: 65/65 [MB] (average 140 MBps) 00:41:37.935 00:41:37.935 19:08:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:41:37.935 19:08:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:41:37.935 19:08:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:41:37.935 19:08:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:41:37.935 { 00:41:37.935 "subsystems": [ 00:41:37.935 { 00:41:37.935 "subsystem": "bdev", 00:41:37.935 "config": [ 00:41:37.935 { 00:41:37.935 "params": { 00:41:37.935 "block_size": 4096, 00:41:37.935 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:41:37.935 "name": "aio1" 00:41:37.935 }, 00:41:37.935 "method": "bdev_aio_create" 00:41:37.935 }, 00:41:37.935 { 00:41:37.935 "params": { 00:41:37.935 "trtype": "pcie", 00:41:37.935 "traddr": "0000:00:10.0", 00:41:37.935 "name": "Nvme0" 00:41:37.935 }, 00:41:37.935 "method": "bdev_nvme_attach_controller" 00:41:37.935 }, 00:41:37.935 { 00:41:37.935 "method": "bdev_wait_for_examine" 00:41:37.935 } 00:41:37.935 ] 00:41:37.935 } 00:41:37.935 ] 00:41:37.935 } 00:41:37.935 [2024-07-25 19:08:38.489977] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:41:37.935 [2024-07-25 19:08:38.490195] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166599 ] 00:41:38.194 [2024-07-25 19:08:38.667934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:38.454 [2024-07-25 19:08:38.855172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:40.095  Copying: 1024/1024 [kB] (average 1000 MBps) 00:41:40.095 00:41:40.095 19:08:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:41:40.095 19:08:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:41:40.095 19:08:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:41:40.095 19:08:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:41:40.095 19:08:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:41:40.095 19:08:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:41:40.095 19:08:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:41:40.095 { 00:41:40.095 "subsystems": [ 00:41:40.095 { 00:41:40.095 "subsystem": "bdev", 00:41:40.095 "config": [ 00:41:40.095 { 00:41:40.095 "params": { 00:41:40.095 "block_size": 4096, 00:41:40.095 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:41:40.095 "name": "aio1" 00:41:40.095 }, 00:41:40.095 "method": "bdev_aio_create" 00:41:40.095 }, 00:41:40.095 { 00:41:40.095 "params": { 00:41:40.095 "trtype": "pcie", 00:41:40.095 "traddr": "0000:00:10.0", 00:41:40.095 "name": "Nvme0" 00:41:40.095 }, 00:41:40.095 "method": "bdev_nvme_attach_controller" 00:41:40.095 }, 00:41:40.095 { 00:41:40.095 "method": "bdev_wait_for_examine" 00:41:40.095 } 00:41:40.095 ] 00:41:40.095 } 00:41:40.095 ] 00:41:40.095 } 00:41:40.095 [2024-07-25 19:08:40.670563] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:41:40.095 [2024-07-25 19:08:40.670791] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166636 ] 00:41:40.354 [2024-07-25 19:08:40.850081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:40.614 [2024-07-25 19:08:41.046005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:42.933  Copying: 65/65 [MB] (average 171 MBps) 00:41:42.933 00:41:42.933 19:08:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:41:42.933 19:08:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:41:42.934 19:08:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:41:42.934 19:08:43 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:41:42.934 { 00:41:42.934 "subsystems": [ 00:41:42.934 { 00:41:42.934 "subsystem": "bdev", 00:41:42.934 "config": [ 00:41:42.934 { 00:41:42.934 "params": { 00:41:42.934 "block_size": 4096, 00:41:42.934 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:41:42.934 "name": "aio1" 00:41:42.934 }, 00:41:42.934 "method": "bdev_aio_create" 00:41:42.934 }, 00:41:42.934 { 00:41:42.934 "params": { 00:41:42.934 "trtype": "pcie", 00:41:42.934 "traddr": "0000:00:10.0", 00:41:42.934 "name": "Nvme0" 00:41:42.934 }, 00:41:42.934 "method": "bdev_nvme_attach_controller" 00:41:42.934 }, 00:41:42.934 { 00:41:42.934 "method": "bdev_wait_for_examine" 00:41:42.935 } 00:41:42.935 ] 00:41:42.935 } 00:41:42.935 ] 00:41:42.935 } 00:41:42.935 [2024-07-25 19:08:43.199470] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:41:42.935 [2024-07-25 19:08:43.199702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166672 ] 00:41:42.935 [2024-07-25 19:08:43.380986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:43.195 [2024-07-25 19:08:43.587448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:45.142  Copying: 1024/1024 [kB] (average 1000 MBps) 00:41:45.142 00:41:45.142 19:08:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:41:45.142 19:08:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:41:45.142 00:41:45.142 real 0m9.708s 00:41:45.142 user 0m7.337s 00:41:45.142 sys 0m1.188s 00:41:45.142 19:08:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:45.142 19:08:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:41:45.142 ************************************ 00:41:45.142 END TEST dd_offset_magic 00:41:45.142 ************************************ 00:41:45.142 19:08:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:41:45.142 19:08:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:41:45.142 19:08:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:41:45.142 19:08:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:41:45.142 19:08:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:41:45.142 19:08:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:41:45.142 19:08:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:41:45.142 19:08:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:41:45.142 19:08:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:41:45.142 19:08:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:41:45.142 19:08:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:41:45.142 { 00:41:45.142 "subsystems": [ 00:41:45.142 { 00:41:45.142 "subsystem": "bdev", 00:41:45.142 "config": [ 00:41:45.142 { 00:41:45.142 "params": { 00:41:45.142 "block_size": 4096, 00:41:45.142 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:41:45.142 "name": "aio1" 00:41:45.142 }, 00:41:45.142 "method": "bdev_aio_create" 00:41:45.142 }, 00:41:45.142 { 00:41:45.142 "params": { 00:41:45.142 "trtype": "pcie", 00:41:45.142 "traddr": "0000:00:10.0", 00:41:45.142 "name": "Nvme0" 00:41:45.142 }, 00:41:45.142 "method": "bdev_nvme_attach_controller" 00:41:45.142 }, 00:41:45.142 { 00:41:45.142 "method": "bdev_wait_for_examine" 00:41:45.142 } 00:41:45.142 ] 00:41:45.142 } 00:41:45.142 ] 00:41:45.142 } 00:41:45.142 [2024-07-25 19:08:45.462030] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
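The offset-magic loop that just completed pairs each write with a read-back at the same offset: 65 one-MiB units are copied from Nvme0n1 into aio1 at --seek 16 and then 64, and after each write a single unit is read back at the matching --skip so its first 26 bytes can be compared against the magic string written earlier. A sketch of one such pairing, reusing the illustrative config path from the sketch above:

    # write 65 MiB into the AIO bdev starting at MiB offset 16
    spdk_dd --ib=Nvme0n1 --ob=aio1 --bs=1048576 --count=65 --seek=16 --json /tmp/dd_bdev.json
    # read 1 MiB back from the same offset and check the 26-byte magic
    spdk_dd --ib=aio1 --of=/tmp/dd.dump1 --bs=1048576 --count=1 --skip=16 --json /tmp/dd_bdev.json
    read -rn26 magic_check < /tmp/dd.dump1
    [[ $magic_check == 'This Is Our Magic, find it' ]]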
00:41:45.142 [2024-07-25 19:08:45.462242] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166721 ] 00:41:45.142 [2024-07-25 19:08:45.640479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:45.402 [2024-07-25 19:08:45.839190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:47.344  Copying: 5120/5120 [kB] (average 1000 MBps) 00:41:47.344 00:41:47.344 19:08:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:41:47.344 19:08:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=aio1 00:41:47.344 19:08:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:41:47.344 19:08:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:41:47.344 19:08:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:41:47.344 19:08:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:41:47.344 19:08:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:41:47.344 19:08:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:41:47.344 19:08:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:41:47.344 19:08:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:41:47.344 { 00:41:47.344 "subsystems": [ 00:41:47.344 { 00:41:47.344 "subsystem": "bdev", 00:41:47.344 "config": [ 00:41:47.344 { 00:41:47.344 "params": { 00:41:47.344 "block_size": 4096, 00:41:47.344 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:41:47.344 "name": "aio1" 00:41:47.344 }, 00:41:47.344 "method": "bdev_aio_create" 00:41:47.344 }, 00:41:47.344 { 00:41:47.344 "params": { 00:41:47.344 "trtype": "pcie", 00:41:47.344 "traddr": "0000:00:10.0", 00:41:47.344 "name": "Nvme0" 00:41:47.344 }, 00:41:47.344 "method": "bdev_nvme_attach_controller" 00:41:47.344 }, 00:41:47.344 { 00:41:47.344 "method": "bdev_wait_for_examine" 00:41:47.344 } 00:41:47.344 ] 00:41:47.344 } 00:41:47.344 ] 00:41:47.344 } 00:41:47.344 [2024-07-25 19:08:47.623288] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
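The cleanup running here (clear_nvme) zeroes the start of each bdev before the dump files are removed: a clear size of 4194330 bytes at a 1 MiB block size rounds up to five units, hence the --count=5 writes from /dev/zero against Nvme0n1 and aio1. Done by hand it would look roughly like this, with the same illustrative config path as above:

    # 4194330 bytes at bs=1048576 rounds up to count=5 (the first 5 MiB are overwritten)
    spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=5 --json /tmp/dd_bdev.json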
00:41:47.344 [2024-07-25 19:08:47.623491] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166754 ] 00:41:47.344 [2024-07-25 19:08:47.802004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:47.605 [2024-07-25 19:08:47.992188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:49.549  Copying: 5120/5120 [kB] (average 250 MBps) 00:41:49.549 00:41:49.549 19:08:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:41:49.549 00:41:49.549 real 0m21.666s 00:41:49.549 user 0m16.872s 00:41:49.549 sys 0m2.942s 00:41:49.549 19:08:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:49.549 19:08:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:41:49.549 ************************************ 00:41:49.549 END TEST spdk_dd_bdev_to_bdev 00:41:49.549 ************************************ 00:41:49.549 19:08:49 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:41:49.549 19:08:49 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:41:49.549 19:08:49 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:49.549 19:08:49 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:49.549 19:08:49 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:41:49.549 ************************************ 00:41:49.549 START TEST spdk_dd_sparse 00:41:49.549 ************************************ 00:41:49.549 19:08:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:41:49.549 * Looking for test storage... 
00:41:49.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:41:49.549 19:08:50 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:49.549 19:08:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:49.549 19:08:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- 
# lvol=dd_lvol 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:41:49.550 1+0 records in 00:41:49.550 1+0 records out 00:41:49.550 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00791172 s, 530 MB/s 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:41:49.550 1+0 records in 00:41:49.550 1+0 records out 00:41:49.550 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00955357 s, 439 MB/s 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:41:49.550 1+0 records in 00:41:49.550 1+0 records out 00:41:49.550 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00915465 s, 458 MB/s 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:41:49.550 ************************************ 00:41:49.550 START TEST dd_sparse_file_to_file 00:41:49.550 ************************************ 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:41:49.550 19:08:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:41:49.809 { 00:41:49.809 "subsystems": [ 00:41:49.809 { 00:41:49.809 "subsystem": "bdev", 00:41:49.809 "config": [ 00:41:49.809 { 00:41:49.809 "params": { 00:41:49.809 "block_size": 4096, 00:41:49.809 "filename": "dd_sparse_aio_disk", 00:41:49.809 "name": "dd_aio" 00:41:49.809 }, 00:41:49.809 "method": "bdev_aio_create" 00:41:49.809 }, 00:41:49.809 { 00:41:49.809 "params": { 00:41:49.809 "lvs_name": "dd_lvstore", 00:41:49.809 "bdev_name": 
"dd_aio" 00:41:49.809 }, 00:41:49.809 "method": "bdev_lvol_create_lvstore" 00:41:49.809 }, 00:41:49.809 { 00:41:49.809 "method": "bdev_wait_for_examine" 00:41:49.809 } 00:41:49.809 ] 00:41:49.809 } 00:41:49.809 ] 00:41:49.809 } 00:41:49.809 [2024-07-25 19:08:50.153395] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:41:49.809 [2024-07-25 19:08:50.153620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166845 ] 00:41:49.809 [2024-07-25 19:08:50.332836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:50.068 [2024-07-25 19:08:50.520305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:52.017  Copying: 12/36 [MB] (average 1000 MBps) 00:41:52.017 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:41:52.017 00:41:52.017 real 0m2.264s 00:41:52.017 user 0m1.823s 00:41:52.017 sys 0m0.295s 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:41:52.017 ************************************ 00:41:52.017 END TEST dd_sparse_file_to_file 00:41:52.017 ************************************ 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:41:52.017 ************************************ 00:41:52.017 START TEST dd_sparse_file_to_bdev 00:41:52.017 ************************************ 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 
00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:41:52.017 19:08:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:41:52.017 { 00:41:52.017 "subsystems": [ 00:41:52.017 { 00:41:52.017 "subsystem": "bdev", 00:41:52.017 "config": [ 00:41:52.017 { 00:41:52.017 "params": { 00:41:52.017 "block_size": 4096, 00:41:52.017 "filename": "dd_sparse_aio_disk", 00:41:52.017 "name": "dd_aio" 00:41:52.017 }, 00:41:52.017 "method": "bdev_aio_create" 00:41:52.017 }, 00:41:52.017 { 00:41:52.017 "params": { 00:41:52.017 "lvs_name": "dd_lvstore", 00:41:52.017 "lvol_name": "dd_lvol", 00:41:52.017 "size_in_mib": 36, 00:41:52.017 "thin_provision": true 00:41:52.017 }, 00:41:52.017 "method": "bdev_lvol_create" 00:41:52.017 }, 00:41:52.017 { 00:41:52.017 "method": "bdev_wait_for_examine" 00:41:52.017 } 00:41:52.017 ] 00:41:52.017 } 00:41:52.017 ] 00:41:52.017 } 00:41:52.017 [2024-07-25 19:08:52.477363] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:41:52.017 [2024-07-25 19:08:52.478323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166910 ] 00:41:52.276 [2024-07-25 19:08:52.657992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:52.276 [2024-07-25 19:08:52.855947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:54.235  Copying: 12/36 [MB] (average 521 MBps) 00:41:54.235 00:41:54.235 00:41:54.235 real 0m2.200s 00:41:54.235 user 0m1.814s 00:41:54.235 sys 0m0.281s 00:41:54.235 19:08:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:54.235 19:08:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:41:54.235 ************************************ 00:41:54.235 END TEST dd_sparse_file_to_bdev 00:41:54.235 ************************************ 00:41:54.235 19:08:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:41:54.235 19:08:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:54.235 19:08:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:54.235 19:08:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:41:54.235 ************************************ 00:41:54.235 START TEST dd_sparse_bdev_to_file 00:41:54.235 ************************************ 00:41:54.235 19:08:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:41:54.235 19:08:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- 
dd/sparse.sh@81 -- # local stat2_s stat2_b 00:41:54.235 19:08:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:41:54.235 19:08:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:41:54.235 19:08:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:41:54.235 19:08:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:41:54.235 19:08:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:41:54.235 19:08:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:41:54.235 19:08:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:41:54.235 { 00:41:54.235 "subsystems": [ 00:41:54.235 { 00:41:54.235 "subsystem": "bdev", 00:41:54.235 "config": [ 00:41:54.235 { 00:41:54.235 "params": { 00:41:54.235 "block_size": 4096, 00:41:54.235 "filename": "dd_sparse_aio_disk", 00:41:54.235 "name": "dd_aio" 00:41:54.235 }, 00:41:54.235 "method": "bdev_aio_create" 00:41:54.235 }, 00:41:54.235 { 00:41:54.235 "method": "bdev_wait_for_examine" 00:41:54.235 } 00:41:54.235 ] 00:41:54.235 } 00:41:54.235 ] 00:41:54.235 } 00:41:54.235 [2024-07-25 19:08:54.741633] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:41:54.235 [2024-07-25 19:08:54.742614] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166971 ] 00:41:54.494 [2024-07-25 19:08:54.925318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:54.753 [2024-07-25 19:08:55.117092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:56.388  Copying: 12/36 [MB] (average 1000 MBps) 00:41:56.388 00:41:56.388 19:08:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:41:56.388 19:08:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:41:56.388 19:08:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:41:56.388 19:08:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:41:56.388 19:08:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:41:56.388 19:08:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:41:56.388 19:08:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:41:56.388 19:08:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:41:56.388 19:08:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:41:56.388 19:08:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:41:56.388 00:41:56.388 real 0m2.201s 00:41:56.388 user 0m1.814s 00:41:56.388 sys 0m0.283s 00:41:56.388 19:08:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:41:56.388 19:08:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:41:56.388 ************************************ 00:41:56.388 END TEST dd_sparse_bdev_to_file 00:41:56.388 ************************************ 00:41:56.388 19:08:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:41:56.388 19:08:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:41:56.388 19:08:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:41:56.388 19:08:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:41:56.388 19:08:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:41:56.388 00:41:56.388 real 0m7.064s 00:41:56.388 user 0m5.598s 00:41:56.388 sys 0m1.125s 00:41:56.388 19:08:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:56.388 19:08:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:41:56.388 ************************************ 00:41:56.388 END TEST spdk_dd_sparse 00:41:56.388 ************************************ 00:41:56.648 19:08:56 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:41:56.648 19:08:56 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:56.648 19:08:56 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:56.648 19:08:56 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:41:56.648 ************************************ 00:41:56.648 START TEST spdk_dd_negative 00:41:56.648 ************************************ 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:41:56.648 * Looking for test storage... 
00:41:56.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative -- 
dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:56.648 ************************************ 00:41:56.648 START TEST dd_invalid_arguments 00:41:56.648 ************************************ 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:56.648 19:08:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:56.649 19:08:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:56.649 19:08:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:56.649 19:08:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:56.649 19:08:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:56.649 19:08:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:41:56.909 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:41:56.909 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:41:56.909 00:41:56.909 CPU options: 00:41:56.909 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:41:56.909 (like [0,1,10]) 00:41:56.909 --lcores lcore to CPU mapping list. The list is in the format: 00:41:56.909 [<,lcores[@CPUs]>...] 00:41:56.909 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:41:56.909 Within the group, '-' is used for range separator, 00:41:56.909 ',' is used for single number separator. 00:41:56.909 '( )' can be omitted for single element group, 00:41:56.909 '@' can be omitted if cpus and lcores have the same value 00:41:56.909 --disable-cpumask-locks Disable CPU core lock files. 
00:41:56.909 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:41:56.909 pollers in the app support interrupt mode) 00:41:56.909 -p, --main-core main (primary) core for DPDK 00:41:56.909 00:41:56.909 Configuration options: 00:41:56.909 -c, --config, --json JSON config file 00:41:56.909 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:41:56.909 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:41:56.909 --wait-for-rpc wait for RPCs to initialize subsystems 00:41:56.909 --rpcs-allowed comma-separated list of permitted RPCS 00:41:56.909 --json-ignore-init-errors don't exit on invalid config entry 00:41:56.909 00:41:56.909 Memory options: 00:41:56.909 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:41:56.909 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:41:56.909 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:41:56.909 -R, --huge-unlink unlink huge files after initialization 00:41:56.909 -n, --mem-channels number of memory channels used for DPDK 00:41:56.909 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:41:56.909 --msg-mempool-size global message memory pool size in count (default: 262143) 00:41:56.909 --no-huge run without using hugepages 00:41:56.909 -i, --shm-id shared memory ID (optional) 00:41:56.909 -g, --single-file-segments force creating just one hugetlbfs file 00:41:56.909 00:41:56.909 PCI options: 00:41:56.909 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:41:56.909 -B, --pci-blocked pci addr to block (can be used more than once) 00:41:56.909 -u, --no-pci disable PCI access 00:41:56.909 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:41:56.909 00:41:56.909 Log options: 00:41:56.909 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:41:56.909 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:41:56.909 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, 00:41:56.909 bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, 00:41:56.909 blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:41:56.909 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:41:56.909 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:41:56.909 sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, 00:41:56.909 vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, 00:41:56.909 vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:41:56.909 virtio_vfio_user, vmd) 00:41:56.909 --silence-noticelog disable notice level logging to stderr 00:41:56.909 00:41:56.909 Trace options: 00:41:56.909 --num-trace-entries number of trace entries for each core, must be power of 2, 00:41:56.909 setting 0 to disable trace (default 32768) 00:41:56.909 Tracepoints vary in size and can use more than one trace entry. 00:41:56.909 -e, --tpoint-group [:] 00:41:56.909 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:41:56.909 [2024-07-25 19:08:57.252361] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:41:56.909 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:41:56.909 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:41:56.909 a tracepoint group. 
First tpoint inside a group can be enabled by 00:41:56.909 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:41:56.909 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:41:56.909 in /include/spdk_internal/trace_defs.h 00:41:56.909 00:41:56.909 Other options: 00:41:56.909 -h, --help show this usage 00:41:56.909 -v, --version print SPDK version 00:41:56.909 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:41:56.909 --env-context Opaque context for use of the env implementation 00:41:56.909 00:41:56.909 Application specific: 00:41:56.909 [--------- DD Options ---------] 00:41:56.909 --if Input file. Must specify either --if or --ib. 00:41:56.909 --ib Input bdev. Must specifier either --if or --ib 00:41:56.909 --of Output file. Must specify either --of or --ob. 00:41:56.909 --ob Output bdev. Must specify either --of or --ob. 00:41:56.909 --iflag Input file flags. 00:41:56.909 --oflag Output file flags. 00:41:56.909 --bs I/O unit size (default: 4096) 00:41:56.909 --qd Queue depth (default: 2) 00:41:56.909 --count I/O unit count. The number of I/O units to copy. (default: all) 00:41:56.909 --skip Skip this many I/O units at start of input. (default: 0) 00:41:56.909 --seek Skip this many I/O units at start of output. (default: 0) 00:41:56.909 --aio Force usage of AIO. (by default io_uring is used if available) 00:41:56.909 --sparse Enable hole skipping in input target 00:41:56.909 Available iflag and oflag values: 00:41:56.909 append - append mode 00:41:56.909 direct - use direct I/O for data 00:41:56.909 directory - fail unless a directory 00:41:56.909 dsync - use synchronized I/O for data 00:41:56.909 noatime - do not update access time 00:41:56.909 noctty - do not assign controlling terminal from file 00:41:56.909 nofollow - do not follow symlinks 00:41:56.909 nonblock - use non-blocking I/O 00:41:56.909 sync - use synchronized I/O for data and metadata 00:41:56.909 19:08:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:41:56.909 19:08:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:56.909 19:08:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:56.909 19:08:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:56.910 00:41:56.910 real 0m0.162s 00:41:56.910 user 0m0.093s 00:41:56.910 sys 0m0.064s 00:41:56.910 19:08:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:56.910 19:08:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:41:56.910 ************************************ 00:41:56.910 END TEST dd_invalid_arguments 00:41:56.910 ************************************ 00:41:56.910 19:08:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:41:56.910 19:08:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:56.910 19:08:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:56.910 19:08:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:56.910 ************************************ 00:41:56.910 START TEST dd_double_input 00:41:56.910 ************************************ 00:41:56.910 19:08:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 
00:41:56.910 19:08:57 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:41:56.910 19:08:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:41:56.910 19:08:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:41:56.910 19:08:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:56.910 19:08:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:56.910 19:08:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:56.910 19:08:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:56.910 19:08:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:56.910 19:08:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:56.910 19:08:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:56.910 19:08:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:56.910 19:08:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:41:56.910 [2024-07-25 19:08:57.457224] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:57.170 00:41:57.170 real 0m0.148s 00:41:57.170 user 0m0.065s 00:41:57.170 sys 0m0.082s 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:41:57.170 ************************************ 00:41:57.170 END TEST dd_double_input 00:41:57.170 ************************************ 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:57.170 ************************************ 00:41:57.170 START TEST dd_double_output 00:41:57.170 ************************************ 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:41:57.170 [2024-07-25 19:08:57.675661] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:57.170 00:41:57.170 real 0m0.142s 00:41:57.170 user 0m0.071s 00:41:57.170 sys 0m0.072s 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:57.170 ************************************ 00:41:57.170 END TEST dd_double_output 00:41:57.170 19:08:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:41:57.170 ************************************ 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:57.430 ************************************ 00:41:57.430 START TEST dd_no_input 00:41:57.430 ************************************ 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:41:57.430 [2024-07-25 19:08:57.887142] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:57.430 00:41:57.430 real 0m0.145s 00:41:57.430 user 0m0.089s 00:41:57.430 sys 0m0.057s 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:57.430 ************************************ 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:41:57.430 END TEST dd_no_input 00:41:57.430 ************************************ 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:57.430 19:08:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:57.690 ************************************ 00:41:57.690 START TEST dd_no_output 00:41:57.690 ************************************ 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:57.690 [2024-07-25 19:08:58.105035] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:41:57.690 19:08:58 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:57.690 00:41:57.690 real 0m0.151s 00:41:57.690 user 0m0.085s 00:41:57.690 sys 0m0.066s 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:41:57.690 ************************************ 00:41:57.690 END TEST dd_no_output 00:41:57.690 ************************************ 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:57.690 ************************************ 00:41:57.690 START TEST dd_wrong_blocksize 00:41:57.690 ************************************ 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:57.690 19:08:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:41:57.950 [2024-07-25 19:08:58.328530] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:41:57.950 19:08:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:41:57.950 19:08:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:57.950 19:08:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:57.950 19:08:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:57.950 00:41:57.950 real 0m0.149s 00:41:57.950 user 0m0.073s 00:41:57.950 sys 0m0.077s 00:41:57.950 19:08:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:57.950 19:08:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:41:57.950 ************************************ 00:41:57.950 END TEST dd_wrong_blocksize 00:41:57.950 ************************************ 00:41:57.950 19:08:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:41:57.950 19:08:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:57.950 19:08:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:57.950 19:08:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:41:57.950 ************************************ 00:41:57.950 START TEST dd_smaller_blocksize 00:41:57.950 ************************************ 00:41:57.950 19:08:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:41:57.950 19:08:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:41:57.950 19:08:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:41:57.950 19:08:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:41:57.950 19:08:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:57.950 19:08:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:57.950 19:08:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:57.950 19:08:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:57.950 19:08:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:57.950 19:08:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:57.950 19:08:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:57.950 
19:08:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:57.950 19:08:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:41:58.210 [2024-07-25 19:08:58.553738] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:41:58.210 [2024-07-25 19:08:58.554012] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167245 ] 00:41:58.210 [2024-07-25 19:08:58.736847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:58.469 [2024-07-25 19:08:59.001266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:59.036 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:41:59.295 [2024-07-25 19:08:59.675885] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:41:59.295 [2024-07-25 19:08:59.676207] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:42:00.241 [2024-07-25 19:09:00.475975] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:42:00.528 19:09:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:42:00.528 19:09:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:00.528 19:09:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:42:00.528 19:09:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:42:00.528 19:09:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:42:00.528 19:09:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:00.528 00:42:00.528 real 0m2.438s 00:42:00.528 user 0m1.834s 00:42:00.528 sys 0m0.502s 00:42:00.528 19:09:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:00.528 19:09:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:42:00.528 ************************************ 00:42:00.528 END TEST dd_smaller_blocksize 00:42:00.528 ************************************ 00:42:00.529 19:09:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:42:00.529 19:09:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:00.529 19:09:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:00.529 19:09:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:00.529 ************************************ 00:42:00.529 START TEST dd_invalid_count 00:42:00.529 ************************************ 00:42:00.529 19:09:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 00:42:00.529 19:09:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:42:00.529 19:09:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:42:00.529 19:09:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:42:00.529 19:09:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:00.529 19:09:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:00.529 19:09:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:00.529 19:09:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:00.529 19:09:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:00.529 19:09:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:00.529 19:09:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:00.529 19:09:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:00.529 19:09:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:42:00.529 [2024-07-25 19:09:01.048562] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:42:00.529 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:42:00.529 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:00.529 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:00.788 00:42:00.788 real 0m0.141s 00:42:00.788 user 0m0.051s 00:42:00.788 sys 0m0.089s 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:42:00.788 ************************************ 00:42:00.788 END TEST dd_invalid_count 00:42:00.788 ************************************ 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:00.788 ************************************ 00:42:00.788 START TEST dd_invalid_oflag 00:42:00.788 ************************************ 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # 
invalid_oflag 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:42:00.788 [2024-07-25 19:09:01.256230] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:00.788 00:42:00.788 real 0m0.143s 00:42:00.788 user 0m0.072s 00:42:00.788 sys 0m0.071s 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:00.788 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:42:00.788 ************************************ 00:42:00.788 END TEST dd_invalid_oflag 00:42:00.788 ************************************ 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:01.047 ************************************ 00:42:01.047 START TEST dd_invalid_iflag 00:42:01.047 ************************************ 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:42:01.047 19:09:01 
spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:42:01.047 [2024-07-25 19:09:01.459205] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:01.047 00:42:01.047 real 0m0.117s 00:42:01.047 user 0m0.061s 00:42:01.047 sys 0m0.057s 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:42:01.047 ************************************ 00:42:01.047 END TEST dd_invalid_iflag 00:42:01.047 ************************************ 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:01.047 ************************************ 00:42:01.047 START TEST dd_unknown_flag 00:42:01.047 ************************************ 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- 
dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:01.047 19:09:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:42:01.306 [2024-07-25 19:09:01.661110] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:42:01.306 [2024-07-25 19:09:01.661345] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167383 ] 00:42:01.306 [2024-07-25 19:09:01.845805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:01.563 [2024-07-25 19:09:02.037010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:01.821 [2024-07-25 19:09:02.358757] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:42:01.821 [2024-07-25 19:09:02.358852] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:42:01.821  Copying: 0/0 [B] (average 0 Bps)[2024-07-25 19:09:02.359012] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:42:02.755 [2024-07-25 19:09:03.124622] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:42:03.013 00:42:03.013 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:03.271 00:42:03.271 real 0m2.048s 00:42:03.271 user 0m1.643s 00:42:03.271 sys 0m0.252s 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:42:03.271 ************************************ 00:42:03.271 END TEST dd_unknown_flag 00:42:03.271 ************************************ 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:03.271 ************************************ 00:42:03.271 START TEST dd_invalid_json 00:42:03.271 ************************************ 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:03.271 19:09:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:42:03.271 [2024-07-25 19:09:03.772303] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:42:03.271 [2024-07-25 19:09:03.772542] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167432 ] 00:42:03.529 [2024-07-25 19:09:03.956590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:03.787 [2024-07-25 19:09:04.156230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:03.787 [2024-07-25 19:09:04.156315] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:42:03.787 [2024-07-25 19:09:04.156368] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:42:03.787 [2024-07-25 19:09:04.156399] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:42:03.787 [2024-07-25 19:09:04.156456] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:42:04.046 19:09:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:42:04.046 19:09:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:04.046 19:09:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:42:04.046 19:09:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:42:04.046 19:09:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:42:04.046 19:09:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:04.046 00:42:04.046 real 0m0.890s 00:42:04.046 user 0m0.635s 00:42:04.046 sys 0m0.158s 00:42:04.046 19:09:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:04.046 19:09:04 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:42:04.046 ************************************ 
00:42:04.046 END TEST dd_invalid_json 00:42:04.046 ************************************ 00:42:04.305 00:42:04.305 real 0m7.616s 00:42:04.305 user 0m5.204s 00:42:04.305 sys 0m2.075s 00:42:04.305 19:09:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:04.305 19:09:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:04.305 ************************************ 00:42:04.305 END TEST spdk_dd_negative 00:42:04.305 ************************************ 00:42:04.305 00:42:04.305 real 2m54.787s 00:42:04.305 user 2m19.296s 00:42:04.305 sys 0m25.079s 00:42:04.305 19:09:04 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:04.305 19:09:04 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:42:04.305 ************************************ 00:42:04.305 END TEST spdk_dd 00:42:04.305 ************************************ 00:42:04.305 19:09:04 -- spdk/autotest.sh@215 -- # '[' 1 -eq 1 ']' 00:42:04.305 19:09:04 -- spdk/autotest.sh@216 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:42:04.305 19:09:04 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:42:04.305 19:09:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:04.305 19:09:04 -- common/autotest_common.sh@10 -- # set +x 00:42:04.305 ************************************ 00:42:04.305 START TEST blockdev_nvme 00:42:04.305 ************************************ 00:42:04.305 19:09:04 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:42:04.305 * Looking for test storage... 00:42:04.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:42:04.305 19:09:04 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@686 -- 
# '[' -n '' ']' 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=167530 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 167530 00:42:04.305 19:09:04 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:42:04.305 19:09:04 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 167530 ']' 00:42:04.305 19:09:04 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:04.305 19:09:04 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:04.305 19:09:04 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:04.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:04.305 19:09:04 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:04.305 19:09:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:04.563 [2024-07-25 19:09:04.966579] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:42:04.563 [2024-07-25 19:09:04.966799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167530 ] 00:42:04.822 [2024-07-25 19:09:05.146239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:04.822 [2024-07-25 19:09:05.330035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:05.757 19:09:06 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:05.757 19:09:06 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:42:05.757 19:09:06 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:42:05.757 19:09:06 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:42:05.757 19:09:06 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:42:05.757 19:09:06 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:42:05.757 19:09:06 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:42:05.757 19:09:06 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:42:05.757 19:09:06 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:05.757 19:09:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:05.757 19:09:06 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:05.757 19:09:06 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:42:05.757 19:09:06 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:05.757 19:09:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:05.757 19:09:06 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:05.757 19:09:06 
blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:42:05.757 19:09:06 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:42:05.757 19:09:06 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:05.757 19:09:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:05.757 19:09:06 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:05.757 19:09:06 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:42:05.757 19:09:06 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:05.757 19:09:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:05.757 19:09:06 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:05.757 19:09:06 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:42:05.757 19:09:06 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:05.757 19:09:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:05.757 19:09:06 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:05.757 19:09:06 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:42:05.757 19:09:06 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:42:05.757 19:09:06 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:05.757 19:09:06 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:42:05.758 19:09:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:05.758 19:09:06 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:05.758 19:09:06 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:42:05.758 19:09:06 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "84a6ce51-4545-457f-9738-183b1d8a40bf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "84a6ce51-4545-457f-9738-183b1d8a40bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:42:05.758 19:09:06 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:42:06.016 19:09:06 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:42:06.016 19:09:06 blockdev_nvme -- 
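Note on the JSON above: this is the bdev_get_bdevs description of the Nvme0n1 bdev the remaining blockdev tests run against (4096-byte blocks, 1310720 blocks, QEMU NVMe controller at 0000:00:10.0). Outside this harness, roughly the same record can be fetched from a running SPDK target with the bundled RPC client; the socket path is the default mentioned in the usage text earlier, and the exact jq filter is only an illustration:

    # Illustration only: query the same bdev description from a live target.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        bdev_get_bdevs -b Nvme0n1 | jq -r '.[0] | {name, block_size, num_blocks}'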
bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:42:06.016 19:09:06 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:42:06.016 19:09:06 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 167530 00:42:06.016 19:09:06 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 167530 ']' 00:42:06.016 19:09:06 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 167530 00:42:06.016 19:09:06 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:42:06.016 19:09:06 blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:06.016 19:09:06 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 167530 00:42:06.016 19:09:06 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:06.016 19:09:06 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:06.016 19:09:06 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 167530' 00:42:06.016 killing process with pid 167530 00:42:06.016 19:09:06 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 167530 00:42:06.016 19:09:06 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 167530 00:42:08.549 19:09:08 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:42:08.549 19:09:08 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:42:08.549 19:09:08 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:42:08.549 19:09:08 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:08.549 19:09:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:08.549 ************************************ 00:42:08.549 START TEST bdev_hello_world 00:42:08.549 ************************************ 00:42:08.549 19:09:08 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:42:08.549 [2024-07-25 19:09:08.925738] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:42:08.549 [2024-07-25 19:09:08.925982] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167620 ] 00:42:08.549 [2024-07-25 19:09:09.105105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:08.807 [2024-07-25 19:09:09.290775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:09.375 [2024-07-25 19:09:09.745981] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:42:09.375 [2024-07-25 19:09:09.746073] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:42:09.375 [2024-07-25 19:09:09.746110] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:42:09.375 [2024-07-25 19:09:09.748904] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:42:09.375 [2024-07-25 19:09:09.749557] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:42:09.375 [2024-07-25 19:09:09.749594] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:42:09.375 [2024-07-25 19:09:09.749812] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:42:09.375 00:42:09.375 [2024-07-25 19:09:09.749856] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:42:10.751 ************************************ 00:42:10.751 END TEST bdev_hello_world 00:42:10.751 ************************************ 00:42:10.751 00:42:10.751 real 0m2.295s 00:42:10.751 user 0m1.916s 00:42:10.751 sys 0m0.280s 00:42:10.751 19:09:11 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:10.751 19:09:11 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:42:10.751 19:09:11 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:42:10.751 19:09:11 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:42:10.751 19:09:11 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:10.751 19:09:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:10.751 ************************************ 00:42:10.751 START TEST bdev_bounds 00:42:10.751 ************************************ 00:42:10.751 19:09:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:42:10.751 19:09:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=167664 00:42:10.751 19:09:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:42:10.751 Process bdevio pid: 167664 00:42:10.751 19:09:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 167664' 00:42:10.751 19:09:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 167664 00:42:10.751 19:09:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 167664 ']' 00:42:10.751 19:09:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:10.751 19:09:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:10.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:10.751 19:09:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:10.751 19:09:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:10.751 19:09:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:42:10.751 19:09:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:42:10.751 [2024-07-25 19:09:11.290050] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:42:10.752 [2024-07-25 19:09:11.290497] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167664 ] 00:42:11.011 [2024-07-25 19:09:11.492090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:11.270 [2024-07-25 19:09:11.696615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:11.270 [2024-07-25 19:09:11.696646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:11.270 [2024-07-25 19:09:11.696643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:42:11.838 19:09:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:11.838 19:09:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:42:11.838 19:09:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:42:11.838 I/O targets: 00:42:11.838 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:42:11.838 00:42:11.838 00:42:11.838 CUnit - A unit testing framework for C - Version 2.1-3 00:42:11.838 http://cunit.sourceforge.net/ 00:42:11.838 00:42:11.838 00:42:11.838 Suite: bdevio tests on: Nvme0n1 00:42:11.838 Test: blockdev write read block ...passed 00:42:11.838 Test: blockdev write zeroes read block ...passed 00:42:11.838 Test: blockdev write zeroes read no split ...passed 00:42:11.838 Test: blockdev write zeroes read split ...passed 00:42:11.838 Test: blockdev write zeroes read split partial ...passed 00:42:11.838 Test: blockdev reset ...[2024-07-25 19:09:12.352880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:42:11.838 [2024-07-25 19:09:12.357009] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:42:11.838 passed 00:42:11.838 Test: blockdev write read 8 blocks ...passed 00:42:11.838 Test: blockdev write read size > 128k ...passed 00:42:11.838 Test: blockdev write read invalid size ...passed 00:42:11.838 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:11.838 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:11.838 Test: blockdev write read max offset ...passed 00:42:11.838 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:11.838 Test: blockdev writev readv 8 blocks ...passed 00:42:11.838 Test: blockdev writev readv 30 x 1block ...passed 00:42:11.838 Test: blockdev writev readv block ...passed 00:42:11.838 Test: blockdev writev readv size > 128k ...passed 00:42:11.838 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:11.838 Test: blockdev comparev and writev ...[2024-07-25 19:09:12.367057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x3660d000 len:0x1000 00:42:11.838 [2024-07-25 19:09:12.367134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:42:11.838 passed 00:42:11.838 Test: blockdev nvme passthru rw ...passed 00:42:11.838 Test: blockdev nvme passthru vendor specific ...[2024-07-25 19:09:12.367999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:42:11.838 passed 00:42:11.838 Test: blockdev nvme admin passthru ...[2024-07-25 19:09:12.368043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:42:11.838 passed 00:42:11.838 Test: blockdev copy ...passed 00:42:11.838 00:42:11.838 Run Summary: Type Total Ran Passed Failed Inactive 00:42:11.838 suites 1 1 n/a 0 0 00:42:11.838 tests 23 23 23 0 0 00:42:11.838 asserts 152 152 152 0 n/a 00:42:11.838 00:42:11.838 Elapsed time = 0.275 seconds 00:42:11.838 0 00:42:11.838 19:09:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 167664 00:42:11.838 19:09:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 167664 ']' 00:42:11.838 19:09:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 167664 00:42:11.838 19:09:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:42:11.838 19:09:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:11.838 19:09:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 167664 00:42:12.097 19:09:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:12.097 19:09:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:12.097 19:09:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 167664' 00:42:12.097 killing process with pid 167664 00:42:12.097 19:09:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 167664 00:42:12.097 19:09:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 167664 00:42:13.474 19:09:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:42:13.474 00:42:13.474 real 0m2.563s 00:42:13.474 user 0m5.806s 00:42:13.474 sys 0m0.392s 00:42:13.474 19:09:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:13.474 
19:09:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:42:13.474 ************************************ 00:42:13.474 END TEST bdev_bounds 00:42:13.474 ************************************ 00:42:13.474 19:09:13 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:42:13.474 19:09:13 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:42:13.474 19:09:13 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:13.474 19:09:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:13.474 ************************************ 00:42:13.474 START TEST bdev_nbd 00:42:13.474 ************************************ 00:42:13.474 19:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1') 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1') 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=167735 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 167735 /var/tmp/spdk-nbd.sock 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 167735 ']' 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:13.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:13.475 19:09:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:42:13.475 [2024-07-25 19:09:13.900334] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:42:13.475 [2024-07-25 19:09:13.900682] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:13.734 [2024-07-25 19:09:14.059324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:13.734 [2024-07-25 19:09:14.260717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:14.301 19:09:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:14.301 19:09:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:42:14.301 19:09:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:42:14.301 19:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:14.301 19:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:42:14.301 19:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:42:14.301 19:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:42:14.301 19:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:14.301 19:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:42:14.301 19:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:42:14.301 19:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:42:14.301 19:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:42:14.301 19:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:42:14.301 19:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:42:14.301 19:09:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:42:14.560 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:42:14.560 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:42:14.560 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:42:14.560 19:09:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:42:14.560 19:09:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:42:14.560 19:09:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:42:14.560 19:09:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:42:14.560 19:09:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:42:14.560 19:09:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:42:14.560 19:09:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:42:14.560 19:09:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:42:14.560 19:09:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:14.560 1+0 records in 00:42:14.560 1+0 records out 00:42:14.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040274 s, 10.2 MB/s 00:42:14.560 19:09:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:14.560 19:09:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:42:14.560 19:09:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:14.560 19:09:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:42:14.560 19:09:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:42:14.560 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:42:14.560 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:42:14.560 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:14.819 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:42:14.819 { 00:42:14.819 "nbd_device": "/dev/nbd0", 00:42:14.819 "bdev_name": "Nvme0n1" 00:42:14.819 } 00:42:14.819 ]' 00:42:14.819 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:42:14.819 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:42:14.819 { 00:42:14.819 "nbd_device": "/dev/nbd0", 00:42:14.819 "bdev_name": "Nvme0n1" 00:42:14.819 } 00:42:14.819 ]' 00:42:14.819 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:42:14.819 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:42:14.819 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:14.819 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:42:14.819 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:14.819 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:42:14.819 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:14.819 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:42:15.078 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:15.078 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:15.078 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:15.079 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:15.079 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:15.079 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:15.079 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:42:15.079 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:42:15.079 19:09:15 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:15.079 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:15.079 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:15.338 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:42:15.338 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:42:15.338 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1') 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:15.597 19:09:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:42:15.856 /dev/nbd0 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 
-- # (( i = 1 )) 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:15.856 1+0 records in 00:42:15.856 1+0 records out 00:42:15.856 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530826 s, 7.7 MB/s 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:42:15.856 { 00:42:15.856 "nbd_device": "/dev/nbd0", 00:42:15.856 "bdev_name": "Nvme0n1" 00:42:15.856 } 00:42:15.856 ]' 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:15.856 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:42:15.856 { 00:42:15.856 "nbd_device": "/dev/nbd0", 00:42:15.856 "bdev_name": "Nvme0n1" 00:42:15.856 } 00:42:15.856 ]' 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- 
# local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:42:16.116 256+0 records in 00:42:16.116 256+0 records out 00:42:16.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104322 s, 101 MB/s 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:42:16.116 256+0 records in 00:42:16.116 256+0 records out 00:42:16.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0633448 s, 16.6 MB/s 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:16.116 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:42:16.375 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:16.375 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:16.375 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:16.375 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:16.375 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:16.375 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:16.376 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:42:16.376 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:42:16.376 19:09:16 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:16.376 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:16.376 19:09:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:16.635 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:42:16.635 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:16.635 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:42:16.635 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:42:16.635 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:42:16.635 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:16.635 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:42:16.635 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:42:16.635 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:42:16.635 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:42:16.635 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:42:16.635 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:42:16.635 19:09:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:42:16.635 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:16.635 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:42:16.635 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:42:16.635 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:42:16.635 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:42:16.893 malloc_lvol_verify 00:42:16.894 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:42:17.152 bab85087-a82d-4890-9a8a-7b630b816538 00:42:17.152 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:42:17.410 89564305-468e-4628-a336-14ca997c18e1 00:42:17.410 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:42:17.410 /dev/nbd0 00:42:17.410 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:42:17.410 mke2fs 1.46.5 (30-Dec-2021) 00:42:17.410 00:42:17.410 Filesystem too small for a journal 00:42:17.410 Discarding device blocks: 0/1024 done 00:42:17.410 Creating filesystem with 1024 4k blocks and 1024 inodes 00:42:17.410 00:42:17.410 Allocating group tables: 0/1 done 00:42:17.410 Writing inode tables: 0/1 done 00:42:17.410 Writing superblocks and filesystem accounting information: 0/1 done 00:42:17.410 00:42:17.410 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:42:17.410 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
00:42:17.410 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:17.410 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:42:17.410 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:17.410 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:42:17.410 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:17.410 19:09:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:42:17.668 19:09:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:17.668 19:09:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:17.668 19:09:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:17.668 19:09:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:17.668 19:09:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:17.668 19:09:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:17.668 19:09:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:42:17.668 19:09:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:42:17.668 19:09:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:42:17.668 19:09:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:42:17.668 19:09:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 167735 00:42:17.668 19:09:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 167735 ']' 00:42:17.668 19:09:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 167735 00:42:17.668 19:09:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:42:17.668 19:09:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:17.668 19:09:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 167735 00:42:17.668 19:09:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:17.668 19:09:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:17.668 19:09:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 167735' 00:42:17.668 killing process with pid 167735 00:42:17.668 19:09:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 167735 00:42:17.668 19:09:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 167735 00:42:19.046 ************************************ 00:42:19.046 END TEST bdev_nbd 00:42:19.046 ************************************ 00:42:19.046 19:09:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:42:19.046 00:42:19.046 real 0m5.562s 00:42:19.046 user 0m7.558s 00:42:19.046 sys 0m1.480s 00:42:19.046 19:09:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:19.046 19:09:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:42:19.046 19:09:19 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:42:19.046 19:09:19 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:42:19.046 19:09:19 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:42:19.046 skipping fio tests on NVMe due to multi-ns failures. 00:42:19.046 19:09:19 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:42:19.046 19:09:19 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:42:19.046 19:09:19 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:42:19.046 19:09:19 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:19.046 19:09:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:19.046 ************************************ 00:42:19.046 START TEST bdev_verify 00:42:19.046 ************************************ 00:42:19.046 19:09:19 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:42:19.046 [2024-07-25 19:09:19.529659] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:42:19.046 [2024-07-25 19:09:19.530014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167922 ] 00:42:19.305 [2024-07-25 19:09:19.692739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:19.564 [2024-07-25 19:09:19.925309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:19.564 [2024-07-25 19:09:19.925312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:20.130 Running I/O for 5 seconds... 
00:42:25.393 00:42:25.393 Latency(us) 00:42:25.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:25.393 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:42:25.393 Verification LBA range: start 0x0 length 0xa0000 00:42:25.393 Nvme0n1 : 5.01 8996.98 35.14 0.00 0.00 14159.69 905.02 27712.37 00:42:25.393 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:42:25.393 Verification LBA range: start 0xa0000 length 0xa0000 00:42:25.393 Nvme0n1 : 5.01 9078.51 35.46 0.00 0.00 14030.44 1178.09 21595.67 00:42:25.393 =================================================================================================================== 00:42:25.393 Total : 18075.49 70.61 0.00 0.00 14094.79 905.02 27712.37 00:42:26.769 00:42:26.769 real 0m7.579s 00:42:26.769 user 0m13.768s 00:42:26.769 sys 0m0.324s 00:42:26.769 19:09:27 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:26.769 19:09:27 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:42:26.769 ************************************ 00:42:26.769 END TEST bdev_verify 00:42:26.769 ************************************ 00:42:26.769 19:09:27 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:42:26.769 19:09:27 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:42:26.769 19:09:27 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:26.769 19:09:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:26.769 ************************************ 00:42:26.769 START TEST bdev_verify_big_io 00:42:26.769 ************************************ 00:42:26.769 19:09:27 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:42:26.769 [2024-07-25 19:09:27.192473] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:42:26.769 [2024-07-25 19:09:27.192701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168038 ] 00:42:27.028 [2024-07-25 19:09:27.377110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:27.286 [2024-07-25 19:09:27.627849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:27.286 [2024-07-25 19:09:27.627850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:27.854 Running I/O for 5 seconds... 
00:42:33.191 00:42:33.191 Latency(us) 00:42:33.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:33.191 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:42:33.191 Verification LBA range: start 0x0 length 0xa000 00:42:33.191 Nvme0n1 : 5.10 627.29 39.21 0.00 0.00 197998.60 458.36 225693.50 00:42:33.191 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:42:33.191 Verification LBA range: start 0xa000 length 0xa000 00:42:33.191 Nvme0n1 : 5.10 593.01 37.06 0.00 0.00 208759.36 1544.78 363506.35 00:42:33.191 =================================================================================================================== 00:42:33.191 Total : 1220.30 76.27 0.00 0.00 203225.94 458.36 363506.35 00:42:34.567 00:42:34.567 real 0m7.738s 00:42:34.567 user 0m14.020s 00:42:34.567 sys 0m0.332s 00:42:34.567 19:09:34 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:34.567 19:09:34 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:42:34.567 ************************************ 00:42:34.567 END TEST bdev_verify_big_io 00:42:34.567 ************************************ 00:42:34.567 19:09:34 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:34.567 19:09:34 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:42:34.567 19:09:34 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:34.567 19:09:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:34.567 ************************************ 00:42:34.567 START TEST bdev_write_zeroes 00:42:34.567 ************************************ 00:42:34.567 19:09:34 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:34.567 [2024-07-25 19:09:35.008924] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:42:34.567 [2024-07-25 19:09:35.009179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168140 ] 00:42:34.825 [2024-07-25 19:09:35.201642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:35.084 [2024-07-25 19:09:35.444491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:35.650 Running I/O for 1 seconds... 
00:42:36.597 00:42:36.597 Latency(us) 00:42:36.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:36.597 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:42:36.597 Nvme0n1 : 1.00 69690.80 272.23 0.00 0.00 1832.48 573.44 11609.23 00:42:36.597 =================================================================================================================== 00:42:36.597 Total : 69690.80 272.23 0.00 0.00 1832.48 573.44 11609.23 00:42:37.975 00:42:37.975 real 0m3.519s 00:42:37.975 user 0m3.069s 00:42:37.975 sys 0m0.348s 00:42:37.975 19:09:38 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:37.975 19:09:38 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:42:37.975 ************************************ 00:42:37.975 END TEST bdev_write_zeroes 00:42:37.975 ************************************ 00:42:37.975 19:09:38 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:37.975 19:09:38 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:42:37.975 19:09:38 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:37.975 19:09:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:37.975 ************************************ 00:42:37.975 START TEST bdev_json_nonenclosed 00:42:37.975 ************************************ 00:42:37.975 19:09:38 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:38.234 [2024-07-25 19:09:38.589917] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:42:38.234 [2024-07-25 19:09:38.590174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168209 ] 00:42:38.234 [2024-07-25 19:09:38.776146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:38.493 [2024-07-25 19:09:39.023721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:38.493 [2024-07-25 19:09:39.023840] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:42:38.493 [2024-07-25 19:09:39.023911] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:42:38.493 [2024-07-25 19:09:39.023939] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:42:39.062 00:42:39.062 real 0m1.020s 00:42:39.062 user 0m0.720s 00:42:39.062 sys 0m0.200s 00:42:39.062 19:09:39 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:39.062 19:09:39 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:42:39.062 ************************************ 00:42:39.062 END TEST bdev_json_nonenclosed 00:42:39.062 ************************************ 00:42:39.062 19:09:39 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:39.062 19:09:39 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:42:39.062 19:09:39 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:39.062 19:09:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:39.062 ************************************ 00:42:39.062 START TEST bdev_json_nonarray 00:42:39.062 ************************************ 00:42:39.062 19:09:39 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:39.321 [2024-07-25 19:09:39.685275] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:42:39.321 [2024-07-25 19:09:39.685534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168248 ] 00:42:39.321 [2024-07-25 19:09:39.873302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:39.581 [2024-07-25 19:09:40.119233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:39.581 [2024-07-25 19:09:40.119365] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:42:39.581 [2024-07-25 19:09:40.119420] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:42:39.581 [2024-07-25 19:09:40.119450] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:42:40.149 00:42:40.149 real 0m1.024s 00:42:40.149 user 0m0.724s 00:42:40.149 sys 0m0.200s 00:42:40.149 19:09:40 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:40.149 ************************************ 00:42:40.149 END TEST bdev_json_nonarray 00:42:40.149 19:09:40 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:42:40.149 ************************************ 00:42:40.149 19:09:40 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:42:40.149 19:09:40 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:42:40.149 19:09:40 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:42:40.149 19:09:40 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:42:40.149 19:09:40 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:42:40.149 19:09:40 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:42:40.149 19:09:40 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:42:40.149 19:09:40 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:42:40.149 19:09:40 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:42:40.149 19:09:40 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:42:40.149 19:09:40 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:42:40.149 ************************************ 00:42:40.149 END TEST blockdev_nvme 00:42:40.149 ************************************ 00:42:40.149 00:42:40.149 real 0m35.950s 00:42:40.149 user 0m51.918s 00:42:40.149 sys 0m4.439s 00:42:40.149 19:09:40 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:40.149 19:09:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:40.408 19:09:40 -- spdk/autotest.sh@217 -- # uname -s 00:42:40.408 19:09:40 -- spdk/autotest.sh@217 -- # [[ Linux == Linux ]] 00:42:40.408 19:09:40 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:42:40.408 19:09:40 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:42:40.408 19:09:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:40.408 19:09:40 -- common/autotest_common.sh@10 -- # set +x 00:42:40.408 ************************************ 00:42:40.408 START TEST blockdev_nvme_gpt 00:42:40.408 ************************************ 00:42:40.408 19:09:40 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:42:40.408 * Looking for test storage... 
00:42:40.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=168333 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 168333 00:42:40.408 19:09:40 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:42:40.408 19:09:40 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 168333 ']' 00:42:40.408 19:09:40 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:40.408 19:09:40 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:40.408 19:09:40 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:40.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:42:40.408 19:09:40 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:40.408 19:09:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:42:40.668 [2024-07-25 19:09:41.012703] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:42:40.668 [2024-07-25 19:09:41.012930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168333 ] 00:42:40.668 [2024-07-25 19:09:41.198207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:40.927 [2024-07-25 19:09:41.433497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:41.869 19:09:42 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:41.869 19:09:42 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:42:41.869 19:09:42 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:42:41.869 19:09:42 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:42:41.869 19:09:42 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:42:42.127 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:42:42.385 Waiting for block devices as requested 00:42:42.386 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:42:42.386 19:09:42 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:42:42.386 19:09:42 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:42:42.386 19:09:42 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:42:42.386 19:09:42 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:42:42.386 19:09:42 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:42:42.386 19:09:42 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:42:42.386 19:09:42 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:42:42.386 19:09:42 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:42.386 19:09:42 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:42:42.386 19:09:42 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1') 00:42:42.386 19:09:42 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:42:42.386 19:09:42 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:42:42.386 19:09:42 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:42:42.386 19:09:42 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:42:42.386 19:09:42 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:42:42.386 19:09:42 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:42:42.386 19:09:42 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:42:42.386 BYT; 00:42:42.386 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:42:42.386 19:09:42 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:42:42.386 BYT; 00:42:42.386 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ 
\l\a\b\e\l* ]] 00:42:42.386 19:09:42 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:42:42.386 19:09:42 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:42:42.386 19:09:42 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:42:42.386 19:09:42 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:42:42.386 19:09:42 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:42:42.386 19:09:42 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:42:42.956 19:09:43 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:42:42.956 19:09:43 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:42:42.957 19:09:43 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:42:42.957 19:09:43 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:42:42.957 19:09:43 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:42:42.957 19:09:43 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:42:42.957 19:09:43 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:42:42.957 19:09:43 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:42:42.957 19:09:43 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:42:42.957 19:09:43 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:42:42.957 19:09:43 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:42:42.957 19:09:43 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:42:42.957 19:09:43 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:42:42.957 19:09:43 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:42:42.957 19:09:43 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:42:42.957 19:09:43 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:42:42.957 19:09:43 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:42:42.957 19:09:43 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:42:42.957 19:09:43 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:42:42.957 19:09:43 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:42:42.957 19:09:43 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:42:42.958 19:09:43 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:42:42.958 19:09:43 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:42:43.895 The operation has completed successfully. 
00:42:43.895 19:09:44 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:42:44.831 The operation has completed successfully. 00:42:44.831 19:09:45 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:42:45.400 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:42:45.400 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:42:46.338 19:09:46 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:42:46.338 19:09:46 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:46.338 19:09:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:42:46.338 [] 00:42:46.338 19:09:46 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:46.338 19:09:46 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:42:46.338 19:09:46 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:42:46.338 19:09:46 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:42:46.338 19:09:46 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:42:46.338 19:09:46 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:42:46.338 19:09:46 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:46.338 19:09:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:42:46.338 19:09:46 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:46.338 19:09:46 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:42:46.338 19:09:46 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:46.338 19:09:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:42:46.338 19:09:46 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:46.338 19:09:46 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:42:46.338 19:09:46 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:42:46.338 19:09:46 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:46.338 19:09:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:42:46.338 19:09:46 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:46.338 19:09:46 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:42:46.338 19:09:46 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:46.338 19:09:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:42:46.338 19:09:46 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:46.338 19:09:46 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:42:46.338 19:09:46 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:46.338 19:09:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:42:46.597 19:09:46 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:46.597 19:09:46 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:42:46.597 19:09:46 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:42:46.598 
19:09:46 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:46.598 19:09:46 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:42:46.598 19:09:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:42:46.598 19:09:46 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:46.598 19:09:46 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:42:46.598 19:09:46 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:42:46.598 19:09:46 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:42:46.598 19:09:47 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:42:46.598 19:09:47 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1p1 00:42:46.598 19:09:47 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:42:46.598 19:09:47 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 168333 00:42:46.598 19:09:47 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 168333 ']' 00:42:46.598 19:09:47 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 168333 00:42:46.598 19:09:47 blockdev_nvme_gpt -- 
common/autotest_common.sh@955 -- # uname 00:42:46.598 19:09:47 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:46.598 19:09:47 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 168333 00:42:46.598 19:09:47 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:46.598 19:09:47 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:46.598 killing process with pid 168333 00:42:46.598 19:09:47 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 168333' 00:42:46.598 19:09:47 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 168333 00:42:46.598 19:09:47 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 168333 00:42:49.888 19:09:49 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:42:49.888 19:09:49 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:42:49.888 19:09:49 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:42:49.888 19:09:49 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:49.888 19:09:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:42:49.888 ************************************ 00:42:49.888 START TEST bdev_hello_world 00:42:49.888 ************************************ 00:42:49.888 19:09:49 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:42:49.888 [2024-07-25 19:09:49.900436] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:42:49.888 [2024-07-25 19:09:49.900668] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168780 ] 00:42:49.888 [2024-07-25 19:09:50.089212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:49.888 [2024-07-25 19:09:50.335792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:50.456 [2024-07-25 19:09:50.847420] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:42:50.456 [2024-07-25 19:09:50.847502] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:42:50.456 [2024-07-25 19:09:50.847546] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:42:50.456 [2024-07-25 19:09:50.850606] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:42:50.456 [2024-07-25 19:09:50.851382] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:42:50.456 [2024-07-25 19:09:50.851425] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:42:50.456 [2024-07-25 19:09:50.851691] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
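(Note, not part of the log: the bdev_hello_world test above runs the hello_bdev example against the first GPT partition. Invocation as used in this run, with paths specific to this VM:)
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1
# Opens bdev Nvme0n1p1, writes "Hello World!", reads it back (the NOTICE lines above),
# then stops the app.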
00:42:50.456 00:42:50.456 [2024-07-25 19:09:50.851750] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:42:51.836 00:42:51.836 real 0m2.559s 00:42:51.836 user 0m2.095s 00:42:51.836 sys 0m0.364s 00:42:51.836 19:09:52 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:51.836 19:09:52 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:42:51.836 ************************************ 00:42:51.836 END TEST bdev_hello_world 00:42:51.836 ************************************ 00:42:52.095 19:09:52 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:42:52.095 19:09:52 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:42:52.095 19:09:52 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:52.095 19:09:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:42:52.095 ************************************ 00:42:52.095 START TEST bdev_bounds 00:42:52.095 ************************************ 00:42:52.095 19:09:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:42:52.095 19:09:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=168831 00:42:52.095 19:09:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:42:52.095 Process bdevio pid: 168831 00:42:52.095 19:09:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 168831' 00:42:52.095 19:09:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 168831 00:42:52.095 19:09:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:42:52.096 19:09:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 168831 ']' 00:42:52.096 19:09:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:52.096 19:09:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:52.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:52.096 19:09:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:52.096 19:09:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:52.096 19:09:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:42:52.096 [2024-07-25 19:09:52.520939] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:42:52.096 [2024-07-25 19:09:52.521169] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168831 ] 00:42:52.355 [2024-07-25 19:09:52.711638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:52.614 [2024-07-25 19:09:52.953409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:52.614 [2024-07-25 19:09:52.953592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:52.614 [2024-07-25 19:09:52.953593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:42:53.180 19:09:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:53.180 19:09:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:42:53.180 19:09:53 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:42:53.180 I/O targets: 00:42:53.180 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:42:53.181 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:42:53.181 00:42:53.181 00:42:53.181 CUnit - A unit testing framework for C - Version 2.1-3 00:42:53.181 http://cunit.sourceforge.net/ 00:42:53.181 00:42:53.181 00:42:53.181 Suite: bdevio tests on: Nvme0n1p2 00:42:53.181 Test: blockdev write read block ...passed 00:42:53.181 Test: blockdev write zeroes read block ...passed 00:42:53.181 Test: blockdev write zeroes read no split ...passed 00:42:53.181 Test: blockdev write zeroes read split ...passed 00:42:53.181 Test: blockdev write zeroes read split partial ...passed 00:42:53.181 Test: blockdev reset ...[2024-07-25 19:09:53.651221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:42:53.181 [2024-07-25 19:09:53.655656] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:42:53.181 passed 00:42:53.181 Test: blockdev write read 8 blocks ...passed 00:42:53.181 Test: blockdev write read size > 128k ...passed 00:42:53.181 Test: blockdev write read invalid size ...passed 00:42:53.181 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:53.181 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:53.181 Test: blockdev write read max offset ...passed 00:42:53.181 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:53.181 Test: blockdev writev readv 8 blocks ...passed 00:42:53.181 Test: blockdev writev readv 30 x 1block ...passed 00:42:53.181 Test: blockdev writev readv block ...passed 00:42:53.181 Test: blockdev writev readv size > 128k ...passed 00:42:53.181 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:53.181 Test: blockdev comparev and writev ...[2024-07-25 19:09:53.664948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x840d000 len:0x1000 00:42:53.181 [2024-07-25 19:09:53.665037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:42:53.181 passed 00:42:53.181 Test: blockdev nvme passthru rw ...passed 00:42:53.181 Test: blockdev nvme passthru vendor specific ...passed 00:42:53.181 Test: blockdev nvme admin passthru ...passed 00:42:53.181 Test: blockdev copy ...passed 00:42:53.181 Suite: bdevio tests on: Nvme0n1p1 00:42:53.181 Test: blockdev write read block ...passed 00:42:53.181 Test: blockdev write zeroes read block ...passed 00:42:53.181 Test: blockdev write zeroes read no split ...passed 00:42:53.181 Test: blockdev write zeroes read split ...passed 00:42:53.181 Test: blockdev write zeroes read split partial ...passed 00:42:53.181 Test: blockdev reset ...[2024-07-25 19:09:53.733457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:42:53.181 [2024-07-25 19:09:53.737482] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:42:53.181 passed 00:42:53.181 Test: blockdev write read 8 blocks ...passed 00:42:53.181 Test: blockdev write read size > 128k ...passed 00:42:53.181 Test: blockdev write read invalid size ...passed 00:42:53.181 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:53.181 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:53.181 Test: blockdev write read max offset ...passed 00:42:53.181 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:53.181 Test: blockdev writev readv 8 blocks ...passed 00:42:53.181 Test: blockdev writev readv 30 x 1block ...passed 00:42:53.181 Test: blockdev writev readv block ...passed 00:42:53.181 Test: blockdev writev readv size > 128k ...passed 00:42:53.181 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:53.181 Test: blockdev comparev and writev ...[2024-07-25 19:09:53.747687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x8409000 len:0x1000 00:42:53.181 [2024-07-25 19:09:53.747862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:42:53.181 passed 00:42:53.181 Test: blockdev nvme passthru rw ...passed 00:42:53.181 Test: blockdev nvme passthru vendor specific ...passed 00:42:53.181 Test: blockdev nvme admin passthru ...passed 00:42:53.181 Test: blockdev copy ...passed 00:42:53.181 00:42:53.181 Run Summary: Type Total Ran Passed Failed Inactive 00:42:53.181 suites 2 2 n/a 0 0 00:42:53.181 tests 46 46 46 0 0 00:42:53.181 asserts 284 284 284 0 n/a 00:42:53.181 00:42:53.181 Elapsed time = 0.423 seconds 00:42:53.181 0 00:42:53.438 19:09:53 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 168831 00:42:53.438 19:09:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 168831 ']' 00:42:53.438 19:09:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 168831 00:42:53.438 19:09:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:42:53.438 19:09:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:53.438 19:09:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 168831 00:42:53.438 killing process with pid 168831 00:42:53.438 19:09:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:53.438 19:09:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:53.438 19:09:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 168831' 00:42:53.438 19:09:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 168831 00:42:53.438 19:09:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 168831 00:42:54.812 ************************************ 00:42:54.812 END TEST bdev_bounds 00:42:54.812 ************************************ 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:42:54.812 00:42:54.812 real 0m2.707s 00:42:54.812 user 0m6.063s 00:42:54.812 sys 0m0.458s 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:42:54.812 19:09:55 blockdev_nvme_gpt -- 
bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:42:54.812 19:09:55 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:42:54.812 19:09:55 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:54.812 19:09:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:42:54.812 ************************************ 00:42:54.812 START TEST bdev_nbd 00:42:54.812 ************************************ 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=2 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=2 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=168902 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 168902 /var/tmp/spdk-nbd.sock 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 168902 ']' 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:42:54.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:54.812 19:09:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:42:54.812 [2024-07-25 19:09:55.291359] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:42:54.812 [2024-07-25 19:09:55.291515] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:55.070 [2024-07-25 19:09:55.455856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:55.328 [2024-07-25 19:09:55.665298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:55.587 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:55.588 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:42:55.588 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:42:55.588 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:55.588 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:42:55.588 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:42:55.588 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:42:55.588 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:55.588 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:42:55.588 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:42:55.588 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:42:55.588 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:42:55.588 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:42:55.588 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:42:55.588 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:42:55.847 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:42:55.847 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:42:55.847 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:42:55.847 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:42:55.847 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:42:55.847 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:42:55.847 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:42:55.847 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:42:55.847 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:42:55.847 19:09:56 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:42:55.847 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:42:55.847 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:55.847 1+0 records in 00:42:55.847 1+0 records out 00:42:55.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580053 s, 7.1 MB/s 00:42:55.847 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:55.847 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:42:55.847 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:55.847 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:42:55.847 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:42:55.847 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:42:55.847 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:42:55.847 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:42:56.106 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:42:56.106 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:42:56.106 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:42:56.106 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:42:56.106 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:42:56.106 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:42:56.106 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:42:56.106 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:42:56.106 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:42:56.106 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:42:56.106 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:42:56.106 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:56.106 1+0 records in 00:42:56.106 1+0 records out 00:42:56.106 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623575 s, 6.6 MB/s 00:42:56.106 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:56.106 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:42:56.106 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:56.106 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:42:56.106 19:09:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:42:56.106 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:42:56.106 19:09:56 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:42:56.106 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:56.365 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:42:56.365 { 00:42:56.365 "nbd_device": "/dev/nbd0", 00:42:56.365 "bdev_name": "Nvme0n1p1" 00:42:56.365 }, 00:42:56.365 { 00:42:56.365 "nbd_device": "/dev/nbd1", 00:42:56.365 "bdev_name": "Nvme0n1p2" 00:42:56.365 } 00:42:56.365 ]' 00:42:56.365 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:42:56.365 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:42:56.365 { 00:42:56.365 "nbd_device": "/dev/nbd0", 00:42:56.365 "bdev_name": "Nvme0n1p1" 00:42:56.365 }, 00:42:56.365 { 00:42:56.365 "nbd_device": "/dev/nbd1", 00:42:56.365 "bdev_name": "Nvme0n1p2" 00:42:56.365 } 00:42:56.365 ]' 00:42:56.365 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:42:56.624 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:42:56.624 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:56.624 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:56.624 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:56.624 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:42:56.624 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:56.624 19:09:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:42:56.883 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:56.883 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:56.883 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:56.883 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:56.883 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:56.883 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:56.883 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:42:56.883 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:42:56.883 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:56.883 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:42:56.883 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:56.883 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:56.883 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:42:56.883 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:56.883 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:56.883 19:09:57 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:57.141 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:42:57.141 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:42:57.141 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:57.141 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:57.141 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:57.400 19:09:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme0n1p1 /dev/nbd0 00:42:57.658 /dev/nbd0 00:42:57.658 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:57.658 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:57.658 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:42:57.658 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:42:57.658 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:42:57.658 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:42:57.658 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:42:57.658 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:42:57.658 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:42:57.658 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:42:57.658 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:57.658 1+0 records in 00:42:57.658 1+0 records out 00:42:57.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516868 s, 7.9 MB/s 00:42:57.658 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:57.658 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:42:57.658 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:57.658 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:42:57.658 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:42:57.658 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:57.658 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:57.658 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:42:57.917 /dev/nbd1 00:42:57.917 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:42:57.917 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:42:57.917 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:42:57.917 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:42:57.917 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:42:57.917 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:42:57.917 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:42:57.917 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:42:57.917 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:42:57.917 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:42:57.917 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:57.917 
1+0 records in 00:42:57.917 1+0 records out 00:42:57.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455509 s, 9.0 MB/s 00:42:57.917 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:57.917 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:42:57.917 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:57.917 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:42:57.917 19:09:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:42:57.917 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:57.917 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:57.917 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:57.917 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:57.917 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:58.176 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:42:58.176 { 00:42:58.176 "nbd_device": "/dev/nbd0", 00:42:58.176 "bdev_name": "Nvme0n1p1" 00:42:58.176 }, 00:42:58.176 { 00:42:58.176 "nbd_device": "/dev/nbd1", 00:42:58.176 "bdev_name": "Nvme0n1p2" 00:42:58.176 } 00:42:58.176 ]' 00:42:58.176 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:58.176 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:42:58.176 { 00:42:58.176 "nbd_device": "/dev/nbd0", 00:42:58.176 "bdev_name": "Nvme0n1p1" 00:42:58.176 }, 00:42:58.176 { 00:42:58.176 "nbd_device": "/dev/nbd1", 00:42:58.176 "bdev_name": "Nvme0n1p2" 00:42:58.176 } 00:42:58.176 ]' 00:42:58.176 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:42:58.176 /dev/nbd1' 00:42:58.176 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:42:58.176 /dev/nbd1' 00:42:58.176 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:58.176 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:42:58.176 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:42:58.176 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:42:58.176 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:42:58.176 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:42:58.176 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:58.176 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:42:58.176 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:42:58.176 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:42:58.176 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:42:58.176 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:42:58.176 256+0 records in 00:42:58.176 256+0 records out 00:42:58.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.005488 s, 191 MB/s 00:42:58.176 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:42:58.176 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:42:58.176 256+0 records in 00:42:58.176 256+0 records out 00:42:58.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.087499 s, 12.0 MB/s 00:42:58.176 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:42:58.176 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:42:58.435 256+0 records in 00:42:58.435 256+0 records out 00:42:58.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0806385 s, 13.0 MB/s 00:42:58.436 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:42:58.436 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:58.436 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:42:58.436 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:42:58.436 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:42:58.436 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:42:58.436 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:42:58.436 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:42:58.436 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:42:58.436 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:42:58.436 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:42:58.436 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:42:58.436 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:42:58.436 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:58.436 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:58.436 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:58.436 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:42:58.436 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:58.436 19:09:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:42:58.695 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:58.695 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:58.695 19:09:59 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:58.695 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:58.695 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:58.695 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:58.695 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:42:58.695 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:42:58.695 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:58.695 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:42:58.955 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:58.955 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:58.955 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:42:58.955 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:58.955 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:58.955 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:58.955 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:42:58.955 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:42:58.955 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:58.955 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:58.955 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:59.215 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:42:59.215 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:42:59.215 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:59.215 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:42:59.215 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:42:59.215 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:59.215 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:42:59.215 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:42:59.215 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:42:59.215 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:42:59.215 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:42:59.215 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:42:59.215 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:42:59.215 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:59.215 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:59.215 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local 
nbd_list 00:42:59.215 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:42:59.215 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:42:59.474 malloc_lvol_verify 00:42:59.474 19:09:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:42:59.761 a3078d1f-861d-4116-9bdc-7a4418366d70 00:42:59.761 19:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:43:00.045 bc279763-1322-4f7a-b4a8-13d93d93c1fc 00:43:00.045 19:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:43:00.045 /dev/nbd0 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:43:00.304 mke2fs 1.46.5 (30-Dec-2021) 00:43:00.304 00:43:00.304 Filesystem too small for a journal 00:43:00.304 Discarding device blocks: 0/1024 done 00:43:00.304 Creating filesystem with 1024 4k blocks and 1024 inodes 00:43:00.304 00:43:00.304 Allocating group tables: 0/1 done 00:43:00.304 Writing inode tables: 0/1 done 00:43:00.304 Writing superblocks and filesystem accounting information: 0/1 done 00:43:00.304 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 168902 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@950 -- # '[' -z 168902 ']' 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 168902 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 168902 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 168902' 00:43:00.304 killing process with pid 168902 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 168902 00:43:00.304 19:10:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 168902 00:43:01.687 19:10:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:43:01.687 00:43:01.687 real 0m6.875s 00:43:01.687 user 0m9.273s 00:43:01.687 sys 0m2.157s 00:43:01.687 19:10:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:01.687 19:10:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:43:01.687 ************************************ 00:43:01.687 END TEST bdev_nbd 00:43:01.687 ************************************ 00:43:01.687 19:10:02 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:43:01.687 19:10:02 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:43:01.687 19:10:02 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:43:01.687 skipping fio tests on NVMe due to multi-ns failures. 00:43:01.687 19:10:02 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:43:01.688 19:10:02 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:43:01.688 19:10:02 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:43:01.688 19:10:02 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:43:01.688 19:10:02 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:01.688 19:10:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:01.688 ************************************ 00:43:01.688 START TEST bdev_verify 00:43:01.688 ************************************ 00:43:01.688 19:10:02 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:43:01.688 [2024-07-25 19:10:02.248459] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:43:01.688 [2024-07-25 19:10:02.248680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169150 ] 00:43:01.953 [2024-07-25 19:10:02.432663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:02.211 [2024-07-25 19:10:02.694558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:02.211 [2024-07-25 19:10:02.694560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:02.777 Running I/O for 5 seconds... 00:43:08.040 00:43:08.040 Latency(us) 00:43:08.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:08.040 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:43:08.040 Verification LBA range: start 0x0 length 0x4ff80 00:43:08.040 Nvme0n1p1 : 5.03 3871.17 15.12 0.00 0.00 32972.58 3542.06 29085.50 00:43:08.040 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:43:08.040 Verification LBA range: start 0x4ff80 length 0x4ff80 00:43:08.040 Nvme0n1p1 : 5.02 4052.51 15.83 0.00 0.00 31492.07 6491.18 29085.50 00:43:08.040 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:43:08.040 Verification LBA range: start 0x0 length 0x4ff7f 00:43:08.040 Nvme0n1p2 : 5.03 3870.24 15.12 0.00 0.00 32905.16 3042.74 30833.13 00:43:08.040 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:43:08.040 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:43:08.040 Nvme0n1p2 : 5.02 4050.80 15.82 0.00 0.00 31459.07 5804.62 26963.38 00:43:08.040 =================================================================================================================== 00:43:08.040 Total : 15844.73 61.89 0.00 0.00 32190.75 3042.74 30833.13 00:43:09.415 00:43:09.415 real 0m7.671s 00:43:09.415 user 0m13.864s 00:43:09.415 sys 0m0.335s 00:43:09.415 19:10:09 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:09.415 19:10:09 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:43:09.415 ************************************ 00:43:09.415 END TEST bdev_verify 00:43:09.415 ************************************ 00:43:09.415 19:10:09 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:43:09.415 19:10:09 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:43:09.415 19:10:09 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:09.415 19:10:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:09.415 ************************************ 00:43:09.415 START TEST bdev_verify_big_io 00:43:09.415 ************************************ 00:43:09.415 19:10:09 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:43:09.415 [2024-07-25 19:10:09.988771] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
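The verify numbers above come from bdevperf driving the two GPT partition bdevs described in bdev.json. A minimal way to rerun it by hand uses the same flags as the traced command: -q is the per-job queue depth, -o the I/O size in bytes, -w the workload, -t the run time in seconds, and -m 0x3 the reactor core mask (two cores, hence the reactors on core 0 and core 1); -C, kept from the trace, lets every core in the mask drive each bdev, which is why each partition shows one job per core. The Total row is simply the sum of the four jobs (3871.17 + 4052.51 + 3870.24 + 4050.80 ≈ 15844.73 IOPS).

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3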
00:43:09.415 [2024-07-25 19:10:09.988990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169256 ] 00:43:09.674 [2024-07-25 19:10:10.178314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:09.932 [2024-07-25 19:10:10.437142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:09.932 [2024-07-25 19:10:10.437117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:10.498 Running I/O for 5 seconds... 00:43:17.058 00:43:17.058 Latency(us) 00:43:17.058 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:17.058 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:43:17.058 Verification LBA range: start 0x0 length 0x4ff8 00:43:17.058 Nvme0n1p1 : 5.13 299.26 18.70 0.00 0.00 414375.87 6459.98 477351.74 00:43:17.058 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:43:17.058 Verification LBA range: start 0x4ff8 length 0x4ff8 00:43:17.058 Nvme0n1p1 : 5.34 286.97 17.94 0.00 0.00 433237.40 7801.90 499321.90 00:43:17.058 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:43:17.058 Verification LBA range: start 0x0 length 0x4ff7 00:43:17.058 Nvme0n1p2 : 5.33 317.07 19.82 0.00 0.00 374978.81 682.67 483343.60 00:43:17.058 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:43:17.058 Verification LBA range: start 0x4ff7 length 0x4ff7 00:43:17.058 Nvme0n1p2 : 5.34 287.48 17.97 0.00 0.00 411338.95 893.32 491332.75 00:43:17.058 =================================================================================================================== 00:43:17.058 Total : 1190.78 74.42 0.00 0.00 407642.07 682.67 499321.90 00:43:17.626 00:43:17.626 real 0m8.214s 00:43:17.626 user 0m14.893s 00:43:17.626 sys 0m0.391s 00:43:17.626 19:10:18 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:17.626 19:10:18 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:43:17.626 ************************************ 00:43:17.626 END TEST bdev_verify_big_io 00:43:17.626 ************************************ 00:43:17.626 19:10:18 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:17.626 19:10:18 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:43:17.626 19:10:18 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:17.626 19:10:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:17.626 ************************************ 00:43:17.626 START TEST bdev_write_zeroes 00:43:17.626 ************************************ 00:43:17.626 19:10:18 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:17.885 [2024-07-25 19:10:18.267514] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
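Same harness for the big-I/O pass, only with -o 65536, so throughput follows directly from IOPS at a 64 KiB request size; a quick check of the Total row:

  awk 'BEGIN { printf "%.2f MiB/s\n", 1190.78 * 65536 / 1048576 }'   # prints 74.42, matching the table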
00:43:17.885 [2024-07-25 19:10:18.267710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169373 ] 00:43:17.885 [2024-07-25 19:10:18.449017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:18.143 [2024-07-25 19:10:18.687911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:18.709 Running I/O for 1 seconds... 00:43:20.083 00:43:20.083 Latency(us) 00:43:20.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:20.083 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:43:20.083 Nvme0n1p1 : 1.00 26802.91 104.70 0.00 0.00 4764.68 2917.91 12982.37 00:43:20.083 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:43:20.083 Nvme0n1p2 : 1.01 26782.48 104.62 0.00 0.00 4766.61 2122.12 10360.93 00:43:20.083 =================================================================================================================== 00:43:20.083 Total : 53585.39 209.32 0.00 0.00 4765.65 2122.12 12982.37 00:43:21.461 00:43:21.462 real 0m3.487s 00:43:21.462 user 0m3.057s 00:43:21.462 sys 0m0.329s 00:43:21.462 19:10:21 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:21.462 19:10:21 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:43:21.462 ************************************ 00:43:21.462 END TEST bdev_write_zeroes 00:43:21.462 ************************************ 00:43:21.462 19:10:21 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:21.462 19:10:21 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:43:21.462 19:10:21 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:21.462 19:10:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:21.462 ************************************ 00:43:21.462 START TEST bdev_json_nonenclosed 00:43:21.462 ************************************ 00:43:21.462 19:10:21 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:21.462 [2024-07-25 19:10:21.834175] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:43:21.462 [2024-07-25 19:10:21.834497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169436 ] 00:43:21.462 [2024-07-25 19:10:22.018898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:21.720 [2024-07-25 19:10:22.270287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:21.720 [2024-07-25 19:10:22.270416] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
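bdev_json_nonenclosed is a negative test: bdevperf is handed a JSON config whose top-level value is not an object, and the pass criterion is exactly the json_config_prepare_ctx "not enclosed in {}" error above followed by a non-zero app exit. The shipped fixture (test/bdev/nonenclosed.json) is not reproduced in the log; a hypothetical config of the same shape, shown only for illustration, fails the same way:

  # top-level value is an array, not an object -> rejected by json_config_prepare_ctx
  printf '[ { "subsystems": [] } ]\n' > /tmp/nonenclosed.json
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/nonenclosed.json \
      -q 128 -o 4096 -w write_zeroes -t 1 || echo 'rejected: config not enclosed in {}'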
00:43:21.720 [2024-07-25 19:10:22.270466] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:43:21.720 [2024-07-25 19:10:22.270495] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:22.287 00:43:22.287 real 0m1.021s 00:43:22.287 user 0m0.729s 00:43:22.287 sys 0m0.192s 00:43:22.287 19:10:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:22.287 ************************************ 00:43:22.287 19:10:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:43:22.287 END TEST bdev_json_nonenclosed 00:43:22.287 ************************************ 00:43:22.287 19:10:22 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:22.287 19:10:22 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:43:22.287 19:10:22 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:22.287 19:10:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:22.287 ************************************ 00:43:22.287 START TEST bdev_json_nonarray 00:43:22.287 ************************************ 00:43:22.287 19:10:22 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:22.547 [2024-07-25 19:10:22.916270] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:43:22.547 [2024-07-25 19:10:22.916512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169475 ] 00:43:22.547 [2024-07-25 19:10:23.098115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:22.806 [2024-07-25 19:10:23.345638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:22.806 [2024-07-25 19:10:23.345782] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
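bdev_json_nonarray is the complementary negative test: the top level is a proper object, but its "subsystems" member is not an array, which trips the second json_config_prepare_ctx error above. Again the actual fixture (test/bdev/nonarray.json) is not shown in the log; an illustrative stand-in:

  # "subsystems" is an object instead of an array -> rejected
  printf '{ "subsystems": {} }\n' > /tmp/nonarray.json
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/nonarray.json \
      -q 128 -o 4096 -w write_zeroes -t 1 || echo "rejected: 'subsystems' is not an array"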
00:43:22.806 [2024-07-25 19:10:23.345836] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:43:22.806 [2024-07-25 19:10:23.345865] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:23.372 00:43:23.372 real 0m1.002s 00:43:23.372 user 0m0.709s 00:43:23.372 sys 0m0.193s 00:43:23.372 19:10:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:23.372 19:10:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:43:23.372 ************************************ 00:43:23.372 END TEST bdev_json_nonarray 00:43:23.372 ************************************ 00:43:23.372 19:10:23 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:43:23.372 19:10:23 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:43:23.372 19:10:23 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:43:23.372 19:10:23 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:23.372 19:10:23 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:23.372 19:10:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:23.372 ************************************ 00:43:23.372 START TEST bdev_gpt_uuid 00:43:23.372 ************************************ 00:43:23.372 19:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:43:23.372 19:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:43:23.372 19:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:43:23.372 19:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=169507 00:43:23.372 19:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:43:23.372 19:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:43:23.372 19:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 169507 00:43:23.372 19:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 169507 ']' 00:43:23.372 19:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:23.372 19:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:23.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:23.372 19:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:23.372 19:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:23.372 19:10:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:43:23.630 [2024-07-25 19:10:24.014640] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
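The gpt_uuid test that follows is a metadata round-trip: spdk_tgt loads the same bdev.json, and the test fetches each GPT partition bdev by the UUID it expects and asserts that the first alias and driver_specific.gpt.unique_partition_guid both echo that UUID back. rpc_cmd in the trace is the harness's thin wrapper around rpc.py talking to this spdk_tgt, so by hand the check reduces to roughly:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
  $RPC bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
      | jq -r '.[0].aliases[0], .[0].driver_specific.gpt.unique_partition_guid'
  # both lines must print 6f89f330-603b-4116-ac73-2ca8eae53030 (SPDK_TEST_first);
  # the second partition is checked the same way against abf1734f-66e5-4c0f-aa29-4021d4d307df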
00:43:23.630 [2024-07-25 19:10:24.014873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169507 ] 00:43:23.630 [2024-07-25 19:10:24.196892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:23.888 [2024-07-25 19:10:24.415988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:24.823 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:24.823 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:43:24.823 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:24.823 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:24.823 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:43:24.823 Some configs were skipped because the RPC state that can call them passed over. 00:43:24.823 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:24.823 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:43:24.823 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:24.823 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:43:24.823 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:24.823 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:43:24.823 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:24.823 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:43:24.823 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:24.823 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:43:24.823 { 00:43:24.823 "name": "Nvme0n1p1", 00:43:24.823 "aliases": [ 00:43:24.823 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:43:24.823 ], 00:43:24.823 "product_name": "GPT Disk", 00:43:24.823 "block_size": 4096, 00:43:24.823 "num_blocks": 655104, 00:43:24.823 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:43:24.823 "assigned_rate_limits": { 00:43:24.823 "rw_ios_per_sec": 0, 00:43:24.824 "rw_mbytes_per_sec": 0, 00:43:24.824 "r_mbytes_per_sec": 0, 00:43:24.824 "w_mbytes_per_sec": 0 00:43:24.824 }, 00:43:24.824 "claimed": false, 00:43:24.824 "zoned": false, 00:43:24.824 "supported_io_types": { 00:43:24.824 "read": true, 00:43:24.824 "write": true, 00:43:24.824 "unmap": true, 00:43:24.824 "flush": true, 00:43:24.824 "reset": true, 00:43:24.824 "nvme_admin": false, 00:43:24.824 "nvme_io": false, 00:43:24.824 "nvme_io_md": false, 00:43:24.824 "write_zeroes": true, 00:43:24.824 "zcopy": false, 00:43:24.824 "get_zone_info": false, 00:43:24.824 "zone_management": false, 00:43:24.824 "zone_append": false, 00:43:24.824 "compare": true, 00:43:24.824 "compare_and_write": false, 00:43:24.824 "abort": true, 00:43:24.824 "seek_hole": false, 00:43:24.824 "seek_data": false, 00:43:24.824 "copy": true, 00:43:24.824 "nvme_iov_md": false 00:43:24.824 }, 00:43:24.824 "driver_specific": { 
00:43:24.824 "gpt": { 00:43:24.824 "base_bdev": "Nvme0n1", 00:43:24.824 "offset_blocks": 256, 00:43:24.824 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:43:24.824 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:43:24.824 "partition_name": "SPDK_TEST_first" 00:43:24.824 } 00:43:24.824 } 00:43:24.824 } 00:43:24.824 ]' 00:43:24.824 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:43:25.082 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:43:25.082 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:43:25.082 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:43:25.082 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:43:25.082 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:43:25.082 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:43:25.082 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:25.082 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:43:25.082 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:25.082 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:43:25.082 { 00:43:25.082 "name": "Nvme0n1p2", 00:43:25.082 "aliases": [ 00:43:25.082 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:43:25.082 ], 00:43:25.082 "product_name": "GPT Disk", 00:43:25.082 "block_size": 4096, 00:43:25.082 "num_blocks": 655103, 00:43:25.082 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:43:25.082 "assigned_rate_limits": { 00:43:25.082 "rw_ios_per_sec": 0, 00:43:25.082 "rw_mbytes_per_sec": 0, 00:43:25.082 "r_mbytes_per_sec": 0, 00:43:25.082 "w_mbytes_per_sec": 0 00:43:25.082 }, 00:43:25.082 "claimed": false, 00:43:25.082 "zoned": false, 00:43:25.082 "supported_io_types": { 00:43:25.082 "read": true, 00:43:25.082 "write": true, 00:43:25.082 "unmap": true, 00:43:25.082 "flush": true, 00:43:25.082 "reset": true, 00:43:25.082 "nvme_admin": false, 00:43:25.082 "nvme_io": false, 00:43:25.082 "nvme_io_md": false, 00:43:25.082 "write_zeroes": true, 00:43:25.082 "zcopy": false, 00:43:25.082 "get_zone_info": false, 00:43:25.082 "zone_management": false, 00:43:25.082 "zone_append": false, 00:43:25.082 "compare": true, 00:43:25.082 "compare_and_write": false, 00:43:25.082 "abort": true, 00:43:25.082 "seek_hole": false, 00:43:25.082 "seek_data": false, 00:43:25.082 "copy": true, 00:43:25.082 "nvme_iov_md": false 00:43:25.082 }, 00:43:25.082 "driver_specific": { 00:43:25.082 "gpt": { 00:43:25.082 "base_bdev": "Nvme0n1", 00:43:25.082 "offset_blocks": 655360, 00:43:25.082 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:43:25.083 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:43:25.083 "partition_name": "SPDK_TEST_second" 00:43:25.083 } 00:43:25.083 } 00:43:25.083 } 00:43:25.083 ]' 00:43:25.083 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:43:25.083 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:43:25.083 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:43:25.083 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:43:25.083 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:43:25.083 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:43:25.083 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 169507 00:43:25.083 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 169507 ']' 00:43:25.083 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 169507 00:43:25.341 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:43:25.341 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:25.341 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 169507 00:43:25.341 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:25.341 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:25.341 killing process with pid 169507 00:43:25.341 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 169507' 00:43:25.341 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 169507 00:43:25.341 19:10:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 169507 00:43:27.873 00:43:27.873 real 0m4.450s 00:43:27.873 user 0m4.397s 00:43:27.873 sys 0m0.668s 00:43:27.873 19:10:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:27.873 ************************************ 00:43:27.873 19:10:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:43:27.873 END TEST bdev_gpt_uuid 00:43:27.873 ************************************ 00:43:27.873 19:10:28 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:43:27.873 19:10:28 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:43:27.873 19:10:28 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:43:27.873 19:10:28 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:43:27.873 19:10:28 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:27.873 19:10:28 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:43:27.873 19:10:28 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:43:27.873 19:10:28 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:43:27.873 19:10:28 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:43:28.440 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:43:28.440 Waiting for block devices as requested 00:43:28.440 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:43:28.700 19:10:29 blockdev_nvme_gpt -- 
bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:43:28.700 19:10:29 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:43:28.700 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:43:28.700 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:43:28.700 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:43:28.700 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:43:28.700 19:10:29 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:43:28.700 00:43:28.700 real 0m48.334s 00:43:28.700 user 1m4.953s 00:43:28.700 sys 0m8.290s 00:43:28.700 19:10:29 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:28.700 19:10:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:28.700 ************************************ 00:43:28.700 END TEST blockdev_nvme_gpt 00:43:28.700 ************************************ 00:43:28.700 19:10:29 -- spdk/autotest.sh@220 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:43:28.700 19:10:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:28.700 19:10:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:28.700 19:10:29 -- common/autotest_common.sh@10 -- # set +x 00:43:28.700 ************************************ 00:43:28.700 START TEST nvme 00:43:28.700 ************************************ 00:43:28.700 19:10:29 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:43:28.700 * Looking for test storage... 00:43:28.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:43:28.959 19:10:29 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:43:29.219 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:43:29.478 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:43:30.416 19:10:30 nvme -- nvme/nvme.sh@79 -- # uname 00:43:30.416 19:10:30 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:43:30.416 19:10:30 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:43:30.416 19:10:30 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:43:30.416 19:10:30 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:43:30.416 19:10:30 nvme -- common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:43:30.416 19:10:30 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:43:30.416 19:10:30 nvme -- common/autotest_common.sh@1071 -- # stubpid=169936 00:43:30.416 19:10:30 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:43:30.416 Waiting for stub to ready for secondary processes... 00:43:30.416 19:10:30 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:43:30.416 19:10:30 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:43:30.416 19:10:30 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/169936 ]] 00:43:30.416 19:10:30 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:43:30.676 [2024-07-25 19:10:31.053093] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
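A note on the wipefs lines above: the erased byte patterns are just the on-disk signatures being cleared before the NVMe tests take over the drive. 45 46 49 20 50 41 52 54 is ASCII "EFI PART", the GPT header magic; with this namespace formatted to 4096-byte blocks, offset 0x1000 is LBA 1 (the primary header) and 0x13ffff000 is the last LBA (the backup header), while 55 aa at offset 0x1fe is the protective-MBR boot signature. A quick way to confirm the magic:

  printf 'EFI PART' | xxd   # 00000000: 4546 4920 5041 5254   EFI PART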
00:43:30.676 [2024-07-25 19:10:31.053320] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:43:31.613 19:10:31 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:43:31.613 19:10:31 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/169936 ]] 00:43:31.613 19:10:31 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:43:31.613 [2024-07-25 19:10:32.132007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:31.872 [2024-07-25 19:10:32.402514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:43:31.872 [2024-07-25 19:10:32.402638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:43:31.873 [2024-07-25 19:10:32.402811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:31.873 [2024-07-25 19:10:32.412941] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:43:31.873 [2024-07-25 19:10:32.413118] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:43:31.873 [2024-07-25 19:10:32.428410] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:43:31.873 [2024-07-25 19:10:32.428877] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:43:32.441 19:10:32 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:43:32.441 done. 00:43:32.441 19:10:32 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:43:32.441 19:10:32 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:43:32.441 19:10:32 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:43:32.441 19:10:32 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:32.441 19:10:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:32.441 ************************************ 00:43:32.441 START TEST nvme_reset 00:43:32.441 ************************************ 00:43:32.441 19:10:33 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:43:33.009 Initializing NVMe Controllers 00:43:33.009 Skipping QEMU NVMe SSD at 0000:00:10.0 00:43:33.009 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:43:33.009 00:43:33.009 real 0m0.329s 00:43:33.009 user 0m0.125s 00:43:33.009 sys 0m0.123s 00:43:33.009 19:10:33 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:33.009 19:10:33 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:43:33.009 ************************************ 00:43:33.009 END TEST nvme_reset 00:43:33.009 ************************************ 00:43:33.009 19:10:33 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:43:33.009 19:10:33 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:33.009 19:10:33 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:33.009 19:10:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:33.009 ************************************ 00:43:33.009 START TEST nvme_identify 00:43:33.009 ************************************ 00:43:33.009 19:10:33 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 
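The stub dance above is how the nvme test suite shares a single primary DPDK process: test/app/stub/stub is launched as the primary (-s 4096 MB of hugepage memory, shm id 0, core mask 0xE), it attaches the controller (hence the CUSE notices) and then creates /var/run/spdk_stub0 to signal readiness, while the harness polls for that file and bails out if the stub PID disappears. Stripped of the xtrace noise, the wait amounts to roughly this sketch:

  /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
  stubpid=$!
  while [ ! -e /var/run/spdk_stub0 ]; do
      # give up if the stub died before it could create its ready file
      [ -e /proc/$stubpid ] || { echo 'stub exited before becoming ready' >&2; exit 1; }
      sleep 1s
  done
  echo done.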
00:43:33.009 19:10:33 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:43:33.009 19:10:33 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:43:33.009 19:10:33 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:43:33.009 19:10:33 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:43:33.009 19:10:33 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:43:33.009 19:10:33 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:43:33.009 19:10:33 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:43:33.009 19:10:33 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:43:33.009 19:10:33 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:43:33.009 19:10:33 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:43:33.009 19:10:33 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:43:33.009 19:10:33 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:43:33.301 [2024-07-25 19:10:33.765734] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 169973 terminated unexpected 00:43:33.301 ===================================================== 00:43:33.301 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:33.301 ===================================================== 00:43:33.301 Controller Capabilities/Features 00:43:33.301 ================================ 00:43:33.301 Vendor ID: 1b36 00:43:33.301 Subsystem Vendor ID: 1af4 00:43:33.301 Serial Number: 12340 00:43:33.301 Model Number: QEMU NVMe Ctrl 00:43:33.301 Firmware Version: 8.0.0 00:43:33.301 Recommended Arb Burst: 6 00:43:33.301 IEEE OUI Identifier: 00 54 52 00:43:33.301 Multi-path I/O 00:43:33.301 May have multiple subsystem ports: No 00:43:33.301 May have multiple controllers: No 00:43:33.301 Associated with SR-IOV VF: No 00:43:33.301 Max Data Transfer Size: 524288 00:43:33.301 Max Number of Namespaces: 256 00:43:33.301 Max Number of I/O Queues: 64 00:43:33.301 NVMe Specification Version (VS): 1.4 00:43:33.301 NVMe Specification Version (Identify): 1.4 00:43:33.301 Maximum Queue Entries: 2048 00:43:33.301 Contiguous Queues Required: Yes 00:43:33.301 Arbitration Mechanisms Supported 00:43:33.301 Weighted Round Robin: Not Supported 00:43:33.301 Vendor Specific: Not Supported 00:43:33.301 Reset Timeout: 7500 ms 00:43:33.301 Doorbell Stride: 4 bytes 00:43:33.301 NVM Subsystem Reset: Not Supported 00:43:33.301 Command Sets Supported 00:43:33.301 NVM Command Set: Supported 00:43:33.301 Boot Partition: Not Supported 00:43:33.301 Memory Page Size Minimum: 4096 bytes 00:43:33.301 Memory Page Size Maximum: 65536 bytes 00:43:33.302 Persistent Memory Region: Not Supported 00:43:33.302 Optional Asynchronous Events Supported 00:43:33.302 Namespace Attribute Notices: Supported 00:43:33.302 Firmware Activation Notices: Not Supported 00:43:33.302 ANA Change Notices: Not Supported 00:43:33.302 PLE Aggregate Log Change Notices: Not Supported 00:43:33.302 LBA Status Info Alert Notices: Not Supported 00:43:33.302 EGE Aggregate Log Change Notices: Not Supported 00:43:33.302 Normal NVM Subsystem Shutdown event: Not Supported 00:43:33.302 Zone Descriptor Change Notices: Not Supported 00:43:33.302 Discovery Log Change Notices: Not Supported 00:43:33.302 Controller Attributes 
00:43:33.302 128-bit Host Identifier: Not Supported 00:43:33.302 Non-Operational Permissive Mode: Not Supported 00:43:33.302 NVM Sets: Not Supported 00:43:33.302 Read Recovery Levels: Not Supported 00:43:33.302 Endurance Groups: Not Supported 00:43:33.302 Predictable Latency Mode: Not Supported 00:43:33.302 Traffic Based Keep ALive: Not Supported 00:43:33.302 Namespace Granularity: Not Supported 00:43:33.302 SQ Associations: Not Supported 00:43:33.302 UUID List: Not Supported 00:43:33.302 Multi-Domain Subsystem: Not Supported 00:43:33.302 Fixed Capacity Management: Not Supported 00:43:33.302 Variable Capacity Management: Not Supported 00:43:33.302 Delete Endurance Group: Not Supported 00:43:33.302 Delete NVM Set: Not Supported 00:43:33.302 Extended LBA Formats Supported: Supported 00:43:33.302 Flexible Data Placement Supported: Not Supported 00:43:33.302 00:43:33.302 Controller Memory Buffer Support 00:43:33.302 ================================ 00:43:33.302 Supported: No 00:43:33.302 00:43:33.302 Persistent Memory Region Support 00:43:33.302 ================================ 00:43:33.302 Supported: No 00:43:33.302 00:43:33.302 Admin Command Set Attributes 00:43:33.302 ============================ 00:43:33.302 Security Send/Receive: Not Supported 00:43:33.302 Format NVM: Supported 00:43:33.302 Firmware Activate/Download: Not Supported 00:43:33.302 Namespace Management: Supported 00:43:33.302 Device Self-Test: Not Supported 00:43:33.302 Directives: Supported 00:43:33.302 NVMe-MI: Not Supported 00:43:33.302 Virtualization Management: Not Supported 00:43:33.302 Doorbell Buffer Config: Supported 00:43:33.302 Get LBA Status Capability: Not Supported 00:43:33.302 Command & Feature Lockdown Capability: Not Supported 00:43:33.302 Abort Command Limit: 4 00:43:33.302 Async Event Request Limit: 4 00:43:33.302 Number of Firmware Slots: N/A 00:43:33.302 Firmware Slot 1 Read-Only: N/A 00:43:33.302 Firmware Activation Without Reset: N/A 00:43:33.302 Multiple Update Detection Support: N/A 00:43:33.302 Firmware Update Granularity: No Information Provided 00:43:33.302 Per-Namespace SMART Log: Yes 00:43:33.302 Asymmetric Namespace Access Log Page: Not Supported 00:43:33.302 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:43:33.302 Command Effects Log Page: Supported 00:43:33.302 Get Log Page Extended Data: Supported 00:43:33.302 Telemetry Log Pages: Not Supported 00:43:33.302 Persistent Event Log Pages: Not Supported 00:43:33.302 Supported Log Pages Log Page: May Support 00:43:33.302 Commands Supported & Effects Log Page: Not Supported 00:43:33.302 Feature Identifiers & Effects Log Page:May Support 00:43:33.302 NVMe-MI Commands & Effects Log Page: May Support 00:43:33.302 Data Area 4 for Telemetry Log: Not Supported 00:43:33.302 Error Log Page Entries Supported: 1 00:43:33.302 Keep Alive: Not Supported 00:43:33.302 00:43:33.302 NVM Command Set Attributes 00:43:33.302 ========================== 00:43:33.302 Submission Queue Entry Size 00:43:33.302 Max: 64 00:43:33.302 Min: 64 00:43:33.302 Completion Queue Entry Size 00:43:33.302 Max: 16 00:43:33.302 Min: 16 00:43:33.302 Number of Namespaces: 256 00:43:33.302 Compare Command: Supported 00:43:33.302 Write Uncorrectable Command: Not Supported 00:43:33.302 Dataset Management Command: Supported 00:43:33.302 Write Zeroes Command: Supported 00:43:33.302 Set Features Save Field: Supported 00:43:33.302 Reservations: Not Supported 00:43:33.302 Timestamp: Supported 00:43:33.302 Copy: Supported 00:43:33.302 Volatile Write Cache: Present 00:43:33.302 Atomic Write Unit 
(Normal): 1 00:43:33.302 Atomic Write Unit (PFail): 1 00:43:33.302 Atomic Compare & Write Unit: 1 00:43:33.302 Fused Compare & Write: Not Supported 00:43:33.302 Scatter-Gather List 00:43:33.302 SGL Command Set: Supported 00:43:33.302 SGL Keyed: Not Supported 00:43:33.302 SGL Bit Bucket Descriptor: Not Supported 00:43:33.302 SGL Metadata Pointer: Not Supported 00:43:33.302 Oversized SGL: Not Supported 00:43:33.302 SGL Metadata Address: Not Supported 00:43:33.302 SGL Offset: Not Supported 00:43:33.302 Transport SGL Data Block: Not Supported 00:43:33.302 Replay Protected Memory Block: Not Supported 00:43:33.302 00:43:33.302 Firmware Slot Information 00:43:33.302 ========================= 00:43:33.302 Active slot: 1 00:43:33.302 Slot 1 Firmware Revision: 1.0 00:43:33.302 00:43:33.302 00:43:33.302 Commands Supported and Effects 00:43:33.302 ============================== 00:43:33.302 Admin Commands 00:43:33.302 -------------- 00:43:33.302 Delete I/O Submission Queue (00h): Supported 00:43:33.302 Create I/O Submission Queue (01h): Supported 00:43:33.302 Get Log Page (02h): Supported 00:43:33.302 Delete I/O Completion Queue (04h): Supported 00:43:33.302 Create I/O Completion Queue (05h): Supported 00:43:33.302 Identify (06h): Supported 00:43:33.302 Abort (08h): Supported 00:43:33.302 Set Features (09h): Supported 00:43:33.302 Get Features (0Ah): Supported 00:43:33.302 Asynchronous Event Request (0Ch): Supported 00:43:33.302 Namespace Attachment (15h): Supported NS-Inventory-Change 00:43:33.302 Directive Send (19h): Supported 00:43:33.302 Directive Receive (1Ah): Supported 00:43:33.302 Virtualization Management (1Ch): Supported 00:43:33.302 Doorbell Buffer Config (7Ch): Supported 00:43:33.302 Format NVM (80h): Supported LBA-Change 00:43:33.302 I/O Commands 00:43:33.302 ------------ 00:43:33.302 Flush (00h): Supported LBA-Change 00:43:33.302 Write (01h): Supported LBA-Change 00:43:33.302 Read (02h): Supported 00:43:33.302 Compare (05h): Supported 00:43:33.302 Write Zeroes (08h): Supported LBA-Change 00:43:33.302 Dataset Management (09h): Supported LBA-Change 00:43:33.302 Unknown (0Ch): Supported 00:43:33.302 Unknown (12h): Supported 00:43:33.302 Copy (19h): Supported LBA-Change 00:43:33.302 Unknown (1Dh): Supported LBA-Change 00:43:33.302 00:43:33.302 Error Log 00:43:33.302 ========= 00:43:33.302 00:43:33.302 Arbitration 00:43:33.302 =========== 00:43:33.302 Arbitration Burst: no limit 00:43:33.302 00:43:33.302 Power Management 00:43:33.302 ================ 00:43:33.302 Number of Power States: 1 00:43:33.302 Current Power State: Power State #0 00:43:33.302 Power State #0: 00:43:33.302 Max Power: 25.00 W 00:43:33.302 Non-Operational State: Operational 00:43:33.302 Entry Latency: 16 microseconds 00:43:33.302 Exit Latency: 4 microseconds 00:43:33.302 Relative Read Throughput: 0 00:43:33.302 Relative Read Latency: 0 00:43:33.302 Relative Write Throughput: 0 00:43:33.302 Relative Write Latency: 0 00:43:33.302 Idle Power: Not Reported 00:43:33.302 Active Power: Not Reported 00:43:33.302 Non-Operational Permissive Mode: Not Supported 00:43:33.302 00:43:33.302 Health Information 00:43:33.302 ================== 00:43:33.302 Critical Warnings: 00:43:33.302 Available Spare Space: OK 00:43:33.302 Temperature: OK 00:43:33.302 Device Reliability: OK 00:43:33.302 Read Only: No 00:43:33.302 Volatile Memory Backup: OK 00:43:33.302 Current Temperature: 323 Kelvin (50 Celsius) 00:43:33.302 Temperature Threshold: 343 Kelvin (70 Celsius) 00:43:33.302 Available Spare: 0% 00:43:33.302 Available Spare Threshold: 0% 
00:43:33.302 Life Percentage Used: 0% 00:43:33.302 Data Units Read: 3472 00:43:33.302 Data Units Written: 3137 00:43:33.302 Host Read Commands: 186939 00:43:33.302 Host Write Commands: 200031 00:43:33.302 Controller Busy Time: 0 minutes 00:43:33.302 Power Cycles: 0 00:43:33.302 Power On Hours: 0 hours 00:43:33.302 Unsafe Shutdowns: 0 00:43:33.302 Unrecoverable Media Errors: 0 00:43:33.302 Lifetime Error Log Entries: 0 00:43:33.302 Warning Temperature Time: 0 minutes 00:43:33.302 Critical Temperature Time: 0 minutes 00:43:33.302 00:43:33.302 Number of Queues 00:43:33.302 ================ 00:43:33.302 Number of I/O Submission Queues: 64 00:43:33.302 Number of I/O Completion Queues: 64 00:43:33.302 00:43:33.302 ZNS Specific Controller Data 00:43:33.302 ============================ 00:43:33.302 Zone Append Size Limit: 0 00:43:33.303 00:43:33.303 00:43:33.303 Active Namespaces 00:43:33.303 ================= 00:43:33.303 Namespace ID:1 00:43:33.303 Error Recovery Timeout: Unlimited 00:43:33.303 Command Set Identifier: NVM (00h) 00:43:33.303 Deallocate: Supported 00:43:33.303 Deallocated/Unwritten Error: Supported 00:43:33.303 Deallocated Read Value: All 0x00 00:43:33.303 Deallocate in Write Zeroes: Not Supported 00:43:33.303 Deallocated Guard Field: 0xFFFF 00:43:33.303 Flush: Supported 00:43:33.303 Reservation: Not Supported 00:43:33.303 Namespace Sharing Capabilities: Private 00:43:33.303 Size (in LBAs): 1310720 (5GiB) 00:43:33.303 Capacity (in LBAs): 1310720 (5GiB) 00:43:33.303 Utilization (in LBAs): 1310720 (5GiB) 00:43:33.303 Thin Provisioning: Not Supported 00:43:33.303 Per-NS Atomic Units: No 00:43:33.303 Maximum Single Source Range Length: 128 00:43:33.303 Maximum Copy Length: 128 00:43:33.303 Maximum Source Range Count: 128 00:43:33.303 NGUID/EUI64 Never Reused: No 00:43:33.303 Namespace Write Protected: No 00:43:33.303 Number of LBA Formats: 8 00:43:33.303 Current LBA Format: LBA Format #04 00:43:33.303 LBA Format #00: Data Size: 512 Metadata Size: 0 00:43:33.303 LBA Format #01: Data Size: 512 Metadata Size: 8 00:43:33.303 LBA Format #02: Data Size: 512 Metadata Size: 16 00:43:33.303 LBA Format #03: Data Size: 512 Metadata Size: 64 00:43:33.303 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:43:33.303 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:43:33.303 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:43:33.303 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:43:33.303 00:43:33.303 NVM Specific Namespace Data 00:43:33.303 =========================== 00:43:33.303 Logical Block Storage Tag Mask: 0 00:43:33.303 Protection Information Capabilities: 00:43:33.303 16b Guard Protection Information Storage Tag Support: No 00:43:33.303 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:43:33.303 Storage Tag Check Read Support: No 00:43:33.303 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:33.303 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:33.303 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:33.303 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:33.303 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:33.303 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:33.303 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 
00:43:33.303 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:33.303 19:10:33 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:43:33.303 19:10:33 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:43:33.889 ===================================================== 00:43:33.889 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:33.889 ===================================================== 00:43:33.889 Controller Capabilities/Features 00:43:33.889 ================================ 00:43:33.889 Vendor ID: 1b36 00:43:33.889 Subsystem Vendor ID: 1af4 00:43:33.889 Serial Number: 12340 00:43:33.889 Model Number: QEMU NVMe Ctrl 00:43:33.889 Firmware Version: 8.0.0 00:43:33.889 Recommended Arb Burst: 6 00:43:33.889 IEEE OUI Identifier: 00 54 52 00:43:33.889 Multi-path I/O 00:43:33.889 May have multiple subsystem ports: No 00:43:33.889 May have multiple controllers: No 00:43:33.889 Associated with SR-IOV VF: No 00:43:33.889 Max Data Transfer Size: 524288 00:43:33.889 Max Number of Namespaces: 256 00:43:33.889 Max Number of I/O Queues: 64 00:43:33.889 NVMe Specification Version (VS): 1.4 00:43:33.889 NVMe Specification Version (Identify): 1.4 00:43:33.889 Maximum Queue Entries: 2048 00:43:33.889 Contiguous Queues Required: Yes 00:43:33.889 Arbitration Mechanisms Supported 00:43:33.889 Weighted Round Robin: Not Supported 00:43:33.889 Vendor Specific: Not Supported 00:43:33.889 Reset Timeout: 7500 ms 00:43:33.889 Doorbell Stride: 4 bytes 00:43:33.889 NVM Subsystem Reset: Not Supported 00:43:33.889 Command Sets Supported 00:43:33.889 NVM Command Set: Supported 00:43:33.889 Boot Partition: Not Supported 00:43:33.889 Memory Page Size Minimum: 4096 bytes 00:43:33.889 Memory Page Size Maximum: 65536 bytes 00:43:33.889 Persistent Memory Region: Not Supported 00:43:33.889 Optional Asynchronous Events Supported 00:43:33.889 Namespace Attribute Notices: Supported 00:43:33.889 Firmware Activation Notices: Not Supported 00:43:33.889 ANA Change Notices: Not Supported 00:43:33.889 PLE Aggregate Log Change Notices: Not Supported 00:43:33.889 LBA Status Info Alert Notices: Not Supported 00:43:33.889 EGE Aggregate Log Change Notices: Not Supported 00:43:33.889 Normal NVM Subsystem Shutdown event: Not Supported 00:43:33.889 Zone Descriptor Change Notices: Not Supported 00:43:33.889 Discovery Log Change Notices: Not Supported 00:43:33.889 Controller Attributes 00:43:33.889 128-bit Host Identifier: Not Supported 00:43:33.889 Non-Operational Permissive Mode: Not Supported 00:43:33.889 NVM Sets: Not Supported 00:43:33.889 Read Recovery Levels: Not Supported 00:43:33.889 Endurance Groups: Not Supported 00:43:33.889 Predictable Latency Mode: Not Supported 00:43:33.889 Traffic Based Keep ALive: Not Supported 00:43:33.889 Namespace Granularity: Not Supported 00:43:33.889 SQ Associations: Not Supported 00:43:33.889 UUID List: Not Supported 00:43:33.889 Multi-Domain Subsystem: Not Supported 00:43:33.889 Fixed Capacity Management: Not Supported 00:43:33.889 Variable Capacity Management: Not Supported 00:43:33.889 Delete Endurance Group: Not Supported 00:43:33.889 Delete NVM Set: Not Supported 00:43:33.889 Extended LBA Formats Supported: Supported 00:43:33.889 Flexible Data Placement Supported: Not Supported 00:43:33.889 00:43:33.889 Controller Memory Buffer Support 00:43:33.889 ================================ 00:43:33.889 Supported: No 00:43:33.889 00:43:33.889 Persistent 
Memory Region Support 00:43:33.889 ================================ 00:43:33.889 Supported: No 00:43:33.889 00:43:33.889 Admin Command Set Attributes 00:43:33.889 ============================ 00:43:33.889 Security Send/Receive: Not Supported 00:43:33.889 Format NVM: Supported 00:43:33.889 Firmware Activate/Download: Not Supported 00:43:33.889 Namespace Management: Supported 00:43:33.889 Device Self-Test: Not Supported 00:43:33.889 Directives: Supported 00:43:33.889 NVMe-MI: Not Supported 00:43:33.889 Virtualization Management: Not Supported 00:43:33.889 Doorbell Buffer Config: Supported 00:43:33.889 Get LBA Status Capability: Not Supported 00:43:33.889 Command & Feature Lockdown Capability: Not Supported 00:43:33.889 Abort Command Limit: 4 00:43:33.889 Async Event Request Limit: 4 00:43:33.889 Number of Firmware Slots: N/A 00:43:33.889 Firmware Slot 1 Read-Only: N/A 00:43:33.889 Firmware Activation Without Reset: N/A 00:43:33.889 Multiple Update Detection Support: N/A 00:43:33.889 Firmware Update Granularity: No Information Provided 00:43:33.889 Per-Namespace SMART Log: Yes 00:43:33.889 Asymmetric Namespace Access Log Page: Not Supported 00:43:33.889 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:43:33.889 Command Effects Log Page: Supported 00:43:33.889 Get Log Page Extended Data: Supported 00:43:33.889 Telemetry Log Pages: Not Supported 00:43:33.889 Persistent Event Log Pages: Not Supported 00:43:33.889 Supported Log Pages Log Page: May Support 00:43:33.889 Commands Supported & Effects Log Page: Not Supported 00:43:33.889 Feature Identifiers & Effects Log Page:May Support 00:43:33.889 NVMe-MI Commands & Effects Log Page: May Support 00:43:33.889 Data Area 4 for Telemetry Log: Not Supported 00:43:33.889 Error Log Page Entries Supported: 1 00:43:33.889 Keep Alive: Not Supported 00:43:33.889 00:43:33.889 NVM Command Set Attributes 00:43:33.889 ========================== 00:43:33.889 Submission Queue Entry Size 00:43:33.889 Max: 64 00:43:33.889 Min: 64 00:43:33.889 Completion Queue Entry Size 00:43:33.889 Max: 16 00:43:33.889 Min: 16 00:43:33.889 Number of Namespaces: 256 00:43:33.889 Compare Command: Supported 00:43:33.889 Write Uncorrectable Command: Not Supported 00:43:33.889 Dataset Management Command: Supported 00:43:33.889 Write Zeroes Command: Supported 00:43:33.889 Set Features Save Field: Supported 00:43:33.889 Reservations: Not Supported 00:43:33.889 Timestamp: Supported 00:43:33.889 Copy: Supported 00:43:33.889 Volatile Write Cache: Present 00:43:33.889 Atomic Write Unit (Normal): 1 00:43:33.889 Atomic Write Unit (PFail): 1 00:43:33.889 Atomic Compare & Write Unit: 1 00:43:33.889 Fused Compare & Write: Not Supported 00:43:33.889 Scatter-Gather List 00:43:33.889 SGL Command Set: Supported 00:43:33.889 SGL Keyed: Not Supported 00:43:33.889 SGL Bit Bucket Descriptor: Not Supported 00:43:33.889 SGL Metadata Pointer: Not Supported 00:43:33.889 Oversized SGL: Not Supported 00:43:33.889 SGL Metadata Address: Not Supported 00:43:33.889 SGL Offset: Not Supported 00:43:33.889 Transport SGL Data Block: Not Supported 00:43:33.889 Replay Protected Memory Block: Not Supported 00:43:33.889 00:43:33.889 Firmware Slot Information 00:43:33.889 ========================= 00:43:33.889 Active slot: 1 00:43:33.889 Slot 1 Firmware Revision: 1.0 00:43:33.889 00:43:33.889 00:43:33.889 Commands Supported and Effects 00:43:33.889 ============================== 00:43:33.889 Admin Commands 00:43:33.889 -------------- 00:43:33.889 Delete I/O Submission Queue (00h): Supported 00:43:33.889 Create I/O Submission 
Queue (01h): Supported 00:43:33.889 Get Log Page (02h): Supported 00:43:33.889 Delete I/O Completion Queue (04h): Supported 00:43:33.889 Create I/O Completion Queue (05h): Supported 00:43:33.889 Identify (06h): Supported 00:43:33.890 Abort (08h): Supported 00:43:33.890 Set Features (09h): Supported 00:43:33.890 Get Features (0Ah): Supported 00:43:33.890 Asynchronous Event Request (0Ch): Supported 00:43:33.890 Namespace Attachment (15h): Supported NS-Inventory-Change 00:43:33.890 Directive Send (19h): Supported 00:43:33.890 Directive Receive (1Ah): Supported 00:43:33.890 Virtualization Management (1Ch): Supported 00:43:33.890 Doorbell Buffer Config (7Ch): Supported 00:43:33.890 Format NVM (80h): Supported LBA-Change 00:43:33.890 I/O Commands 00:43:33.890 ------------ 00:43:33.890 Flush (00h): Supported LBA-Change 00:43:33.890 Write (01h): Supported LBA-Change 00:43:33.890 Read (02h): Supported 00:43:33.890 Compare (05h): Supported 00:43:33.890 Write Zeroes (08h): Supported LBA-Change 00:43:33.890 Dataset Management (09h): Supported LBA-Change 00:43:33.890 Unknown (0Ch): Supported 00:43:33.890 Unknown (12h): Supported 00:43:33.890 Copy (19h): Supported LBA-Change 00:43:33.890 Unknown (1Dh): Supported LBA-Change 00:43:33.890 00:43:33.890 Error Log 00:43:33.890 ========= 00:43:33.890 00:43:33.890 Arbitration 00:43:33.890 =========== 00:43:33.890 Arbitration Burst: no limit 00:43:33.890 00:43:33.890 Power Management 00:43:33.890 ================ 00:43:33.890 Number of Power States: 1 00:43:33.890 Current Power State: Power State #0 00:43:33.890 Power State #0: 00:43:33.890 Max Power: 25.00 W 00:43:33.890 Non-Operational State: Operational 00:43:33.890 Entry Latency: 16 microseconds 00:43:33.890 Exit Latency: 4 microseconds 00:43:33.890 Relative Read Throughput: 0 00:43:33.890 Relative Read Latency: 0 00:43:33.890 Relative Write Throughput: 0 00:43:33.890 Relative Write Latency: 0 00:43:33.890 Idle Power: Not Reported 00:43:33.890 Active Power: Not Reported 00:43:33.890 Non-Operational Permissive Mode: Not Supported 00:43:33.890 00:43:33.890 Health Information 00:43:33.890 ================== 00:43:33.890 Critical Warnings: 00:43:33.890 Available Spare Space: OK 00:43:33.890 Temperature: OK 00:43:33.890 Device Reliability: OK 00:43:33.890 Read Only: No 00:43:33.890 Volatile Memory Backup: OK 00:43:33.890 Current Temperature: 323 Kelvin (50 Celsius) 00:43:33.890 Temperature Threshold: 343 Kelvin (70 Celsius) 00:43:33.890 Available Spare: 0% 00:43:33.890 Available Spare Threshold: 0% 00:43:33.890 Life Percentage Used: 0% 00:43:33.890 Data Units Read: 3472 00:43:33.890 Data Units Written: 3137 00:43:33.890 Host Read Commands: 186939 00:43:33.890 Host Write Commands: 200031 00:43:33.890 Controller Busy Time: 0 minutes 00:43:33.890 Power Cycles: 0 00:43:33.890 Power On Hours: 0 hours 00:43:33.890 Unsafe Shutdowns: 0 00:43:33.890 Unrecoverable Media Errors: 0 00:43:33.890 Lifetime Error Log Entries: 0 00:43:33.890 Warning Temperature Time: 0 minutes 00:43:33.890 Critical Temperature Time: 0 minutes 00:43:33.890 00:43:33.890 Number of Queues 00:43:33.890 ================ 00:43:33.890 Number of I/O Submission Queues: 64 00:43:33.890 Number of I/O Completion Queues: 64 00:43:33.890 00:43:33.890 ZNS Specific Controller Data 00:43:33.890 ============================ 00:43:33.890 Zone Append Size Limit: 0 00:43:33.890 00:43:33.890 00:43:33.890 Active Namespaces 00:43:33.890 ================= 00:43:33.890 Namespace ID:1 00:43:33.890 Error Recovery Timeout: Unlimited 00:43:33.890 Command Set Identifier: NVM 
(00h) 00:43:33.890 Deallocate: Supported 00:43:33.890 Deallocated/Unwritten Error: Supported 00:43:33.890 Deallocated Read Value: All 0x00 00:43:33.890 Deallocate in Write Zeroes: Not Supported 00:43:33.890 Deallocated Guard Field: 0xFFFF 00:43:33.890 Flush: Supported 00:43:33.890 Reservation: Not Supported 00:43:33.890 Namespace Sharing Capabilities: Private 00:43:33.890 Size (in LBAs): 1310720 (5GiB) 00:43:33.890 Capacity (in LBAs): 1310720 (5GiB) 00:43:33.890 Utilization (in LBAs): 1310720 (5GiB) 00:43:33.890 Thin Provisioning: Not Supported 00:43:33.890 Per-NS Atomic Units: No 00:43:33.890 Maximum Single Source Range Length: 128 00:43:33.890 Maximum Copy Length: 128 00:43:33.890 Maximum Source Range Count: 128 00:43:33.890 NGUID/EUI64 Never Reused: No 00:43:33.890 Namespace Write Protected: No 00:43:33.890 Number of LBA Formats: 8 00:43:33.890 Current LBA Format: LBA Format #04 00:43:33.890 LBA Format #00: Data Size: 512 Metadata Size: 0 00:43:33.890 LBA Format #01: Data Size: 512 Metadata Size: 8 00:43:33.890 LBA Format #02: Data Size: 512 Metadata Size: 16 00:43:33.890 LBA Format #03: Data Size: 512 Metadata Size: 64 00:43:33.890 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:43:33.890 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:43:33.890 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:43:33.890 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:43:33.890 00:43:33.890 NVM Specific Namespace Data 00:43:33.890 =========================== 00:43:33.890 Logical Block Storage Tag Mask: 0 00:43:33.890 Protection Information Capabilities: 00:43:33.890 16b Guard Protection Information Storage Tag Support: No 00:43:33.890 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:43:33.890 Storage Tag Check Read Support: No 00:43:33.890 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:33.890 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:33.890 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:33.890 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:33.890 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:33.890 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:33.890 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:33.890 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:33.890 00:43:33.890 real 0m0.859s 00:43:33.890 user 0m0.367s 00:43:33.890 sys 0m0.394s 00:43:33.890 19:10:34 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:33.890 19:10:34 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:43:33.890 ************************************ 00:43:33.890 END TEST nvme_identify 00:43:33.890 ************************************ 00:43:33.890 19:10:34 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:43:33.890 19:10:34 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:33.890 19:10:34 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:33.890 19:10:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:33.890 ************************************ 00:43:33.890 START TEST nvme_perf 00:43:33.890 ************************************ 00:43:33.890 19:10:34 nvme.nvme_perf -- common/autotest_common.sh@1125 -- 
# nvme_perf 00:43:33.890 19:10:34 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:43:35.266 Initializing NVMe Controllers 00:43:35.266 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:35.266 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:43:35.266 Initialization complete. Launching workers. 00:43:35.266 ======================================================== 00:43:35.266 Latency(us) 00:43:35.266 Device Information : IOPS MiB/s Average min max 00:43:35.266 PCIE (0000:00:10.0) NSID 1 from core 0: 87881.13 1029.86 1454.52 769.64 9009.46 00:43:35.266 ======================================================== 00:43:35.266 Total : 87881.13 1029.86 1454.52 769.64 9009.46 00:43:35.266 00:43:35.266 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:43:35.266 ================================================================================= 00:43:35.266 1.00000% : 928.427us 00:43:35.266 10.00000% : 1061.059us 00:43:35.266 25.00000% : 1201.493us 00:43:35.266 50.00000% : 1427.749us 00:43:35.266 75.00000% : 1661.806us 00:43:35.266 90.00000% : 1825.646us 00:43:35.266 95.00000% : 1973.882us 00:43:35.266 98.00000% : 2137.722us 00:43:35.266 99.00000% : 2402.987us 00:43:35.266 99.50000% : 3136.366us 00:43:35.266 99.90000% : 5336.503us 00:43:35.266 99.99000% : 8613.303us 00:43:35.266 99.99900% : 9050.210us 00:43:35.266 99.99990% : 9050.210us 00:43:35.266 99.99999% : 9050.210us 00:43:35.266 00:43:35.266 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:43:35.266 ============================================================================== 00:43:35.266 Range in us Cumulative IO count 00:43:35.266 768.488 - 772.389: 0.0011% ( 1) 00:43:35.266 776.290 - 780.190: 0.0023% ( 1) 00:43:35.267 784.091 - 787.992: 0.0034% ( 1) 00:43:35.267 787.992 - 791.893: 0.0057% ( 2) 00:43:35.267 791.893 - 795.794: 0.0125% ( 6) 00:43:35.267 795.794 - 799.695: 0.0171% ( 4) 00:43:35.267 799.695 - 803.596: 0.0250% ( 7) 00:43:35.267 803.596 - 807.497: 0.0330% ( 7) 00:43:35.267 807.497 - 811.398: 0.0432% ( 9) 00:43:35.267 811.398 - 815.299: 0.0557% ( 11) 00:43:35.267 815.299 - 819.200: 0.0671% ( 10) 00:43:35.267 819.200 - 823.101: 0.0819% ( 13) 00:43:35.267 823.101 - 827.002: 0.0955% ( 12) 00:43:35.267 827.002 - 830.903: 0.1069% ( 10) 00:43:35.267 830.903 - 834.804: 0.1262% ( 17) 00:43:35.267 834.804 - 838.705: 0.1433% ( 15) 00:43:35.267 838.705 - 842.606: 0.1535% ( 9) 00:43:35.267 842.606 - 846.507: 0.1717% ( 16) 00:43:35.267 846.507 - 850.408: 0.1888% ( 15) 00:43:35.267 850.408 - 854.309: 0.1990% ( 9) 00:43:35.267 854.309 - 858.210: 0.2138% ( 13) 00:43:35.267 858.210 - 862.110: 0.2297% ( 14) 00:43:35.267 862.110 - 866.011: 0.2445% ( 13) 00:43:35.267 866.011 - 869.912: 0.2650% ( 18) 00:43:35.267 869.912 - 873.813: 0.2843% ( 17) 00:43:35.267 873.813 - 877.714: 0.3048% ( 18) 00:43:35.267 877.714 - 881.615: 0.3287% ( 21) 00:43:35.267 881.615 - 885.516: 0.3514% ( 20) 00:43:35.267 885.516 - 889.417: 0.3798% ( 25) 00:43:35.267 889.417 - 893.318: 0.4026% ( 20) 00:43:35.267 893.318 - 897.219: 0.4412% ( 34) 00:43:35.267 897.219 - 901.120: 0.4924% ( 45) 00:43:35.267 901.120 - 905.021: 0.5424% ( 44) 00:43:35.267 905.021 - 908.922: 0.5913% ( 43) 00:43:35.267 908.922 - 912.823: 0.6653% ( 65) 00:43:35.267 912.823 - 916.724: 0.7415% ( 67) 00:43:35.267 916.724 - 920.625: 0.8176% ( 67) 00:43:35.267 920.625 - 924.526: 0.9109% ( 82) 00:43:35.267 924.526 - 928.427: 1.0121% ( 89) 00:43:35.267 928.427 - 932.328: 1.1418% 
( 114) 00:43:35.267 932.328 - 936.229: 1.2907% ( 131) 00:43:35.267 936.229 - 940.130: 1.4306% ( 123) 00:43:35.267 940.130 - 944.030: 1.5875% ( 138) 00:43:35.267 944.030 - 947.931: 1.7638% ( 155) 00:43:35.267 947.931 - 951.832: 1.9412% ( 156) 00:43:35.267 951.832 - 955.733: 2.1209% ( 158) 00:43:35.267 955.733 - 959.634: 2.3313% ( 185) 00:43:35.267 959.634 - 963.535: 2.5701% ( 210) 00:43:35.267 963.535 - 967.436: 2.8100% ( 211) 00:43:35.267 967.436 - 971.337: 3.0682% ( 227) 00:43:35.267 971.337 - 975.238: 3.3149% ( 217) 00:43:35.267 975.238 - 979.139: 3.5742% ( 228) 00:43:35.267 979.139 - 983.040: 3.8540% ( 246) 00:43:35.267 983.040 - 986.941: 4.1690% ( 277) 00:43:35.267 986.941 - 990.842: 4.4522% ( 249) 00:43:35.267 990.842 - 994.743: 4.7376% ( 251) 00:43:35.267 994.743 - 998.644: 5.0662% ( 289) 00:43:35.267 998.644 - 1006.446: 5.6928% ( 551) 00:43:35.267 1006.446 - 1014.248: 6.3092% ( 542) 00:43:35.267 1014.248 - 1022.050: 6.9290% ( 545) 00:43:35.267 1022.050 - 1029.851: 7.6090% ( 598) 00:43:35.267 1029.851 - 1037.653: 8.3152% ( 621) 00:43:35.267 1037.653 - 1045.455: 9.0419% ( 639) 00:43:35.267 1045.455 - 1053.257: 9.7288% ( 604) 00:43:35.267 1053.257 - 1061.059: 10.4634% ( 646) 00:43:35.267 1061.059 - 1068.861: 11.2219% ( 667) 00:43:35.267 1068.861 - 1076.663: 11.9918% ( 677) 00:43:35.267 1076.663 - 1084.465: 12.8106% ( 720) 00:43:35.267 1084.465 - 1092.267: 13.5884% ( 684) 00:43:35.267 1092.267 - 1100.069: 14.4106% ( 723) 00:43:35.267 1100.069 - 1107.870: 15.2726% ( 758) 00:43:35.267 1107.870 - 1115.672: 16.0801% ( 710) 00:43:35.267 1115.672 - 1123.474: 16.9671% ( 780) 00:43:35.267 1123.474 - 1131.276: 17.8154% ( 746) 00:43:35.267 1131.276 - 1139.078: 18.6774% ( 758) 00:43:35.267 1139.078 - 1146.880: 19.5099% ( 732) 00:43:35.267 1146.880 - 1154.682: 20.3969% ( 780) 00:43:35.267 1154.682 - 1162.484: 21.2862% ( 782) 00:43:35.267 1162.484 - 1170.286: 22.1197% ( 733) 00:43:35.267 1170.286 - 1178.088: 22.9988% ( 773) 00:43:35.267 1178.088 - 1185.890: 23.8676% ( 764) 00:43:35.267 1185.890 - 1193.691: 24.7467% ( 773) 00:43:35.267 1193.691 - 1201.493: 25.6337% ( 780) 00:43:35.267 1201.493 - 1209.295: 26.5037% ( 765) 00:43:35.267 1209.295 - 1217.097: 27.3657% ( 758) 00:43:35.267 1217.097 - 1224.899: 28.2231% ( 754) 00:43:35.267 1224.899 - 1232.701: 29.1056% ( 776) 00:43:35.267 1232.701 - 1240.503: 29.9778% ( 767) 00:43:35.267 1240.503 - 1248.305: 30.8330% ( 752) 00:43:35.267 1248.305 - 1256.107: 31.6984% ( 761) 00:43:35.267 1256.107 - 1263.909: 32.5490% ( 748) 00:43:35.267 1263.909 - 1271.710: 33.3906% ( 740) 00:43:35.267 1271.710 - 1279.512: 34.2469% ( 753) 00:43:35.267 1279.512 - 1287.314: 35.0793% ( 732) 00:43:35.267 1287.314 - 1295.116: 35.9231% ( 742) 00:43:35.267 1295.116 - 1302.918: 36.7976% ( 769) 00:43:35.267 1302.918 - 1310.720: 37.6016% ( 707) 00:43:35.267 1310.720 - 1318.522: 38.5000% ( 790) 00:43:35.267 1318.522 - 1326.324: 39.3018% ( 705) 00:43:35.267 1326.324 - 1334.126: 40.1763% ( 769) 00:43:35.267 1334.126 - 1341.928: 41.0019% ( 726) 00:43:35.267 1341.928 - 1349.730: 41.8263% ( 725) 00:43:35.267 1349.730 - 1357.531: 42.7043% ( 772) 00:43:35.267 1357.531 - 1365.333: 43.5162% ( 714) 00:43:35.267 1365.333 - 1373.135: 44.3794% ( 759) 00:43:35.267 1373.135 - 1380.937: 45.2004% ( 722) 00:43:35.267 1380.937 - 1388.739: 46.0374% ( 736) 00:43:35.267 1388.739 - 1396.541: 46.9153% ( 772) 00:43:35.267 1396.541 - 1404.343: 47.7444% ( 729) 00:43:35.267 1404.343 - 1412.145: 48.5984% ( 751) 00:43:35.267 1412.145 - 1419.947: 49.4274% ( 729) 00:43:35.267 1419.947 - 1427.749: 50.2587% ( 731) 
00:43:35.267 1427.749 - 1435.550: 51.0900% ( 731) 00:43:35.267 1435.550 - 1443.352: 51.9554% ( 761) 00:43:35.267 1443.352 - 1451.154: 52.7810% ( 726) 00:43:35.267 1451.154 - 1458.956: 53.6373% ( 753) 00:43:35.267 1458.956 - 1466.758: 54.4357% ( 702) 00:43:35.267 1466.758 - 1474.560: 55.3147% ( 773) 00:43:35.267 1474.560 - 1482.362: 56.1472% ( 732) 00:43:35.267 1482.362 - 1490.164: 56.9887% ( 740) 00:43:35.267 1490.164 - 1497.966: 57.8302% ( 740) 00:43:35.267 1497.966 - 1505.768: 58.6365% ( 709) 00:43:35.267 1505.768 - 1513.570: 59.5212% ( 778) 00:43:35.267 1513.570 - 1521.371: 60.3423% ( 722) 00:43:35.267 1521.371 - 1529.173: 61.2134% ( 766) 00:43:35.267 1529.173 - 1536.975: 62.0549% ( 740) 00:43:35.267 1536.975 - 1544.777: 62.8783% ( 724) 00:43:35.267 1544.777 - 1552.579: 63.7266% ( 746) 00:43:35.267 1552.579 - 1560.381: 64.5409% ( 716) 00:43:35.267 1560.381 - 1568.183: 65.4199% ( 773) 00:43:35.267 1568.183 - 1575.985: 66.2546% ( 734) 00:43:35.267 1575.985 - 1583.787: 67.1132% ( 755) 00:43:35.267 1583.787 - 1591.589: 67.9320% ( 720) 00:43:35.267 1591.589 - 1599.390: 68.7781% ( 744) 00:43:35.267 1599.390 - 1607.192: 69.6435% ( 761) 00:43:35.267 1607.192 - 1614.994: 70.4816% ( 737) 00:43:35.267 1614.994 - 1622.796: 71.3277% ( 744) 00:43:35.267 1622.796 - 1630.598: 72.1522% ( 725) 00:43:35.267 1630.598 - 1638.400: 73.0233% ( 766) 00:43:35.267 1638.400 - 1646.202: 73.8466% ( 724) 00:43:35.267 1646.202 - 1654.004: 74.7018% ( 752) 00:43:35.267 1654.004 - 1661.806: 75.5456% ( 742) 00:43:35.267 1661.806 - 1669.608: 76.3894% ( 742) 00:43:35.267 1669.608 - 1677.410: 77.2343% ( 743) 00:43:35.267 1677.410 - 1685.211: 78.0918% ( 754) 00:43:35.267 1685.211 - 1693.013: 78.9071% ( 717) 00:43:35.267 1693.013 - 1700.815: 79.7589% ( 749) 00:43:35.267 1700.815 - 1708.617: 80.5948% ( 735) 00:43:35.267 1708.617 - 1716.419: 81.4317% ( 736) 00:43:35.267 1716.419 - 1724.221: 82.2676% ( 735) 00:43:35.267 1724.221 - 1732.023: 83.0591% ( 696) 00:43:35.267 1732.023 - 1739.825: 83.8472% ( 693) 00:43:35.267 1739.825 - 1747.627: 84.5977% ( 660) 00:43:35.267 1747.627 - 1755.429: 85.3051% ( 622) 00:43:35.267 1755.429 - 1763.230: 86.0101% ( 620) 00:43:35.267 1763.230 - 1771.032: 86.6185% ( 535) 00:43:35.267 1771.032 - 1778.834: 87.2417% ( 548) 00:43:35.267 1778.834 - 1786.636: 87.7773% ( 471) 00:43:35.267 1786.636 - 1794.438: 88.3277% ( 484) 00:43:35.267 1794.438 - 1802.240: 88.8088% ( 423) 00:43:35.267 1802.240 - 1810.042: 89.2489% ( 387) 00:43:35.267 1810.042 - 1817.844: 89.6844% ( 383) 00:43:35.267 1817.844 - 1825.646: 90.0733% ( 342) 00:43:35.267 1825.646 - 1833.448: 90.4645% ( 344) 00:43:35.267 1833.448 - 1841.250: 90.8307% ( 322) 00:43:35.267 1841.250 - 1849.051: 91.1821% ( 309) 00:43:35.267 1849.051 - 1856.853: 91.5267% ( 303) 00:43:35.267 1856.853 - 1864.655: 91.8497% ( 284) 00:43:35.267 1864.655 - 1872.457: 92.1590% ( 272) 00:43:35.267 1872.457 - 1880.259: 92.4558% ( 261) 00:43:35.267 1880.259 - 1888.061: 92.7378% ( 248) 00:43:35.267 1888.061 - 1895.863: 93.0153% ( 244) 00:43:35.267 1895.863 - 1903.665: 93.2621% ( 217) 00:43:35.267 1903.665 - 1911.467: 93.5009% ( 210) 00:43:35.267 1911.467 - 1919.269: 93.7249% ( 197) 00:43:35.267 1919.269 - 1927.070: 93.9433% ( 192) 00:43:35.267 1927.070 - 1934.872: 94.1423% ( 175) 00:43:35.267 1934.872 - 1942.674: 94.3367% ( 171) 00:43:35.267 1942.674 - 1950.476: 94.5301% ( 170) 00:43:35.267 1950.476 - 1958.278: 94.6961% ( 146) 00:43:35.267 1958.278 - 1966.080: 94.8735% ( 156) 00:43:35.267 1966.080 - 1973.882: 95.0407% ( 147) 00:43:35.267 1973.882 - 1981.684: 95.1965% ( 137) 
00:43:35.267 1981.684 - 1989.486: 95.3613% ( 145) 00:43:35.268 1989.486 - 1997.288: 95.5228% ( 142) 00:43:35.268 1997.288 - 2012.891: 95.8367% ( 276) 00:43:35.268 2012.891 - 2028.495: 96.1403% ( 267) 00:43:35.268 2028.495 - 2044.099: 96.4519% ( 274) 00:43:35.268 2044.099 - 2059.703: 96.7499% ( 262) 00:43:35.268 2059.703 - 2075.307: 97.0558% ( 269) 00:43:35.268 2075.307 - 2090.910: 97.3367% ( 247) 00:43:35.268 2090.910 - 2106.514: 97.6005% ( 232) 00:43:35.268 2106.514 - 2122.118: 97.8382% ( 209) 00:43:35.268 2122.118 - 2137.722: 98.0395% ( 177) 00:43:35.268 2137.722 - 2153.326: 98.2112% ( 151) 00:43:35.268 2153.326 - 2168.930: 98.3488% ( 121) 00:43:35.268 2168.930 - 2184.533: 98.4523% ( 91) 00:43:35.268 2184.533 - 2200.137: 98.5444% ( 81) 00:43:35.268 2200.137 - 2215.741: 98.6194% ( 66) 00:43:35.268 2215.741 - 2231.345: 98.6831% ( 56) 00:43:35.268 2231.345 - 2246.949: 98.7400% ( 50) 00:43:35.268 2246.949 - 2262.552: 98.7843% ( 39) 00:43:35.268 2262.552 - 2278.156: 98.8219% ( 33) 00:43:35.268 2278.156 - 2293.760: 98.8537% ( 28) 00:43:35.268 2293.760 - 2309.364: 98.8799% ( 23) 00:43:35.268 2309.364 - 2324.968: 98.9026% ( 20) 00:43:35.268 2324.968 - 2340.571: 98.9253% ( 20) 00:43:35.268 2340.571 - 2356.175: 98.9504% ( 22) 00:43:35.268 2356.175 - 2371.779: 98.9720% ( 19) 00:43:35.268 2371.779 - 2387.383: 98.9868% ( 13) 00:43:35.268 2387.383 - 2402.987: 99.0038% ( 15) 00:43:35.268 2402.987 - 2418.590: 99.0220% ( 16) 00:43:35.268 2418.590 - 2434.194: 99.0368% ( 13) 00:43:35.268 2434.194 - 2449.798: 99.0516% ( 13) 00:43:35.268 2449.798 - 2465.402: 99.0652% ( 12) 00:43:35.268 2465.402 - 2481.006: 99.0766% ( 10) 00:43:35.268 2481.006 - 2496.610: 99.0902% ( 12) 00:43:35.268 2496.610 - 2512.213: 99.1039% ( 12) 00:43:35.268 2512.213 - 2527.817: 99.1187% ( 13) 00:43:35.268 2527.817 - 2543.421: 99.1312% ( 11) 00:43:35.268 2543.421 - 2559.025: 99.1448% ( 12) 00:43:35.268 2559.025 - 2574.629: 99.1596% ( 13) 00:43:35.268 2574.629 - 2590.232: 99.1744% ( 13) 00:43:35.268 2590.232 - 2605.836: 99.1892% ( 13) 00:43:35.268 2605.836 - 2621.440: 99.2040% ( 13) 00:43:35.268 2621.440 - 2637.044: 99.2210% ( 15) 00:43:35.268 2637.044 - 2652.648: 99.2369% ( 14) 00:43:35.268 2652.648 - 2668.251: 99.2517% ( 13) 00:43:35.268 2668.251 - 2683.855: 99.2654% ( 12) 00:43:35.268 2683.855 - 2699.459: 99.2802% ( 13) 00:43:35.268 2699.459 - 2715.063: 99.2938% ( 12) 00:43:35.268 2715.063 - 2730.667: 99.3063% ( 11) 00:43:35.268 2730.667 - 2746.270: 99.3165% ( 9) 00:43:35.268 2746.270 - 2761.874: 99.3279% ( 10) 00:43:35.268 2761.874 - 2777.478: 99.3404% ( 11) 00:43:35.268 2777.478 - 2793.082: 99.3495% ( 8) 00:43:35.268 2793.082 - 2808.686: 99.3575% ( 7) 00:43:35.268 2808.686 - 2824.290: 99.3666% ( 8) 00:43:35.268 2824.290 - 2839.893: 99.3734% ( 6) 00:43:35.268 2839.893 - 2855.497: 99.3814% ( 7) 00:43:35.268 2855.497 - 2871.101: 99.3893% ( 7) 00:43:35.268 2871.101 - 2886.705: 99.3984% ( 8) 00:43:35.268 2886.705 - 2902.309: 99.4041% ( 5) 00:43:35.268 2902.309 - 2917.912: 99.4098% ( 5) 00:43:35.268 2917.912 - 2933.516: 99.4178% ( 7) 00:43:35.268 2933.516 - 2949.120: 99.4246% ( 6) 00:43:35.268 2949.120 - 2964.724: 99.4325% ( 7) 00:43:35.268 2964.724 - 2980.328: 99.4394% ( 6) 00:43:35.268 2980.328 - 2995.931: 99.4462% ( 6) 00:43:35.268 2995.931 - 3011.535: 99.4530% ( 6) 00:43:35.268 3011.535 - 3027.139: 99.4610% ( 7) 00:43:35.268 3027.139 - 3042.743: 99.4667% ( 5) 00:43:35.268 3042.743 - 3058.347: 99.4723% ( 5) 00:43:35.268 3058.347 - 3073.950: 99.4792% ( 6) 00:43:35.268 3073.950 - 3089.554: 99.4860% ( 6) 00:43:35.268 3089.554 - 3105.158: 
99.4928% ( 6) 00:43:35.268 3105.158 - 3120.762: 99.4985% ( 5) 00:43:35.268 3120.762 - 3136.366: 99.5030% ( 4) 00:43:35.268 3136.366 - 3151.970: 99.5110% ( 7) 00:43:35.268 3151.970 - 3167.573: 99.5178% ( 6) 00:43:35.268 3167.573 - 3183.177: 99.5235% ( 5) 00:43:35.268 3183.177 - 3198.781: 99.5292% ( 5) 00:43:35.268 3198.781 - 3214.385: 99.5337% ( 4) 00:43:35.268 3214.385 - 3229.989: 99.5394% ( 5) 00:43:35.268 3229.989 - 3245.592: 99.5440% ( 4) 00:43:35.268 3245.592 - 3261.196: 99.5508% ( 6) 00:43:35.268 3261.196 - 3276.800: 99.5565% ( 5) 00:43:35.268 3276.800 - 3292.404: 99.5622% ( 5) 00:43:35.268 3292.404 - 3308.008: 99.5656% ( 3) 00:43:35.268 3308.008 - 3323.611: 99.5701% ( 4) 00:43:35.268 3323.611 - 3339.215: 99.5770% ( 6) 00:43:35.268 3339.215 - 3354.819: 99.5826% ( 5) 00:43:35.268 3354.819 - 3370.423: 99.5872% ( 4) 00:43:35.268 3370.423 - 3386.027: 99.5917% ( 4) 00:43:35.268 3386.027 - 3401.630: 99.5963% ( 4) 00:43:35.268 3401.630 - 3417.234: 99.6020% ( 5) 00:43:35.268 3417.234 - 3432.838: 99.6065% ( 4) 00:43:35.268 3432.838 - 3448.442: 99.6122% ( 5) 00:43:35.268 3448.442 - 3464.046: 99.6168% ( 4) 00:43:35.268 3464.046 - 3479.650: 99.6236% ( 6) 00:43:35.268 3479.650 - 3495.253: 99.6293% ( 5) 00:43:35.268 3495.253 - 3510.857: 99.6327% ( 3) 00:43:35.268 3510.857 - 3526.461: 99.6384% ( 5) 00:43:35.268 3526.461 - 3542.065: 99.6441% ( 5) 00:43:35.268 3542.065 - 3557.669: 99.6475% ( 3) 00:43:35.268 3557.669 - 3573.272: 99.6532% ( 5) 00:43:35.268 3573.272 - 3588.876: 99.6577% ( 4) 00:43:35.268 3588.876 - 3604.480: 99.6611% ( 3) 00:43:35.268 3604.480 - 3620.084: 99.6623% ( 1) 00:43:35.268 3620.084 - 3635.688: 99.6657% ( 3) 00:43:35.268 3635.688 - 3651.291: 99.6691% ( 3) 00:43:35.268 3651.291 - 3666.895: 99.6736% ( 4) 00:43:35.268 3666.895 - 3682.499: 99.6793% ( 5) 00:43:35.268 3682.499 - 3698.103: 99.6839% ( 4) 00:43:35.268 3698.103 - 3713.707: 99.6861% ( 2) 00:43:35.268 3713.707 - 3729.310: 99.6895% ( 3) 00:43:35.268 3729.310 - 3744.914: 99.6930% ( 3) 00:43:35.268 3744.914 - 3760.518: 99.6952% ( 2) 00:43:35.268 3760.518 - 3776.122: 99.6998% ( 4) 00:43:35.268 3776.122 - 3791.726: 99.7032% ( 3) 00:43:35.268 3791.726 - 3807.330: 99.7066% ( 3) 00:43:35.268 3807.330 - 3822.933: 99.7100% ( 3) 00:43:35.268 3822.933 - 3838.537: 99.7134% ( 3) 00:43:35.268 3838.537 - 3854.141: 99.7180% ( 4) 00:43:35.268 3854.141 - 3869.745: 99.7214% ( 3) 00:43:35.268 3869.745 - 3885.349: 99.7248% ( 3) 00:43:35.268 3885.349 - 3900.952: 99.7271% ( 2) 00:43:35.268 3900.952 - 3916.556: 99.7293% ( 2) 00:43:35.268 3916.556 - 3932.160: 99.7328% ( 3) 00:43:35.268 3932.160 - 3947.764: 99.7362% ( 3) 00:43:35.268 3947.764 - 3963.368: 99.7396% ( 3) 00:43:35.268 3963.368 - 3978.971: 99.7430% ( 3) 00:43:35.268 3978.971 - 3994.575: 99.7475% ( 4) 00:43:35.268 3994.575 - 4025.783: 99.7532% ( 5) 00:43:35.268 4025.783 - 4056.990: 99.7589% ( 5) 00:43:35.268 4056.990 - 4088.198: 99.7669% ( 7) 00:43:35.268 4088.198 - 4119.406: 99.7726% ( 5) 00:43:35.268 4119.406 - 4150.613: 99.7782% ( 5) 00:43:35.268 4150.613 - 4181.821: 99.7839% ( 5) 00:43:35.268 4181.821 - 4213.029: 99.7896% ( 5) 00:43:35.268 4213.029 - 4244.236: 99.7953% ( 5) 00:43:35.268 4244.236 - 4275.444: 99.7987% ( 3) 00:43:35.268 4275.444 - 4306.651: 99.8033% ( 4) 00:43:35.268 4306.651 - 4337.859: 99.8089% ( 5) 00:43:35.268 4337.859 - 4369.067: 99.8146% ( 5) 00:43:35.268 4369.067 - 4400.274: 99.8203% ( 5) 00:43:35.268 4400.274 - 4431.482: 99.8237% ( 3) 00:43:35.268 4431.482 - 4462.690: 99.8283% ( 4) 00:43:35.268 4462.690 - 4493.897: 99.8340% ( 5) 00:43:35.268 4493.897 - 4525.105: 
99.8385% ( 4) 00:43:35.268 4525.105 - 4556.312: 99.8419% ( 3) 00:43:35.268 4556.312 - 4587.520: 99.8476% ( 5) 00:43:35.268 4587.520 - 4618.728: 99.8522% ( 4) 00:43:35.268 4618.728 - 4649.935: 99.8578% ( 5) 00:43:35.268 4649.935 - 4681.143: 99.8624% ( 4) 00:43:35.268 4681.143 - 4712.350: 99.8658% ( 3) 00:43:35.268 4712.350 - 4743.558: 99.8715% ( 5) 00:43:35.268 4743.558 - 4774.766: 99.8760% ( 4) 00:43:35.268 4774.766 - 4805.973: 99.8795% ( 3) 00:43:35.268 4805.973 - 4837.181: 99.8806% ( 1) 00:43:35.268 4837.181 - 4868.389: 99.8840% ( 3) 00:43:35.268 4868.389 - 4899.596: 99.8874% ( 3) 00:43:35.268 4899.596 - 4930.804: 99.8897% ( 2) 00:43:35.268 4962.011 - 4993.219: 99.8908% ( 1) 00:43:35.268 4993.219 - 5024.427: 99.8920% ( 1) 00:43:35.268 5055.634 - 5086.842: 99.8931% ( 1) 00:43:35.268 5086.842 - 5118.050: 99.8942% ( 1) 00:43:35.268 5118.050 - 5149.257: 99.8954% ( 1) 00:43:35.268 5149.257 - 5180.465: 99.8965% ( 1) 00:43:35.268 5180.465 - 5211.672: 99.8977% ( 1) 00:43:35.268 5242.880 - 5274.088: 99.8988% ( 1) 00:43:35.268 5274.088 - 5305.295: 99.8999% ( 1) 00:43:35.268 5305.295 - 5336.503: 99.9011% ( 1) 00:43:35.268 5336.503 - 5367.710: 99.9022% ( 1) 00:43:35.268 5398.918 - 5430.126: 99.9033% ( 1) 00:43:35.268 5430.126 - 5461.333: 99.9045% ( 1) 00:43:35.268 5492.541 - 5523.749: 99.9056% ( 1) 00:43:35.268 5523.749 - 5554.956: 99.9067% ( 1) 00:43:35.268 5554.956 - 5586.164: 99.9079% ( 1) 00:43:35.268 5586.164 - 5617.371: 99.9090% ( 1) 00:43:35.268 5648.579 - 5679.787: 99.9102% ( 1) 00:43:35.268 5679.787 - 5710.994: 99.9113% ( 1) 00:43:35.268 5742.202 - 5773.410: 99.9124% ( 1) 00:43:35.268 5773.410 - 5804.617: 99.9136% ( 1) 00:43:35.268 5835.825 - 5867.032: 99.9147% ( 1) 00:43:35.268 5867.032 - 5898.240: 99.9158% ( 1) 00:43:35.268 5898.240 - 5929.448: 99.9170% ( 1) 00:43:35.268 5960.655 - 5991.863: 99.9181% ( 1) 00:43:35.268 5991.863 - 6023.070: 99.9193% ( 1) 00:43:35.269 6023.070 - 6054.278: 99.9204% ( 1) 00:43:35.269 6085.486 - 6116.693: 99.9215% ( 1) 00:43:35.269 6116.693 - 6147.901: 99.9227% ( 1) 00:43:35.269 6179.109 - 6210.316: 99.9238% ( 1) 00:43:35.269 6210.316 - 6241.524: 99.9249% ( 1) 00:43:35.269 6241.524 - 6272.731: 99.9261% ( 1) 00:43:35.269 6303.939 - 6335.147: 99.9272% ( 1) 00:43:35.269 6366.354 - 6397.562: 99.9284% ( 1) 00:43:35.269 6397.562 - 6428.770: 99.9295% ( 1) 00:43:35.269 6428.770 - 6459.977: 99.9306% ( 1) 00:43:35.269 6459.977 - 6491.185: 99.9318% ( 1) 00:43:35.269 6491.185 - 6522.392: 99.9329% ( 1) 00:43:35.269 6553.600 - 6584.808: 99.9340% ( 1) 00:43:35.269 6584.808 - 6616.015: 99.9352% ( 1) 00:43:35.269 6616.015 - 6647.223: 99.9363% ( 1) 00:43:35.269 6678.430 - 6709.638: 99.9375% ( 1) 00:43:35.269 6709.638 - 6740.846: 99.9386% ( 1) 00:43:35.269 6772.053 - 6803.261: 99.9397% ( 1) 00:43:35.269 6803.261 - 6834.469: 99.9409% ( 1) 00:43:35.269 6834.469 - 6865.676: 99.9420% ( 1) 00:43:35.269 6865.676 - 6896.884: 99.9431% ( 1) 00:43:35.269 6928.091 - 6959.299: 99.9443% ( 1) 00:43:35.269 6959.299 - 6990.507: 99.9454% ( 1) 00:43:35.269 6990.507 - 7021.714: 99.9466% ( 1) 00:43:35.269 7021.714 - 7052.922: 99.9477% ( 1) 00:43:35.269 7052.922 - 7084.130: 99.9488% ( 1) 00:43:35.269 7115.337 - 7146.545: 99.9500% ( 1) 00:43:35.269 7177.752 - 7208.960: 99.9511% ( 1) 00:43:35.269 7208.960 - 7240.168: 99.9522% ( 1) 00:43:35.269 7240.168 - 7271.375: 99.9534% ( 1) 00:43:35.269 7302.583 - 7333.790: 99.9545% ( 1) 00:43:35.269 7333.790 - 7364.998: 99.9556% ( 1) 00:43:35.269 7364.998 - 7396.206: 99.9568% ( 1) 00:43:35.269 7396.206 - 7427.413: 99.9579% ( 1) 00:43:35.269 7458.621 - 7489.829: 
99.9591% ( 1) 00:43:35.269 7489.829 - 7521.036: 99.9602% ( 1) 00:43:35.269 7521.036 - 7552.244: 99.9613% ( 1) 00:43:35.269 7552.244 - 7583.451: 99.9625% ( 1) 00:43:35.269 7614.659 - 7645.867: 99.9636% ( 1) 00:43:35.269 7645.867 - 7677.074: 99.9647% ( 1) 00:43:35.269 7708.282 - 7739.490: 99.9659% ( 1) 00:43:35.269 7739.490 - 7770.697: 99.9670% ( 1) 00:43:35.269 7770.697 - 7801.905: 99.9682% ( 1) 00:43:35.269 7801.905 - 7833.112: 99.9693% ( 1) 00:43:35.269 7833.112 - 7864.320: 99.9704% ( 1) 00:43:35.269 7895.528 - 7926.735: 99.9716% ( 1) 00:43:35.269 7926.735 - 7957.943: 99.9727% ( 1) 00:43:35.269 7957.943 - 7989.150: 99.9738% ( 1) 00:43:35.269 7989.150 - 8051.566: 99.9750% ( 1) 00:43:35.269 8051.566 - 8113.981: 99.9773% ( 2) 00:43:35.269 8113.981 - 8176.396: 99.9784% ( 1) 00:43:35.269 8176.396 - 8238.811: 99.9807% ( 2) 00:43:35.269 8238.811 - 8301.227: 99.9829% ( 2) 00:43:35.269 8301.227 - 8363.642: 99.9841% ( 1) 00:43:35.269 8363.642 - 8426.057: 99.9864% ( 2) 00:43:35.269 8426.057 - 8488.472: 99.9875% ( 1) 00:43:35.269 8488.472 - 8550.888: 99.9898% ( 2) 00:43:35.269 8550.888 - 8613.303: 99.9920% ( 2) 00:43:35.269 8613.303 - 8675.718: 99.9932% ( 1) 00:43:35.269 8675.718 - 8738.133: 99.9943% ( 1) 00:43:35.269 8738.133 - 8800.549: 99.9966% ( 2) 00:43:35.269 8800.549 - 8862.964: 99.9977% ( 1) 00:43:35.269 8862.964 - 8925.379: 99.9989% ( 1) 00:43:35.269 8987.794 - 9050.210: 100.0000% ( 1) 00:43:35.269 00:43:35.269 19:10:35 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:43:36.645 Initializing NVMe Controllers 00:43:36.645 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:36.645 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:43:36.645 Initialization complete. Launching workers. 
00:43:36.645 ======================================================== 00:43:36.645 Latency(us) 00:43:36.645 Device Information : IOPS MiB/s Average min max 00:43:36.645 PCIE (0000:00:10.0) NSID 1 from core 0: 78043.97 914.58 1639.69 578.64 9981.16 00:43:36.645 ======================================================== 00:43:36.645 Total : 78043.97 914.58 1639.69 578.64 9981.16 00:43:36.645 00:43:36.645 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:43:36.645 ================================================================================= 00:43:36.645 1.00000% : 1061.059us 00:43:36.645 10.00000% : 1302.918us 00:43:36.645 25.00000% : 1412.145us 00:43:36.645 50.00000% : 1544.777us 00:43:36.645 75.00000% : 1747.627us 00:43:36.645 90.00000% : 2028.495us 00:43:36.645 95.00000% : 2340.571us 00:43:36.645 98.00000% : 3089.554us 00:43:36.645 99.00000% : 3464.046us 00:43:36.645 99.50000% : 3854.141us 00:43:36.645 99.90000% : 5305.295us 00:43:36.645 99.99000% : 6834.469us 00:43:36.645 99.99900% : 9986.438us 00:43:36.645 99.99990% : 9986.438us 00:43:36.645 99.99999% : 9986.438us 00:43:36.645 00:43:36.645 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:43:36.645 ============================================================================== 00:43:36.645 Range in us Cumulative IO count 00:43:36.645 577.341 - 581.242: 0.0013% ( 1) 00:43:36.645 635.855 - 639.756: 0.0026% ( 1) 00:43:36.645 647.558 - 651.459: 0.0038% ( 1) 00:43:36.645 651.459 - 655.360: 0.0051% ( 1) 00:43:36.645 655.360 - 659.261: 0.0064% ( 1) 00:43:36.645 670.964 - 674.865: 0.0077% ( 1) 00:43:36.646 674.865 - 678.766: 0.0102% ( 2) 00:43:36.646 678.766 - 682.667: 0.0115% ( 1) 00:43:36.646 698.270 - 702.171: 0.0154% ( 3) 00:43:36.646 706.072 - 709.973: 0.0167% ( 1) 00:43:36.646 717.775 - 721.676: 0.0179% ( 1) 00:43:36.646 725.577 - 729.478: 0.0192% ( 1) 00:43:36.646 729.478 - 733.379: 0.0205% ( 1) 00:43:36.646 733.379 - 737.280: 0.0243% ( 3) 00:43:36.646 741.181 - 745.082: 0.0256% ( 1) 00:43:36.646 745.082 - 748.983: 0.0269% ( 1) 00:43:36.646 752.884 - 756.785: 0.0282% ( 1) 00:43:36.646 756.785 - 760.686: 0.0320% ( 3) 00:43:36.646 764.587 - 768.488: 0.0333% ( 1) 00:43:36.646 768.488 - 772.389: 0.0346% ( 1) 00:43:36.646 772.389 - 776.290: 0.0359% ( 1) 00:43:36.646 780.190 - 784.091: 0.0410% ( 4) 00:43:36.646 787.992 - 791.893: 0.0423% ( 1) 00:43:36.646 791.893 - 795.794: 0.0448% ( 2) 00:43:36.646 799.695 - 803.596: 0.0487% ( 3) 00:43:36.646 803.596 - 807.497: 0.0500% ( 1) 00:43:36.646 807.497 - 811.398: 0.0525% ( 2) 00:43:36.646 811.398 - 815.299: 0.0551% ( 2) 00:43:36.646 815.299 - 819.200: 0.0576% ( 2) 00:43:36.646 819.200 - 823.101: 0.0615% ( 3) 00:43:36.646 823.101 - 827.002: 0.0653% ( 3) 00:43:36.646 830.903 - 834.804: 0.0666% ( 1) 00:43:36.646 834.804 - 838.705: 0.0730% ( 5) 00:43:36.646 838.705 - 842.606: 0.0743% ( 1) 00:43:36.646 842.606 - 846.507: 0.0794% ( 4) 00:43:36.646 846.507 - 850.408: 0.0845% ( 4) 00:43:36.646 850.408 - 854.309: 0.0871% ( 2) 00:43:36.646 858.210 - 862.110: 0.0884% ( 1) 00:43:36.646 862.110 - 866.011: 0.0935% ( 4) 00:43:36.646 866.011 - 869.912: 0.0986% ( 4) 00:43:36.646 869.912 - 873.813: 0.1050% ( 5) 00:43:36.646 873.813 - 877.714: 0.1089% ( 3) 00:43:36.646 877.714 - 881.615: 0.1127% ( 3) 00:43:36.646 881.615 - 885.516: 0.1191% ( 5) 00:43:36.646 885.516 - 889.417: 0.1217% ( 2) 00:43:36.646 889.417 - 893.318: 0.1255% ( 3) 00:43:36.646 893.318 - 897.219: 0.1281% ( 2) 00:43:36.646 897.219 - 901.120: 0.1371% ( 7) 00:43:36.646 901.120 - 905.021: 0.1396% ( 2) 00:43:36.646 905.021 
- 908.922: 0.1448% ( 4) 00:43:36.646 908.922 - 912.823: 0.1499% ( 4) 00:43:36.646 912.823 - 916.724: 0.1550% ( 4) 00:43:36.646 916.724 - 920.625: 0.1588% ( 3) 00:43:36.646 920.625 - 924.526: 0.1665% ( 6) 00:43:36.646 924.526 - 928.427: 0.1755% ( 7) 00:43:36.646 928.427 - 932.328: 0.1845% ( 7) 00:43:36.646 932.328 - 936.229: 0.1934% ( 7) 00:43:36.646 936.229 - 940.130: 0.1998% ( 5) 00:43:36.646 940.130 - 944.030: 0.2152% ( 12) 00:43:36.646 944.030 - 947.931: 0.2255% ( 8) 00:43:36.646 947.931 - 951.832: 0.2421% ( 13) 00:43:36.646 951.832 - 955.733: 0.2460% ( 3) 00:43:36.646 955.733 - 959.634: 0.2588% ( 10) 00:43:36.646 959.634 - 963.535: 0.2741% ( 12) 00:43:36.646 963.535 - 967.436: 0.2946% ( 16) 00:43:36.646 967.436 - 971.337: 0.3164% ( 17) 00:43:36.646 971.337 - 975.238: 0.3356% ( 15) 00:43:36.646 975.238 - 979.139: 0.3587% ( 18) 00:43:36.646 979.139 - 983.040: 0.3792% ( 16) 00:43:36.646 983.040 - 986.941: 0.3984% ( 15) 00:43:36.646 986.941 - 990.842: 0.4279% ( 23) 00:43:36.646 990.842 - 994.743: 0.4509% ( 18) 00:43:36.646 994.743 - 998.644: 0.4753% ( 19) 00:43:36.646 998.644 - 1006.446: 0.5329% ( 45) 00:43:36.646 1006.446 - 1014.248: 0.5854% ( 41) 00:43:36.646 1014.248 - 1022.050: 0.6431% ( 45) 00:43:36.646 1022.050 - 1029.851: 0.7020% ( 46) 00:43:36.646 1029.851 - 1037.653: 0.7712% ( 54) 00:43:36.646 1037.653 - 1045.455: 0.8493% ( 61) 00:43:36.646 1045.455 - 1053.257: 0.9172% ( 53) 00:43:36.646 1053.257 - 1061.059: 1.0069% ( 70) 00:43:36.646 1061.059 - 1068.861: 1.0889% ( 64) 00:43:36.646 1068.861 - 1076.663: 1.1773% ( 69) 00:43:36.646 1076.663 - 1084.465: 1.2772% ( 78) 00:43:36.646 1084.465 - 1092.267: 1.3874% ( 86) 00:43:36.646 1092.267 - 1100.069: 1.5270% ( 109) 00:43:36.646 1100.069 - 1107.870: 1.6525% ( 98) 00:43:36.646 1107.870 - 1115.672: 1.8063% ( 120) 00:43:36.646 1115.672 - 1123.474: 1.9920% ( 145) 00:43:36.646 1123.474 - 1131.276: 2.1649% ( 135) 00:43:36.646 1131.276 - 1139.078: 2.3135% ( 116) 00:43:36.646 1139.078 - 1146.880: 2.5211% ( 162) 00:43:36.646 1146.880 - 1154.682: 2.7504% ( 179) 00:43:36.646 1154.682 - 1162.484: 2.9617% ( 165) 00:43:36.646 1162.484 - 1170.286: 3.1821% ( 172) 00:43:36.646 1170.286 - 1178.088: 3.4614% ( 218) 00:43:36.646 1178.088 - 1185.890: 3.7163% ( 199) 00:43:36.646 1185.890 - 1193.691: 4.0032% ( 224) 00:43:36.646 1193.691 - 1201.493: 4.3068% ( 237) 00:43:36.646 1201.493 - 1209.295: 4.6642% ( 279) 00:43:36.646 1209.295 - 1217.097: 4.9909% ( 255) 00:43:36.646 1217.097 - 1224.899: 5.3124% ( 251) 00:43:36.646 1224.899 - 1232.701: 5.6993% ( 302) 00:43:36.646 1232.701 - 1240.503: 6.0990% ( 312) 00:43:36.646 1240.503 - 1248.305: 6.5269% ( 334) 00:43:36.646 1248.305 - 1256.107: 6.9650% ( 342) 00:43:36.646 1256.107 - 1263.909: 7.4159% ( 352) 00:43:36.646 1263.909 - 1271.710: 7.9091% ( 385) 00:43:36.646 1271.710 - 1279.512: 8.5496% ( 500) 00:43:36.646 1279.512 - 1287.314: 9.1812% ( 493) 00:43:36.646 1287.314 - 1295.116: 9.8716% ( 539) 00:43:36.646 1295.116 - 1302.918: 10.5378% ( 520) 00:43:36.646 1302.918 - 1310.720: 11.2462% ( 553) 00:43:36.646 1310.720 - 1318.522: 11.9982% ( 587) 00:43:36.646 1318.522 - 1326.324: 12.8449% ( 661) 00:43:36.646 1326.324 - 1334.126: 13.7288% ( 690) 00:43:36.646 1334.126 - 1341.928: 14.6499% ( 719) 00:43:36.646 1341.928 - 1349.730: 15.6030% ( 744) 00:43:36.646 1349.730 - 1357.531: 16.6816% ( 842) 00:43:36.646 1357.531 - 1365.333: 17.7128% ( 805) 00:43:36.646 1365.333 - 1373.135: 18.8632% ( 898) 00:43:36.646 1373.135 - 1380.937: 20.0879% ( 956) 00:43:36.646 1380.937 - 1388.739: 21.3049% ( 950) 00:43:36.646 1388.739 - 1396.541: 
22.5718% ( 989) 00:43:36.646 1396.541 - 1404.343: 23.8554% ( 1002) 00:43:36.646 1404.343 - 1412.145: 25.0711% ( 949) 00:43:36.646 1412.145 - 1419.947: 26.4213% ( 1054) 00:43:36.646 1419.947 - 1427.749: 27.8842% ( 1142) 00:43:36.646 1427.749 - 1435.550: 29.3715% ( 1161) 00:43:36.646 1435.550 - 1443.352: 30.8588% ( 1161) 00:43:36.646 1443.352 - 1451.154: 32.4755% ( 1262) 00:43:36.646 1451.154 - 1458.956: 34.2535% ( 1388) 00:43:36.646 1458.956 - 1466.758: 35.8792% ( 1269) 00:43:36.646 1466.758 - 1474.560: 37.4664% ( 1239) 00:43:36.646 1474.560 - 1482.362: 38.9011% ( 1120) 00:43:36.646 1482.362 - 1490.164: 40.2795% ( 1076) 00:43:36.646 1490.164 - 1497.966: 41.7668% ( 1161) 00:43:36.646 1497.966 - 1505.768: 43.2938% ( 1192) 00:43:36.646 1505.768 - 1513.570: 44.8567% ( 1220) 00:43:36.646 1513.570 - 1521.371: 46.2735% ( 1106) 00:43:36.646 1521.371 - 1529.173: 47.7748% ( 1172) 00:43:36.646 1529.173 - 1536.975: 49.2096% ( 1120) 00:43:36.646 1536.975 - 1544.777: 50.7276% ( 1185) 00:43:36.646 1544.777 - 1552.579: 52.1432% ( 1105) 00:43:36.646 1552.579 - 1560.381: 53.5100% ( 1067) 00:43:36.646 1560.381 - 1568.183: 54.6706% ( 906) 00:43:36.646 1568.183 - 1575.985: 55.8953% ( 956) 00:43:36.646 1575.985 - 1583.787: 57.1123% ( 950) 00:43:36.646 1583.787 - 1591.589: 58.1794% ( 833) 00:43:36.646 1591.589 - 1599.390: 59.4643% ( 1003) 00:43:36.646 1599.390 - 1607.192: 60.4955% ( 805) 00:43:36.646 1607.192 - 1614.994: 61.5011% ( 785) 00:43:36.646 1614.994 - 1622.796: 62.5375% ( 809) 00:43:36.646 1622.796 - 1630.598: 63.5021% ( 753) 00:43:36.646 1630.598 - 1638.400: 64.4923% ( 773) 00:43:36.646 1638.400 - 1646.202: 65.4787% ( 770) 00:43:36.646 1646.202 - 1654.004: 66.4228% ( 737) 00:43:36.646 1654.004 - 1661.806: 67.2222% ( 624) 00:43:36.646 1661.806 - 1669.608: 68.0664% ( 659) 00:43:36.646 1669.608 - 1677.410: 68.9221% ( 668) 00:43:36.646 1677.410 - 1685.211: 69.7151% ( 619) 00:43:36.646 1685.211 - 1693.013: 70.5862% ( 680) 00:43:36.646 1693.013 - 1700.815: 71.3228% ( 575) 00:43:36.646 1700.815 - 1708.617: 72.0786% ( 590) 00:43:36.646 1708.617 - 1716.419: 72.7998% ( 563) 00:43:36.646 1716.419 - 1724.221: 73.5415% ( 579) 00:43:36.646 1724.221 - 1732.023: 74.1846% ( 502) 00:43:36.646 1732.023 - 1739.825: 74.7995% ( 480) 00:43:36.646 1739.825 - 1747.627: 75.3914% ( 462) 00:43:36.646 1747.627 - 1755.429: 75.9589% ( 443) 00:43:36.646 1755.429 - 1763.230: 76.5366% ( 451) 00:43:36.646 1763.230 - 1771.032: 77.0874% ( 430) 00:43:36.646 1771.032 - 1778.834: 77.6524% ( 441) 00:43:36.646 1778.834 - 1786.636: 78.2109% ( 436) 00:43:36.646 1786.636 - 1794.438: 78.6951% ( 378) 00:43:36.646 1794.438 - 1802.240: 79.2396% ( 425) 00:43:36.646 1802.240 - 1810.042: 79.7827% ( 424) 00:43:36.646 1810.042 - 1817.844: 80.2798% ( 388) 00:43:36.646 1817.844 - 1825.646: 80.7474% ( 365) 00:43:36.646 1825.646 - 1833.448: 81.2418% ( 386) 00:43:36.646 1833.448 - 1841.250: 81.7222% ( 375) 00:43:36.646 1841.250 - 1849.051: 82.1706% ( 350) 00:43:36.646 1849.051 - 1856.853: 82.6087% ( 342) 00:43:36.646 1856.853 - 1864.655: 83.0699% ( 360) 00:43:36.646 1864.655 - 1872.457: 83.5554% ( 379) 00:43:36.646 1872.457 - 1880.259: 83.9935% ( 342) 00:43:36.646 1880.259 - 1888.061: 84.4278% ( 339) 00:43:36.646 1888.061 - 1895.863: 84.8262% ( 311) 00:43:36.646 1895.863 - 1903.665: 85.2527% ( 333) 00:43:36.646 1903.665 - 1911.467: 85.6371% ( 300) 00:43:36.647 1911.467 - 1919.269: 86.0009% ( 284) 00:43:36.647 1919.269 - 1927.070: 86.3775% ( 294) 00:43:36.647 1927.070 - 1934.872: 86.7490% ( 290) 00:43:36.647 1934.872 - 1942.674: 87.0872% ( 264) 00:43:36.647 
1942.674 - 1950.476: 87.4331% ( 270) 00:43:36.647 1950.476 - 1958.278: 87.7277% ( 230) 00:43:36.647 1958.278 - 1966.080: 88.0223% ( 230) 00:43:36.647 1966.080 - 1973.882: 88.3144% ( 228) 00:43:36.647 1973.882 - 1981.684: 88.6065% ( 228) 00:43:36.647 1981.684 - 1989.486: 88.8909% ( 222) 00:43:36.647 1989.486 - 1997.288: 89.1650% ( 214) 00:43:36.647 1997.288 - 2012.891: 89.6902% ( 410) 00:43:36.647 2012.891 - 2028.495: 90.1873% ( 388) 00:43:36.647 2028.495 - 2044.099: 90.6805% ( 385) 00:43:36.647 2044.099 - 2059.703: 91.1058% ( 332) 00:43:36.647 2059.703 - 2075.307: 91.4875% ( 298) 00:43:36.647 2075.307 - 2090.910: 91.8642% ( 294) 00:43:36.647 2090.910 - 2106.514: 92.1831% ( 249) 00:43:36.647 2106.514 - 2122.118: 92.5085% ( 254) 00:43:36.647 2122.118 - 2137.722: 92.7852% ( 216) 00:43:36.647 2137.722 - 2153.326: 93.0568% ( 212) 00:43:36.647 2153.326 - 2168.930: 93.2899% ( 182) 00:43:36.647 2168.930 - 2184.533: 93.5103% ( 172) 00:43:36.647 2184.533 - 2200.137: 93.7229% ( 166) 00:43:36.647 2200.137 - 2215.741: 93.9164% ( 151) 00:43:36.647 2215.741 - 2231.345: 94.0932% ( 138) 00:43:36.647 2231.345 - 2246.949: 94.2635% ( 133) 00:43:36.647 2246.949 - 2262.552: 94.4185% ( 121) 00:43:36.647 2262.552 - 2278.156: 94.5556% ( 107) 00:43:36.647 2278.156 - 2293.760: 94.6824% ( 99) 00:43:36.647 2293.760 - 2309.364: 94.8093% ( 99) 00:43:36.647 2309.364 - 2324.968: 94.9194% ( 86) 00:43:36.647 2324.968 - 2340.571: 95.0398% ( 94) 00:43:36.647 2340.571 - 2356.175: 95.1398% ( 78) 00:43:36.647 2356.175 - 2371.779: 95.2371% ( 76) 00:43:36.647 2371.779 - 2387.383: 95.3332% ( 75) 00:43:36.647 2387.383 - 2402.987: 95.4293% ( 75) 00:43:36.647 2402.987 - 2418.590: 95.5036% ( 58) 00:43:36.647 2418.590 - 2434.194: 95.5894% ( 67) 00:43:36.647 2434.194 - 2449.798: 95.6637% ( 58) 00:43:36.647 2449.798 - 2465.402: 95.7329% ( 54) 00:43:36.647 2465.402 - 2481.006: 95.8021% ( 54) 00:43:36.647 2481.006 - 2496.610: 95.8674% ( 51) 00:43:36.647 2496.610 - 2512.213: 95.9327% ( 51) 00:43:36.647 2512.213 - 2527.817: 96.0006% ( 53) 00:43:36.647 2527.817 - 2543.421: 96.0570% ( 44) 00:43:36.647 2543.421 - 2559.025: 96.1236% ( 52) 00:43:36.647 2559.025 - 2574.629: 96.1761% ( 41) 00:43:36.647 2574.629 - 2590.232: 96.2414% ( 51) 00:43:36.647 2590.232 - 2605.836: 96.2978% ( 44) 00:43:36.647 2605.836 - 2621.440: 96.3478% ( 39) 00:43:36.647 2621.440 - 2637.044: 96.4003% ( 41) 00:43:36.647 2637.044 - 2652.648: 96.4541% ( 42) 00:43:36.647 2652.648 - 2668.251: 96.5092% ( 43) 00:43:36.647 2668.251 - 2683.855: 96.5707% ( 48) 00:43:36.647 2683.855 - 2699.459: 96.6322% ( 48) 00:43:36.647 2699.459 - 2715.063: 96.6872% ( 43) 00:43:36.647 2715.063 - 2730.667: 96.7462% ( 46) 00:43:36.647 2730.667 - 2746.270: 96.7961% ( 39) 00:43:36.647 2746.270 - 2761.874: 96.8461% ( 39) 00:43:36.647 2761.874 - 2777.478: 96.9037% ( 45) 00:43:36.647 2777.478 - 2793.082: 96.9588% ( 43) 00:43:36.647 2793.082 - 2808.686: 97.0178% ( 46) 00:43:36.647 2808.686 - 2824.290: 97.0754% ( 45) 00:43:36.647 2824.290 - 2839.893: 97.1395% ( 50) 00:43:36.647 2839.893 - 2855.497: 97.2009% ( 48) 00:43:36.647 2855.497 - 2871.101: 97.2676% ( 52) 00:43:36.647 2871.101 - 2886.705: 97.3239% ( 44) 00:43:36.647 2886.705 - 2902.309: 97.3828% ( 46) 00:43:36.647 2902.309 - 2917.912: 97.4302% ( 37) 00:43:36.647 2917.912 - 2933.516: 97.4917% ( 48) 00:43:36.647 2933.516 - 2949.120: 97.5481% ( 44) 00:43:36.647 2949.120 - 2964.724: 97.5993% ( 40) 00:43:36.647 2964.724 - 2980.328: 97.6570% ( 45) 00:43:36.647 2980.328 - 2995.931: 97.7070% ( 39) 00:43:36.647 2995.931 - 3011.535: 97.7569% ( 39) 00:43:36.647 
3011.535 - 3027.139: 97.8082% ( 40) 00:43:36.647 3027.139 - 3042.743: 97.8607% ( 41) 00:43:36.647 3042.743 - 3058.347: 97.9094% ( 38) 00:43:36.647 3058.347 - 3073.950: 97.9606% ( 40) 00:43:36.647 3073.950 - 3089.554: 98.0029% ( 33) 00:43:36.647 3089.554 - 3105.158: 98.0490% ( 36) 00:43:36.647 3105.158 - 3120.762: 98.1015% ( 41) 00:43:36.647 3120.762 - 3136.366: 98.1553% ( 42) 00:43:36.647 3136.366 - 3151.970: 98.2117% ( 44) 00:43:36.647 3151.970 - 3167.573: 98.2693% ( 45) 00:43:36.647 3167.573 - 3183.177: 98.3129% ( 34) 00:43:36.647 3183.177 - 3198.781: 98.3564% ( 34) 00:43:36.647 3198.781 - 3214.385: 98.4064% ( 39) 00:43:36.647 3214.385 - 3229.989: 98.4525% ( 36) 00:43:36.647 3229.989 - 3245.592: 98.4961% ( 34) 00:43:36.647 3245.592 - 3261.196: 98.5383% ( 33) 00:43:36.647 3261.196 - 3276.800: 98.5742% ( 28) 00:43:36.647 3276.800 - 3292.404: 98.6152% ( 32) 00:43:36.647 3292.404 - 3308.008: 98.6562% ( 32) 00:43:36.647 3308.008 - 3323.611: 98.6985% ( 33) 00:43:36.647 3323.611 - 3339.215: 98.7356% ( 29) 00:43:36.647 3339.215 - 3354.819: 98.7753% ( 31) 00:43:36.647 3354.819 - 3370.423: 98.8074% ( 25) 00:43:36.647 3370.423 - 3386.027: 98.8419% ( 27) 00:43:36.647 3386.027 - 3401.630: 98.8714% ( 23) 00:43:36.647 3401.630 - 3417.234: 98.9022% ( 24) 00:43:36.647 3417.234 - 3432.838: 98.9367% ( 27) 00:43:36.647 3432.838 - 3448.442: 98.9688% ( 25) 00:43:36.647 3448.442 - 3464.046: 99.0008% ( 25) 00:43:36.647 3464.046 - 3479.650: 99.0328% ( 25) 00:43:36.647 3479.650 - 3495.253: 99.0623% ( 23) 00:43:36.647 3495.253 - 3510.857: 99.0892% ( 21) 00:43:36.647 3510.857 - 3526.461: 99.1161% ( 21) 00:43:36.647 3526.461 - 3542.065: 99.1456% ( 23) 00:43:36.647 3542.065 - 3557.669: 99.1699% ( 19) 00:43:36.647 3557.669 - 3573.272: 99.1942% ( 19) 00:43:36.647 3573.272 - 3588.876: 99.2160% ( 17) 00:43:36.647 3588.876 - 3604.480: 99.2327% ( 13) 00:43:36.647 3604.480 - 3620.084: 99.2544% ( 17) 00:43:36.647 3620.084 - 3635.688: 99.2749% ( 16) 00:43:36.647 3635.688 - 3651.291: 99.2967% ( 17) 00:43:36.647 3651.291 - 3666.895: 99.3172% ( 16) 00:43:36.647 3666.895 - 3682.499: 99.3377% ( 16) 00:43:36.647 3682.499 - 3698.103: 99.3518% ( 11) 00:43:36.647 3698.103 - 3713.707: 99.3685% ( 13) 00:43:36.647 3713.707 - 3729.310: 99.3851% ( 13) 00:43:36.647 3729.310 - 3744.914: 99.3992% ( 11) 00:43:36.647 3744.914 - 3760.518: 99.4158% ( 13) 00:43:36.647 3760.518 - 3776.122: 99.4325% ( 13) 00:43:36.647 3776.122 - 3791.726: 99.4479% ( 12) 00:43:36.647 3791.726 - 3807.330: 99.4645% ( 13) 00:43:36.647 3807.330 - 3822.933: 99.4799% ( 12) 00:43:36.647 3822.933 - 3838.537: 99.4940% ( 11) 00:43:36.647 3838.537 - 3854.141: 99.5081% ( 11) 00:43:36.647 3854.141 - 3869.745: 99.5209% ( 10) 00:43:36.647 3869.745 - 3885.349: 99.5324% ( 9) 00:43:36.647 3885.349 - 3900.952: 99.5593% ( 21) 00:43:36.647 3900.952 - 3916.556: 99.5734% ( 11) 00:43:36.647 3916.556 - 3932.160: 99.5888% ( 12) 00:43:36.647 3932.160 - 3947.764: 99.6042% ( 12) 00:43:36.647 3947.764 - 3963.368: 99.6157% ( 9) 00:43:36.647 3963.368 - 3978.971: 99.6272% ( 9) 00:43:36.647 3978.971 - 3994.575: 99.6375% ( 8) 00:43:36.647 3994.575 - 4025.783: 99.6618% ( 19) 00:43:36.647 4025.783 - 4056.990: 99.6810% ( 15) 00:43:36.647 4056.990 - 4088.198: 99.6990% ( 14) 00:43:36.647 4088.198 - 4119.406: 99.7169% ( 14) 00:43:36.647 4119.406 - 4150.613: 99.7335% ( 13) 00:43:36.647 4150.613 - 4181.821: 99.7476% ( 11) 00:43:36.647 4181.821 - 4213.029: 99.7617% ( 11) 00:43:36.647 4213.029 - 4244.236: 99.7784% ( 13) 00:43:36.647 4244.236 - 4275.444: 99.7963% ( 14) 00:43:36.647 4275.444 - 4306.651: 99.8104% 
( 11) 00:43:36.647 4306.651 - 4337.859: 99.8207% ( 8) 00:43:36.647 4337.859 - 4369.067: 99.8309% ( 8) 00:43:36.647 4369.067 - 4400.274: 99.8386% ( 6) 00:43:36.647 4400.274 - 4431.482: 99.8463% ( 6) 00:43:36.647 4431.482 - 4462.690: 99.8527% ( 5) 00:43:36.647 4462.690 - 4493.897: 99.8604% ( 6) 00:43:36.647 4493.897 - 4525.105: 99.8629% ( 2) 00:43:36.647 4525.105 - 4556.312: 99.8642% ( 1) 00:43:36.647 4556.312 - 4587.520: 99.8655% ( 1) 00:43:36.647 4587.520 - 4618.728: 99.8668% ( 1) 00:43:36.647 4618.728 - 4649.935: 99.8693% ( 2) 00:43:36.647 4649.935 - 4681.143: 99.8706% ( 1) 00:43:36.647 4681.143 - 4712.350: 99.8732% ( 2) 00:43:36.647 4712.350 - 4743.558: 99.8745% ( 1) 00:43:36.647 4743.558 - 4774.766: 99.8757% ( 1) 00:43:36.647 4774.766 - 4805.973: 99.8770% ( 1) 00:43:36.647 4805.973 - 4837.181: 99.8783% ( 1) 00:43:36.647 4837.181 - 4868.389: 99.8796% ( 1) 00:43:36.647 4868.389 - 4899.596: 99.8809% ( 1) 00:43:36.647 4899.596 - 4930.804: 99.8834% ( 2) 00:43:36.647 4930.804 - 4962.011: 99.8847% ( 1) 00:43:36.647 4962.011 - 4993.219: 99.8860% ( 1) 00:43:36.647 5024.427 - 5055.634: 99.8886% ( 2) 00:43:36.647 5055.634 - 5086.842: 99.8898% ( 1) 00:43:36.647 5086.842 - 5118.050: 99.8911% ( 1) 00:43:36.647 5118.050 - 5149.257: 99.8924% ( 1) 00:43:36.647 5149.257 - 5180.465: 99.8937% ( 1) 00:43:36.647 5180.465 - 5211.672: 99.8950% ( 1) 00:43:36.647 5211.672 - 5242.880: 99.8962% ( 1) 00:43:36.647 5242.880 - 5274.088: 99.8988% ( 2) 00:43:36.647 5274.088 - 5305.295: 99.9001% ( 1) 00:43:36.647 5305.295 - 5336.503: 99.9014% ( 1) 00:43:36.647 5336.503 - 5367.710: 99.9026% ( 1) 00:43:36.648 5367.710 - 5398.918: 99.9039% ( 1) 00:43:36.648 5398.918 - 5430.126: 99.9065% ( 2) 00:43:36.648 5430.126 - 5461.333: 99.9078% ( 1) 00:43:36.648 5461.333 - 5492.541: 99.9090% ( 1) 00:43:36.648 5492.541 - 5523.749: 99.9103% ( 1) 00:43:36.648 5523.749 - 5554.956: 99.9129% ( 2) 00:43:36.648 5554.956 - 5586.164: 99.9142% ( 1) 00:43:36.648 5586.164 - 5617.371: 99.9155% ( 1) 00:43:36.648 5617.371 - 5648.579: 99.9193% ( 3) 00:43:36.648 5648.579 - 5679.787: 99.9206% ( 1) 00:43:36.648 5679.787 - 5710.994: 99.9219% ( 1) 00:43:36.648 5710.994 - 5742.202: 99.9244% ( 2) 00:43:36.648 5742.202 - 5773.410: 99.9257% ( 1) 00:43:36.648 5773.410 - 5804.617: 99.9270% ( 1) 00:43:36.648 5804.617 - 5835.825: 99.9283% ( 1) 00:43:36.648 5835.825 - 5867.032: 99.9321% ( 3) 00:43:36.648 5867.032 - 5898.240: 99.9347% ( 2) 00:43:36.648 5898.240 - 5929.448: 99.9398% ( 4) 00:43:36.648 5929.448 - 5960.655: 99.9411% ( 1) 00:43:36.648 5960.655 - 5991.863: 99.9424% ( 1) 00:43:36.648 5991.863 - 6023.070: 99.9436% ( 1) 00:43:36.648 6054.278 - 6085.486: 99.9475% ( 3) 00:43:36.648 6085.486 - 6116.693: 99.9488% ( 1) 00:43:36.648 6116.693 - 6147.901: 99.9500% ( 1) 00:43:36.648 6147.901 - 6179.109: 99.9526% ( 2) 00:43:36.648 6179.109 - 6210.316: 99.9539% ( 1) 00:43:36.648 6210.316 - 6241.524: 99.9577% ( 3) 00:43:36.648 6241.524 - 6272.731: 99.9616% ( 3) 00:43:36.648 6272.731 - 6303.939: 99.9629% ( 1) 00:43:36.648 6303.939 - 6335.147: 99.9641% ( 1) 00:43:36.648 6335.147 - 6366.354: 99.9667% ( 2) 00:43:36.648 6397.562 - 6428.770: 99.9693% ( 2) 00:43:36.648 6428.770 - 6459.977: 99.9705% ( 1) 00:43:36.648 6459.977 - 6491.185: 99.9731% ( 2) 00:43:36.648 6491.185 - 6522.392: 99.9744% ( 1) 00:43:36.648 6522.392 - 6553.600: 99.9769% ( 2) 00:43:36.648 6553.600 - 6584.808: 99.9782% ( 1) 00:43:36.648 6584.808 - 6616.015: 99.9808% ( 2) 00:43:36.648 6616.015 - 6647.223: 99.9821% ( 1) 00:43:36.648 6647.223 - 6678.430: 99.9833% ( 1) 00:43:36.648 6678.430 - 6709.638: 99.9846% ( 
1) 00:43:36.648 6709.638 - 6740.846: 99.9859% ( 1) 00:43:36.648 6740.846 - 6772.053: 99.9885% ( 2) 00:43:36.648 6772.053 - 6803.261: 99.9898% ( 1) 00:43:36.648 6803.261 - 6834.469: 99.9910% ( 1) 00:43:36.648 6834.469 - 6865.676: 99.9923% ( 1) 00:43:36.648 7177.752 - 7208.960: 99.9936% ( 1) 00:43:36.648 8176.396 - 8238.811: 99.9949% ( 1) 00:43:36.648 8426.057 - 8488.472: 99.9987% ( 3) 00:43:36.648 9924.023 - 9986.438: 100.0000% ( 1) 00:43:36.648 00:43:36.648 19:10:37 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:43:36.648 00:43:36.648 real 0m2.733s 00:43:36.648 user 0m2.262s 00:43:36.648 sys 0m0.337s 00:43:36.648 19:10:37 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:36.648 19:10:37 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:43:36.648 ************************************ 00:43:36.648 END TEST nvme_perf 00:43:36.648 ************************************ 00:43:36.648 19:10:37 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:43:36.648 19:10:37 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:43:36.648 19:10:37 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:36.648 19:10:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:36.648 ************************************ 00:43:36.648 START TEST nvme_hello_world 00:43:36.648 ************************************ 00:43:36.648 19:10:37 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:43:36.907 Initializing NVMe Controllers 00:43:36.907 Attached to 0000:00:10.0 00:43:36.907 Namespace ID: 1 size: 5GB 00:43:36.907 Initialization complete. 00:43:36.907 INFO: using host memory buffer for IO 00:43:36.907 Hello world! 
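A quick cross-check of the two spdk_nvme_perf summaries above: with a fixed queue depth, Little's Law says IOPS multiplied by mean latency should come out close to the queue depth whenever the queue stays full. A minimal Python sketch using the figures copied from the log (illustrative only, not part of the SPDK test suite):

    # Little's Law: average requests in flight = arrival rate * mean time in system
    runs = {
        "read  (-q 128)":  (87881.13, 1454.52),   # IOPS, average latency in microseconds
        "write (-q 128)":  (78043.97, 1639.69),
    }
    for name, (iops, avg_us) in runs.items():
        in_flight = iops * avg_us / 1_000_000      # microseconds -> seconds inside the product
        print(f"{name}: ~{in_flight:.1f} requests outstanding")

Both products land at roughly 128, consistent with the device, rather than the submission path, being the limiting factor at this queue depth.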
00:43:36.907 00:43:36.907 real 0m0.340s 00:43:36.907 user 0m0.135s 00:43:36.907 sys 0m0.140s 00:43:36.907 19:10:37 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:36.907 19:10:37 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:43:36.907 ************************************ 00:43:36.907 END TEST nvme_hello_world 00:43:36.907 ************************************ 00:43:37.166 19:10:37 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:43:37.166 19:10:37 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:37.166 19:10:37 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:37.166 19:10:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:37.166 ************************************ 00:43:37.166 START TEST nvme_sgl 00:43:37.166 ************************************ 00:43:37.166 19:10:37 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:43:37.425 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:43:37.425 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:43:37.425 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:43:37.425 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:43:37.425 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:43:37.425 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:43:37.425 NVMe Readv/Writev Request test 00:43:37.425 Attached to 0000:00:10.0 00:43:37.425 0000:00:10.0: build_io_request_2 test passed 00:43:37.425 0000:00:10.0: build_io_request_4 test passed 00:43:37.425 0000:00:10.0: build_io_request_5 test passed 00:43:37.425 0000:00:10.0: build_io_request_6 test passed 00:43:37.425 0000:00:10.0: build_io_request_7 test passed 00:43:37.425 0000:00:10.0: build_io_request_10 test passed 00:43:37.425 Cleaning up... 00:43:37.425 00:43:37.425 real 0m0.376s 00:43:37.425 user 0m0.135s 00:43:37.425 sys 0m0.175s 00:43:37.425 19:10:37 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:37.425 19:10:37 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:43:37.425 ************************************ 00:43:37.425 END TEST nvme_sgl 00:43:37.425 ************************************ 00:43:37.425 19:10:37 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:43:37.425 19:10:37 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:37.425 19:10:37 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:37.425 19:10:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:37.425 ************************************ 00:43:37.425 START TEST nvme_e2edp 00:43:37.425 ************************************ 00:43:37.425 19:10:37 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:43:37.990 NVMe Write/Read with End-to-End data protection test 00:43:37.990 Attached to 0000:00:10.0 00:43:37.990 Cleaning up... 
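The identify output earlier in this run lists the namespace as 1310720 LBAs with LBA Format #04 active (4096-byte data size, no metadata), and the hello_world example above prints "Namespace ID: 1 size: 5GB". The two figures agree; as a quick check (illustrative Python, not part of the log):

    lba_count  = 1310720      # "Size (in LBAs)" from the identify output
    block_size = 4096         # data size of the active LBA Format #04
    size_bytes = lba_count * block_size
    print(size_bytes)             # 5368709120
    print(size_bytes / 2**30)     # 5.0 -> identify prints "5GiB", hello_world rounds to "5GB"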
00:43:37.990 00:43:37.990 real 0m0.378s 00:43:37.990 user 0m0.099s 00:43:37.990 sys 0m0.179s 00:43:37.990 19:10:38 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:37.990 19:10:38 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:43:37.990 ************************************ 00:43:37.990 END TEST nvme_e2edp 00:43:37.990 ************************************ 00:43:37.990 19:10:38 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:43:37.990 19:10:38 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:37.990 19:10:38 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:37.990 19:10:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:37.990 ************************************ 00:43:37.990 START TEST nvme_reserve 00:43:37.990 ************************************ 00:43:37.990 19:10:38 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:43:38.248 ===================================================== 00:43:38.248 NVMe Controller at PCI bus 0, device 16, function 0 00:43:38.248 ===================================================== 00:43:38.248 Reservations: Not Supported 00:43:38.248 Reservation test passed 00:43:38.248 00:43:38.248 real 0m0.301s 00:43:38.248 user 0m0.079s 00:43:38.248 sys 0m0.146s 00:43:38.248 19:10:38 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:38.248 19:10:38 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:43:38.248 ************************************ 00:43:38.248 END TEST nvme_reserve 00:43:38.249 ************************************ 00:43:38.249 19:10:38 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:43:38.249 19:10:38 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:38.249 19:10:38 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:38.249 19:10:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:38.249 ************************************ 00:43:38.249 START TEST nvme_err_injection 00:43:38.249 ************************************ 00:43:38.249 19:10:38 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:43:38.816 NVMe Error Injection test 00:43:38.816 Attached to 0000:00:10.0 00:43:38.816 0000:00:10.0: get features failed as expected 00:43:38.816 0000:00:10.0: get features successfully as expected 00:43:38.816 0000:00:10.0: read failed as expected 00:43:38.816 0000:00:10.0: read successfully as expected 00:43:38.816 Cleaning up... 
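The spdk_nvme_perf histograms above, and the overhead histograms that follow, print one line per latency bucket with the cumulative percentage of I/Os completed at or below the bucket's upper edge plus the per-bucket count in parentheses. The percentile summary lines (1.00000%, 50.00000%, 99.00000%, ...) can be read off such a table as the upper bound of the first bucket whose cumulative percentage reaches the target. A minimal sketch with hypothetical bucket values, not numbers taken from this log:

    def latency_percentile(buckets, pct):
        # buckets: (upper_bound_us, cumulative_percent) pairs in ascending order
        for upper_us, cum_pct in buckets:
            if cum_pct >= pct:
                return upper_us
        return buckets[-1][0]

    example = [(900.0, 0.8), (1000.0, 9.5), (1400.0, 49.0),
               (1500.0, 55.0), (2400.0, 99.1), (9000.0, 100.0)]
    print(latency_percentile(example, 50.0))   # 1500.0
    print(latency_percentile(example, 99.0))   # 2400.0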
00:43:38.816 00:43:38.816 real 0m0.376s 00:43:38.816 user 0m0.106s 00:43:38.816 sys 0m0.189s 00:43:38.816 19:10:39 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:38.816 19:10:39 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:43:38.816 ************************************ 00:43:38.816 END TEST nvme_err_injection 00:43:38.816 ************************************ 00:43:38.816 19:10:39 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:43:38.816 19:10:39 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:43:38.816 19:10:39 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:38.816 19:10:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:38.816 ************************************ 00:43:38.816 START TEST nvme_overhead 00:43:38.816 ************************************ 00:43:38.816 19:10:39 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:43:40.194 Initializing NVMe Controllers 00:43:40.194 Attached to 0000:00:10.0 00:43:40.194 Initialization complete. Launching workers. 00:43:40.194 submit (in ns) avg, min, max = 12801.7, 11572.4, 110176.2 00:43:40.194 complete (in ns) avg, min, max = 8773.7, 7746.7, 570218.1 00:43:40.194 00:43:40.194 Submit histogram 00:43:40.194 ================ 00:43:40.194 Range in us Cumulative Count 00:43:40.194 11.520 - 11.581: 0.0126% ( 1) 00:43:40.194 11.581 - 11.642: 0.1263% ( 9) 00:43:40.194 11.642 - 11.703: 0.3159% ( 15) 00:43:40.194 11.703 - 11.764: 0.7202% ( 32) 00:43:40.194 11.764 - 11.825: 1.2634% ( 43) 00:43:40.194 11.825 - 11.886: 1.7056% ( 35) 00:43:40.194 11.886 - 11.947: 2.1226% ( 33) 00:43:40.194 11.947 - 12.008: 2.5648% ( 35) 00:43:40.194 12.008 - 12.069: 3.3354% ( 61) 00:43:40.194 12.069 - 12.130: 4.7505% ( 112) 00:43:40.194 12.130 - 12.190: 7.3279% ( 204) 00:43:40.194 12.190 - 12.251: 10.9286% ( 285) 00:43:40.194 12.251 - 12.312: 15.3759% ( 352) 00:43:40.194 12.312 - 12.373: 21.1118% ( 454) 00:43:40.194 12.373 - 12.434: 29.1219% ( 634) 00:43:40.194 12.434 - 12.495: 38.8124% ( 767) 00:43:40.194 12.495 - 12.556: 49.3114% ( 831) 00:43:40.194 12.556 - 12.617: 58.7366% ( 746) 00:43:40.194 12.617 - 12.678: 67.7701% ( 715) 00:43:40.194 12.678 - 12.739: 75.4264% ( 606) 00:43:40.194 12.739 - 12.800: 81.6677% ( 494) 00:43:40.194 12.800 - 12.861: 86.4435% ( 378) 00:43:40.194 12.861 - 12.922: 90.0569% ( 286) 00:43:40.194 12.922 - 12.983: 92.5332% ( 196) 00:43:40.194 12.983 - 13.044: 94.3272% ( 142) 00:43:40.194 13.044 - 13.105: 95.5148% ( 94) 00:43:40.194 13.105 - 13.166: 96.2982% ( 62) 00:43:40.194 13.166 - 13.227: 96.8288% ( 42) 00:43:40.194 13.227 - 13.288: 97.1826% ( 28) 00:43:40.194 13.288 - 13.349: 97.3215% ( 11) 00:43:40.194 13.349 - 13.410: 97.4226% ( 8) 00:43:40.194 13.410 - 13.470: 97.4732% ( 4) 00:43:40.194 13.470 - 13.531: 97.5237% ( 4) 00:43:40.194 13.531 - 13.592: 97.5995% ( 6) 00:43:40.194 13.592 - 13.653: 97.6121% ( 1) 00:43:40.194 13.653 - 13.714: 97.6248% ( 1) 00:43:40.194 13.714 - 13.775: 97.6374% ( 1) 00:43:40.194 13.775 - 13.836: 97.6500% ( 1) 00:43:40.194 13.958 - 14.019: 97.7258% ( 6) 00:43:40.194 14.019 - 14.080: 97.7385% ( 1) 00:43:40.194 14.202 - 14.263: 97.7637% ( 2) 00:43:40.194 14.385 - 14.446: 97.7890% ( 2) 00:43:40.194 14.446 - 14.507: 97.8016% ( 1) 00:43:40.194 14.629 - 14.690: 97.8143% ( 1) 00:43:40.194 14.750 - 14.811: 97.8522% ( 3) 00:43:40.194 14.811 - 14.872: 97.8774% ( 
2) 00:43:40.194 14.872 - 14.933: 97.9027% ( 2) 00:43:40.194 15.055 - 15.116: 97.9154% ( 1) 00:43:40.194 15.116 - 15.177: 97.9406% ( 2) 00:43:40.194 15.177 - 15.238: 97.9659% ( 2) 00:43:40.194 15.299 - 15.360: 97.9912% ( 2) 00:43:40.194 15.421 - 15.482: 98.0164% ( 2) 00:43:40.194 15.482 - 15.543: 98.0291% ( 1) 00:43:40.194 15.543 - 15.604: 98.0417% ( 1) 00:43:40.194 15.604 - 15.726: 98.0543% ( 1) 00:43:40.194 15.726 - 15.848: 98.0670% ( 1) 00:43:40.194 15.848 - 15.970: 98.0796% ( 1) 00:43:40.194 15.970 - 16.091: 98.0922% ( 1) 00:43:40.194 16.091 - 16.213: 98.1049% ( 1) 00:43:40.194 16.213 - 16.335: 98.1301% ( 2) 00:43:40.194 16.335 - 16.457: 98.1428% ( 1) 00:43:40.194 16.457 - 16.579: 98.1807% ( 3) 00:43:40.194 16.579 - 16.701: 98.1933% ( 1) 00:43:40.194 16.701 - 16.823: 98.2059% ( 1) 00:43:40.194 17.189 - 17.310: 98.2312% ( 2) 00:43:40.194 17.310 - 17.432: 98.2565% ( 2) 00:43:40.194 17.432 - 17.554: 98.2817% ( 2) 00:43:40.194 17.554 - 17.676: 98.3070% ( 2) 00:43:40.194 17.676 - 17.798: 98.3196% ( 1) 00:43:40.194 17.920 - 18.042: 98.3955% ( 6) 00:43:40.194 18.042 - 18.164: 98.4207% ( 2) 00:43:40.194 18.164 - 18.286: 98.4586% ( 3) 00:43:40.194 18.286 - 18.408: 98.4839% ( 2) 00:43:40.194 18.408 - 18.530: 98.5092% ( 2) 00:43:40.194 18.773 - 18.895: 98.5471% ( 3) 00:43:40.194 18.895 - 19.017: 98.5723% ( 2) 00:43:40.194 19.017 - 19.139: 98.5850% ( 1) 00:43:40.194 19.139 - 19.261: 98.6229% ( 3) 00:43:40.194 19.261 - 19.383: 98.6355% ( 1) 00:43:40.194 19.383 - 19.505: 98.6860% ( 4) 00:43:40.194 19.627 - 19.749: 98.7113% ( 2) 00:43:40.194 20.114 - 20.236: 98.7239% ( 1) 00:43:40.194 20.236 - 20.358: 98.7366% ( 1) 00:43:40.194 21.821 - 21.943: 98.7492% ( 1) 00:43:40.194 22.309 - 22.430: 98.7618% ( 1) 00:43:40.194 22.430 - 22.552: 98.7997% ( 3) 00:43:40.194 22.552 - 22.674: 98.8250% ( 2) 00:43:40.194 22.674 - 22.796: 98.8629% ( 3) 00:43:40.194 22.796 - 22.918: 98.8882% ( 2) 00:43:40.194 22.918 - 23.040: 98.9261% ( 3) 00:43:40.194 23.040 - 23.162: 98.9640% ( 3) 00:43:40.194 24.015 - 24.137: 98.9766% ( 1) 00:43:40.194 24.259 - 24.381: 99.0145% ( 3) 00:43:40.194 24.381 - 24.503: 99.1661% ( 12) 00:43:40.194 24.503 - 24.625: 99.2672% ( 8) 00:43:40.194 24.625 - 24.747: 99.4188% ( 12) 00:43:40.194 24.747 - 24.869: 99.5325% ( 9) 00:43:40.194 24.869 - 24.990: 99.6336% ( 8) 00:43:40.194 24.990 - 25.112: 99.6968% ( 5) 00:43:40.194 25.112 - 25.234: 99.7220% ( 2) 00:43:40.194 25.234 - 25.356: 99.7473% ( 2) 00:43:40.194 25.356 - 25.478: 99.7599% ( 1) 00:43:40.194 25.600 - 25.722: 99.7726% ( 1) 00:43:40.194 26.453 - 26.575: 99.7852% ( 1) 00:43:40.194 26.941 - 27.063: 99.7979% ( 1) 00:43:40.194 28.282 - 28.404: 99.8105% ( 1) 00:43:40.194 28.891 - 29.013: 99.8358% ( 2) 00:43:40.194 29.989 - 30.110: 99.8484% ( 1) 00:43:40.194 30.598 - 30.720: 99.8610% ( 1) 00:43:40.194 31.086 - 31.208: 99.8737% ( 1) 00:43:40.194 32.670 - 32.914: 99.8863% ( 1) 00:43:40.194 32.914 - 33.158: 99.8989% ( 1) 00:43:40.194 36.328 - 36.571: 99.9116% ( 1) 00:43:40.194 39.985 - 40.229: 99.9242% ( 1) 00:43:40.194 40.960 - 41.204: 99.9368% ( 1) 00:43:40.194 43.398 - 43.642: 99.9495% ( 1) 00:43:40.194 73.143 - 73.630: 99.9621% ( 1) 00:43:40.194 95.573 - 96.061: 99.9747% ( 1) 00:43:40.194 105.813 - 106.301: 99.9874% ( 1) 00:43:40.194 109.714 - 110.202: 100.0000% ( 1) 00:43:40.194 00:43:40.194 Complete histogram 00:43:40.194 ================== 00:43:40.194 Range in us Cumulative Count 00:43:40.194 7.741 - 7.771: 0.0758% ( 6) 00:43:40.194 7.771 - 7.802: 0.3159% ( 19) 00:43:40.194 7.802 - 7.863: 0.7707% ( 36) 00:43:40.194 7.863 - 7.924: 1.2003% ( 34) 
00:43:40.195 7.924 - 7.985: 1.7309% ( 42) 00:43:40.195 7.985 - 8.046: 2.2868% ( 44) 00:43:40.195 8.046 - 8.107: 2.9311% ( 51) 00:43:40.195 8.107 - 8.168: 6.4814% ( 281) 00:43:40.195 8.168 - 8.229: 13.7713% ( 577) 00:43:40.195 8.229 - 8.290: 18.6860% ( 389) 00:43:40.195 8.290 - 8.350: 26.8225% ( 644) 00:43:40.195 8.350 - 8.411: 43.8661% ( 1349) 00:43:40.195 8.411 - 8.472: 56.9931% ( 1039) 00:43:40.195 8.472 - 8.533: 70.1958% ( 1045) 00:43:40.195 8.533 - 8.594: 78.4713% ( 655) 00:43:40.195 8.594 - 8.655: 85.2937% ( 540) 00:43:40.195 8.655 - 8.716: 90.7391% ( 431) 00:43:40.195 8.716 - 8.777: 93.6829% ( 233) 00:43:40.195 8.777 - 8.838: 95.4264% ( 138) 00:43:40.195 8.838 - 8.899: 96.2982% ( 69) 00:43:40.195 8.899 - 8.960: 96.8667% ( 45) 00:43:40.195 8.960 - 9.021: 97.1826% ( 25) 00:43:40.195 9.021 - 9.082: 97.3089% ( 10) 00:43:40.195 9.082 - 9.143: 97.4100% ( 8) 00:43:40.195 9.143 - 9.204: 97.4858% ( 6) 00:43:40.195 9.204 - 9.265: 97.5237% ( 3) 00:43:40.195 9.265 - 9.326: 97.5616% ( 3) 00:43:40.195 9.326 - 9.387: 97.5742% ( 1) 00:43:40.195 9.387 - 9.448: 97.6248% ( 4) 00:43:40.195 9.448 - 9.509: 97.6500% ( 2) 00:43:40.195 9.509 - 9.570: 97.6627% ( 1) 00:43:40.195 9.570 - 9.630: 97.6879% ( 2) 00:43:40.195 9.630 - 9.691: 97.7132% ( 2) 00:43:40.195 9.752 - 9.813: 97.7764% ( 5) 00:43:40.195 9.813 - 9.874: 97.7890% ( 1) 00:43:40.195 9.996 - 10.057: 97.8016% ( 1) 00:43:40.195 10.057 - 10.118: 97.8269% ( 2) 00:43:40.195 10.118 - 10.179: 97.8395% ( 1) 00:43:40.195 10.179 - 10.240: 97.8648% ( 2) 00:43:40.195 10.362 - 10.423: 97.8774% ( 1) 00:43:40.195 10.667 - 10.728: 97.8901% ( 1) 00:43:40.195 10.850 - 10.910: 97.9027% ( 1) 00:43:40.195 11.398 - 11.459: 97.9280% ( 2) 00:43:40.195 11.459 - 11.520: 97.9406% ( 1) 00:43:40.195 11.764 - 11.825: 97.9785% ( 3) 00:43:40.195 11.825 - 11.886: 97.9912% ( 1) 00:43:40.195 11.947 - 12.008: 98.0038% ( 1) 00:43:40.195 12.008 - 12.069: 98.0164% ( 1) 00:43:40.195 12.130 - 12.190: 98.0291% ( 1) 00:43:40.195 12.190 - 12.251: 98.0417% ( 1) 00:43:40.195 12.373 - 12.434: 98.0543% ( 1) 00:43:40.195 12.434 - 12.495: 98.0670% ( 1) 00:43:40.195 12.678 - 12.739: 98.0796% ( 1) 00:43:40.195 12.739 - 12.800: 98.0922% ( 1) 00:43:40.195 12.922 - 12.983: 98.1049% ( 1) 00:43:40.195 12.983 - 13.044: 98.1175% ( 1) 00:43:40.195 13.044 - 13.105: 98.1301% ( 1) 00:43:40.195 13.105 - 13.166: 98.1428% ( 1) 00:43:40.195 13.227 - 13.288: 98.1554% ( 1) 00:43:40.195 13.531 - 13.592: 98.1680% ( 1) 00:43:40.195 13.592 - 13.653: 98.1807% ( 1) 00:43:40.195 13.775 - 13.836: 98.2059% ( 2) 00:43:40.195 13.836 - 13.897: 98.2438% ( 3) 00:43:40.195 13.897 - 13.958: 98.2691% ( 2) 00:43:40.195 14.019 - 14.080: 98.2817% ( 1) 00:43:40.195 14.080 - 14.141: 98.2944% ( 1) 00:43:40.195 14.263 - 14.324: 98.3323% ( 3) 00:43:40.195 14.324 - 14.385: 98.3575% ( 2) 00:43:40.195 14.385 - 14.446: 98.3828% ( 2) 00:43:40.195 14.446 - 14.507: 98.3955% ( 1) 00:43:40.195 14.507 - 14.568: 98.4207% ( 2) 00:43:40.195 14.568 - 14.629: 98.4460% ( 2) 00:43:40.195 14.750 - 14.811: 98.4586% ( 1) 00:43:40.195 14.933 - 14.994: 98.4839% ( 2) 00:43:40.195 15.116 - 15.177: 98.4965% ( 1) 00:43:40.195 15.177 - 15.238: 98.5092% ( 1) 00:43:40.195 15.238 - 15.299: 98.5218% ( 1) 00:43:40.195 15.482 - 15.543: 98.5344% ( 1) 00:43:40.195 15.970 - 16.091: 98.5471% ( 1) 00:43:40.195 16.823 - 16.945: 98.5597% ( 1) 00:43:40.195 18.286 - 18.408: 98.5723% ( 1) 00:43:40.195 18.408 - 18.530: 98.5850% ( 1) 00:43:40.195 18.530 - 18.651: 98.6102% ( 2) 00:43:40.195 18.651 - 18.773: 98.7113% ( 8) 00:43:40.195 18.773 - 18.895: 98.7239% ( 1) 00:43:40.195 19.017 - 
19.139: 98.7366% ( 1) 00:43:40.195 19.139 - 19.261: 98.7492% ( 1) 00:43:40.195 19.505 - 19.627: 98.7745% ( 2) 00:43:40.195 19.870 - 19.992: 98.7871% ( 1) 00:43:40.195 19.992 - 20.114: 98.7997% ( 1) 00:43:40.195 20.114 - 20.236: 98.8503% ( 4) 00:43:40.195 20.236 - 20.358: 99.1156% ( 21) 00:43:40.195 20.358 - 20.480: 99.3809% ( 21) 00:43:40.195 20.480 - 20.602: 99.5578% ( 14) 00:43:40.195 20.602 - 20.724: 99.7220% ( 13) 00:43:40.195 20.724 - 20.846: 99.7726% ( 4) 00:43:40.195 20.968 - 21.090: 99.7979% ( 2) 00:43:40.195 24.137 - 24.259: 99.8105% ( 1) 00:43:40.195 24.381 - 24.503: 99.8231% ( 1) 00:43:40.195 24.625 - 24.747: 99.8358% ( 1) 00:43:40.195 25.356 - 25.478: 99.8484% ( 1) 00:43:40.195 25.722 - 25.844: 99.8610% ( 1) 00:43:40.195 25.844 - 25.966: 99.8737% ( 1) 00:43:40.195 29.013 - 29.135: 99.8863% ( 1) 00:43:40.195 32.427 - 32.670: 99.8989% ( 1) 00:43:40.195 32.670 - 32.914: 99.9116% ( 1) 00:43:40.195 41.691 - 41.935: 99.9242% ( 1) 00:43:40.195 44.130 - 44.373: 99.9368% ( 1) 00:43:40.195 101.912 - 102.400: 99.9495% ( 1) 00:43:40.195 103.863 - 104.350: 99.9621% ( 1) 00:43:40.195 106.301 - 106.789: 99.9747% ( 1) 00:43:40.195 112.640 - 113.128: 99.9874% ( 1) 00:43:40.195 569.539 - 573.440: 100.0000% ( 1) 00:43:40.195 00:43:40.195 00:43:40.195 real 0m1.288s 00:43:40.195 user 0m1.100s 00:43:40.195 sys 0m0.130s 00:43:40.195 19:10:40 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:40.195 19:10:40 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:43:40.195 ************************************ 00:43:40.195 END TEST nvme_overhead 00:43:40.195 ************************************ 00:43:40.195 19:10:40 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:43:40.195 19:10:40 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:43:40.195 19:10:40 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:40.195 19:10:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:40.195 ************************************ 00:43:40.195 START TEST nvme_arbitration 00:43:40.195 ************************************ 00:43:40.195 19:10:40 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:43:43.481 Initializing NVMe Controllers 00:43:43.481 Attached to 0000:00:10.0 00:43:43.481 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:43:43.481 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:43:43.481 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:43:43.481 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:43:43.481 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:43:43.481 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:43:43.481 Initialization complete. Launching workers. 
00:43:43.481 Starting thread on core 1 with urgent priority queue 00:43:43.481 Starting thread on core 2 with urgent priority queue 00:43:43.481 Starting thread on core 0 with urgent priority queue 00:43:43.481 Starting thread on core 3 with urgent priority queue 00:43:43.481 QEMU NVMe Ctrl (12340 ) core 0: 1194.67 IO/s 83.71 secs/100000 ios 00:43:43.481 QEMU NVMe Ctrl (12340 ) core 1: 1130.67 IO/s 88.44 secs/100000 ios 00:43:43.481 QEMU NVMe Ctrl (12340 ) core 2: 490.67 IO/s 203.80 secs/100000 ios 00:43:43.481 QEMU NVMe Ctrl (12340 ) core 3: 554.67 IO/s 180.29 secs/100000 ios 00:43:43.481 ======================================================== 00:43:43.481 00:43:43.481 00:43:43.481 real 0m3.500s 00:43:43.481 user 0m9.451s 00:43:43.481 sys 0m0.161s 00:43:43.481 19:10:44 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:43.481 19:10:44 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:43:43.481 ************************************ 00:43:43.481 END TEST nvme_arbitration 00:43:43.481 ************************************ 00:43:43.739 19:10:44 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:43:43.739 19:10:44 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:43:43.739 19:10:44 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:43.739 19:10:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:43.739 ************************************ 00:43:43.739 START TEST nvme_single_aen 00:43:43.739 ************************************ 00:43:43.739 19:10:44 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:43:43.997 Asynchronous Event Request test 00:43:43.997 Attached to 0000:00:10.0 00:43:43.997 Reset controller to setup AER completions for this process 00:43:43.997 Registering asynchronous event callbacks... 00:43:43.997 Getting orig temperature thresholds of all controllers 00:43:43.997 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:43:43.997 Setting all controllers temperature threshold low to trigger AER 00:43:43.997 Waiting for all controllers temperature threshold to be set lower 00:43:43.997 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:43:43.997 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:43:43.997 Waiting for all controllers to trigger AER and reset threshold 00:43:43.997 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:43:43.997 Cleaning up... 
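Note on the arbitration summary above (not part of the captured log): each core is reported both as IO/s and as "secs/100000 ios", and the second column is simply 100000 divided by the first. A quick standalone sanity check with awk, using the per-core figures printed above:
  awk 'BEGIN { printf "%.2f secs per 100000 ios\n", 100000 / 1194.67 }'   # prints 83.71, matching core 0
  awk 'BEGIN { printf "%.2f secs per 100000 ios\n", 100000 / 490.67 }'    # prints 203.80, matching core 2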
00:43:43.997 00:43:43.997 real 0m0.297s 00:43:43.997 user 0m0.087s 00:43:43.997 sys 0m0.135s 00:43:43.997 19:10:44 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:43.997 19:10:44 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:43:43.997 ************************************ 00:43:43.997 END TEST nvme_single_aen 00:43:43.997 ************************************ 00:43:43.997 19:10:44 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:43:43.997 19:10:44 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:43.997 19:10:44 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:43.997 19:10:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:43.997 ************************************ 00:43:43.997 START TEST nvme_doorbell_aers 00:43:43.997 ************************************ 00:43:43.997 19:10:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:43:43.997 19:10:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:43:43.997 19:10:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:43:43.997 19:10:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:43:43.997 19:10:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:43:43.997 19:10:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:43:43.997 19:10:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:43:43.997 19:10:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:43:43.997 19:10:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:43:43.997 19:10:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:43:43.997 19:10:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:43:43.997 19:10:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:43:43.997 19:10:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:43:43.997 19:10:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:43:44.255 [2024-07-25 19:10:44.813480] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 170386) is not found. Dropping the request. 00:43:54.232 Executing: test_write_invalid_db 00:43:54.232 Waiting for AER completion... 00:43:54.232 Failure: test_write_invalid_db 00:43:54.232 00:43:54.232 Executing: test_invalid_db_write_overflow_sq 00:43:54.232 Waiting for AER completion... 00:43:54.232 Failure: test_invalid_db_write_overflow_sq 00:43:54.232 00:43:54.232 Executing: test_invalid_db_write_overflow_cq 00:43:54.232 Waiting for AER completion... 
00:43:54.232 Failure: test_invalid_db_write_overflow_cq 00:43:54.232 00:43:54.232 00:43:54.232 real 0m10.121s 00:43:54.232 user 0m7.472s 00:43:54.232 sys 0m2.611s 00:43:54.232 19:10:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:54.232 19:10:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:43:54.232 ************************************ 00:43:54.232 END TEST nvme_doorbell_aers 00:43:54.232 ************************************ 00:43:54.232 19:10:54 nvme -- nvme/nvme.sh@97 -- # uname 00:43:54.232 19:10:54 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:43:54.232 19:10:54 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:43:54.232 19:10:54 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:43:54.232 19:10:54 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:54.232 19:10:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:54.232 ************************************ 00:43:54.232 START TEST nvme_multi_aen 00:43:54.232 ************************************ 00:43:54.232 19:10:54 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:43:54.491 [2024-07-25 19:10:54.903632] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 170386) is not found. Dropping the request. 00:43:54.491 [2024-07-25 19:10:54.903759] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 170386) is not found. Dropping the request. 00:43:54.491 [2024-07-25 19:10:54.903786] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 170386) is not found. Dropping the request. 00:43:54.491 Child process pid: 170570 00:43:54.750 [Child] Asynchronous Event Request test 00:43:54.750 [Child] Attached to 0000:00:10.0 00:43:54.750 [Child] Registering asynchronous event callbacks... 00:43:54.750 [Child] Getting orig temperature thresholds of all controllers 00:43:54.750 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:43:54.750 [Child] Waiting for all controllers to trigger AER and reset threshold 00:43:54.750 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:43:54.750 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:43:54.750 [Child] Cleaning up... 00:43:54.750 Asynchronous Event Request test 00:43:54.750 Attached to 0000:00:10.0 00:43:54.750 Reset controller to setup AER completions for this process 00:43:54.750 Registering asynchronous event callbacks... 00:43:54.750 Getting orig temperature thresholds of all controllers 00:43:54.750 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:43:54.750 Setting all controllers temperature threshold low to trigger AER 00:43:54.750 Waiting for all controllers temperature threshold to be set lower 00:43:54.750 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:43:54.750 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:43:54.750 Waiting for all controllers to trigger AER and reset threshold 00:43:54.750 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:43:54.750 Cleaning up... 
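Aside on the AER output above (an illustration, not part of the test scripts): the test forces a temperature event by reading the original threshold (343 K), lowering it below the current temperature (323 K) so the controller raises an Asynchronous Event, then resetting it. A similar hand-run experiment is possible with nvme-cli; feature ID 0x04 (Temperature Threshold) and log page 2 (SMART / Health) come from the NVMe spec, while the device path and the 320 K value are illustrative assumptions only:
  nvme get-feature /dev/nvme0 -f 0x04            # read the current temperature threshold
  nvme set-feature /dev/nvme0 -f 0x04 -v 0x140   # 0x140 = 320 Kelvin, just below the reported 323 K
  nvme get-log /dev/nvme0 -i 2 -l 512            # poll log page 2, as aer_cb does in the output above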
00:43:55.008 00:43:55.008 real 0m0.720s 00:43:55.008 user 0m0.210s 00:43:55.008 sys 0m0.306s 00:43:55.008 19:10:55 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:55.008 19:10:55 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:43:55.008 ************************************ 00:43:55.008 END TEST nvme_multi_aen 00:43:55.008 ************************************ 00:43:55.008 19:10:55 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:43:55.008 19:10:55 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:43:55.008 19:10:55 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:55.008 19:10:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:55.008 ************************************ 00:43:55.008 START TEST nvme_startup 00:43:55.008 ************************************ 00:43:55.008 19:10:55 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:43:55.267 Initializing NVMe Controllers 00:43:55.267 Attached to 0000:00:10.0 00:43:55.267 Initialization complete. 00:43:55.267 Time used:234693.609 (us). 00:43:55.267 00:43:55.267 real 0m0.341s 00:43:55.267 user 0m0.118s 00:43:55.267 sys 0m0.151s 00:43:55.267 19:10:55 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:55.267 19:10:55 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:43:55.267 ************************************ 00:43:55.267 END TEST nvme_startup 00:43:55.267 ************************************ 00:43:55.267 19:10:55 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:43:55.267 19:10:55 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:55.267 19:10:55 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:55.267 19:10:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:55.267 ************************************ 00:43:55.267 START TEST nvme_multi_secondary 00:43:55.267 ************************************ 00:43:55.267 19:10:55 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:43:55.267 19:10:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=170635 00:43:55.267 19:10:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:43:55.267 19:10:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=170636 00:43:55.267 19:10:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:43:55.267 19:10:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:43:59.460 Initializing NVMe Controllers 00:43:59.460 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:59.460 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:43:59.460 Initialization complete. Launching workers. 
00:43:59.460 ======================================================== 00:43:59.460 Latency(us) 00:43:59.460 Device Information : IOPS MiB/s Average min max 00:43:59.460 PCIE (0000:00:10.0) NSID 1 from core 2: 16106.00 62.91 992.39 171.36 24389.29 00:43:59.460 ======================================================== 00:43:59.460 Total : 16106.00 62.91 992.39 171.36 24389.29 00:43:59.460 00:43:59.460 19:10:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 170635 00:43:59.460 Initializing NVMe Controllers 00:43:59.460 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:59.460 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:43:59.460 Initialization complete. Launching workers. 00:43:59.460 ======================================================== 00:43:59.460 Latency(us) 00:43:59.460 Device Information : IOPS MiB/s Average min max 00:43:59.460 PCIE (0000:00:10.0) NSID 1 from core 1: 36372.62 142.08 439.57 170.42 1211.72 00:43:59.460 ======================================================== 00:43:59.460 Total : 36372.62 142.08 439.57 170.42 1211.72 00:43:59.460 00:44:00.840 Initializing NVMe Controllers 00:44:00.840 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:44:00.840 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:44:00.840 Initialization complete. Launching workers. 00:44:00.840 ======================================================== 00:44:00.840 Latency(us) 00:44:00.840 Device Information : IOPS MiB/s Average min max 00:44:00.840 PCIE (0000:00:10.0) NSID 1 from core 0: 43270.31 169.02 369.51 121.14 4107.20 00:44:00.840 ======================================================== 00:44:00.840 Total : 43270.31 169.02 369.51 121.14 4107.20 00:44:00.840 00:44:00.840 19:11:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 170636 00:44:00.840 19:11:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=170711 00:44:00.840 19:11:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:44:00.840 19:11:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=170712 00:44:00.840 19:11:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:44:00.840 19:11:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:44:04.128 Initializing NVMe Controllers 00:44:04.128 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:44:04.128 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:44:04.128 Initialization complete. Launching workers. 00:44:04.128 ======================================================== 00:44:04.128 Latency(us) 00:44:04.128 Device Information : IOPS MiB/s Average min max 00:44:04.128 PCIE (0000:00:10.0) NSID 1 from core 1: 36392.87 142.16 439.37 152.52 1544.76 00:44:04.128 ======================================================== 00:44:04.128 Total : 36392.87 142.16 439.37 152.52 1544.76 00:44:04.128 00:44:04.385 Initializing NVMe Controllers 00:44:04.385 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:44:04.385 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:44:04.385 Initialization complete. Launching workers. 
00:44:04.385 ======================================================== 00:44:04.385 Latency(us) 00:44:04.385 Device Information : IOPS MiB/s Average min max 00:44:04.385 PCIE (0000:00:10.0) NSID 1 from core 0: 36612.61 143.02 436.77 145.14 5347.63 00:44:04.385 ======================================================== 00:44:04.385 Total : 36612.61 143.02 436.77 145.14 5347.63 00:44:04.385 00:44:06.288 Initializing NVMe Controllers 00:44:06.288 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:44:06.288 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:44:06.288 Initialization complete. Launching workers. 00:44:06.288 ======================================================== 00:44:06.288 Latency(us) 00:44:06.288 Device Information : IOPS MiB/s Average min max 00:44:06.288 PCIE (0000:00:10.0) NSID 1 from core 2: 18497.10 72.25 863.92 125.16 24419.18 00:44:06.288 ======================================================== 00:44:06.288 Total : 18497.10 72.25 863.92 125.16 24419.18 00:44:06.288 00:44:06.288 19:11:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 170711 00:44:06.288 19:11:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 170712 00:44:06.288 00:44:06.288 real 0m10.736s 00:44:06.288 user 0m18.727s 00:44:06.288 sys 0m0.978s 00:44:06.288 19:11:06 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:06.288 19:11:06 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:44:06.288 ************************************ 00:44:06.288 END TEST nvme_multi_secondary 00:44:06.288 ************************************ 00:44:06.288 19:11:06 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:44:06.288 19:11:06 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:44:06.288 19:11:06 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/169936 ]] 00:44:06.288 19:11:06 nvme -- common/autotest_common.sh@1090 -- # kill 169936 00:44:06.288 19:11:06 nvme -- common/autotest_common.sh@1091 -- # wait 169936 00:44:06.288 [2024-07-25 19:11:06.602464] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 170569) is not found. Dropping the request. 00:44:06.288 [2024-07-25 19:11:06.602599] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 170569) is not found. Dropping the request. 00:44:06.288 [2024-07-25 19:11:06.602634] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 170569) is not found. Dropping the request. 00:44:06.288 [2024-07-25 19:11:06.602688] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 170569) is not found. Dropping the request. 00:44:06.547 [2024-07-25 19:11:07.024527] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 
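One informal way to read the spdk_nvme_perf tables above (an observation about the numbers, not part of the test): with the fixed queue depth of 16 requested by -q 16, IOPS and average latency should roughly satisfy IOPS * avg_latency ~= 16 outstanding I/Os. Checking the core 0 run reported above:
  awk 'BEGIN { printf "%.2f\n", 36612.61 * 436.77e-6 }'   # ~= 15.99, i.e. the 16 in-flight I/Os of -q 16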
00:44:06.547 19:11:07 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:44:06.548 19:11:07 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:44:06.548 19:11:07 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:44:06.548 19:11:07 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:06.548 19:11:07 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:06.548 19:11:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:06.548 ************************************ 00:44:06.548 START TEST bdev_nvme_reset_stuck_adm_cmd 00:44:06.548 ************************************ 00:44:06.548 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:44:06.807 * Looking for test storage... 00:44:06.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=170863 00:44:06.807 19:11:07 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 170863 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 170863 ']' 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:06.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:44:06.807 19:11:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:44:06.808 [2024-07-25 19:11:07.312219] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:44:06.808 [2024-07-25 19:11:07.312382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170863 ] 00:44:07.067 [2024-07-25 19:11:07.509883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:07.326 [2024-07-25 19:11:07.846303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:07.326 [2024-07-25 19:11:07.846636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:07.326 [2024-07-25 19:11:07.846516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:44:07.326 [2024-07-25 19:11:07.846639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:44:08.263 19:11:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:08.263 19:11:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:44:08.263 19:11:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:44:08.263 19:11:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:08.263 19:11:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:44:08.263 nvme0n1 00:44:08.263 19:11:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:08.263 19:11:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:44:08.263 19:11:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_QmgXM.txt 00:44:08.263 19:11:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:44:08.263 19:11:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:44:08.263 19:11:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:44:08.263 true 00:44:08.263 19:11:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:08.263 19:11:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:44:08.263 19:11:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721934668 00:44:08.263 19:11:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=170896 00:44:08.263 19:11:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:44:08.263 19:11:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:44:08.263 19:11:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:44:10.793 [2024-07-25 19:11:10.828044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:44:10.793 [2024-07-25 19:11:10.828654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:10.793 [2024-07-25 19:11:10.828805] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:44:10.793 [2024-07-25 19:11:10.828935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:10.793 [2024-07-25 19:11:10.831174] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:44:10.793 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 170896 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 170896 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 170896 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_QmgXM.txt 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_QmgXM.txt 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 170863 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 170863 ']' 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 170863 00:44:10.793 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:44:10.794 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:10.794 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 170863 00:44:10.794 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:10.794 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:10.794 killing process with pid 170863 00:44:10.794 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 170863' 00:44:10.794 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 170863 00:44:10.794 19:11:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 170863 00:44:13.379 19:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:44:13.379 19:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:44:13.379 00:44:13.379 real 0m6.677s 00:44:13.379 user 0m22.737s 00:44:13.379 sys 0m0.853s 00:44:13.379 ************************************ 00:44:13.379 END TEST bdev_nvme_reset_stuck_adm_cmd 00:44:13.379 ************************************ 00:44:13.379 19:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:13.379 19:11:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:44:13.379 19:11:13 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:44:13.379 19:11:13 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:44:13.379 19:11:13 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:13.379 19:11:13 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:13.379 19:11:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:13.379 ************************************ 00:44:13.379 START TEST nvme_fio 00:44:13.379 ************************************ 00:44:13.379 19:11:13 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:44:13.379 19:11:13 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:44:13.379 19:11:13 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:44:13.379 19:11:13 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:44:13.379 19:11:13 nvme.nvme_fio -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:44:13.379 19:11:13 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:44:13.379 19:11:13 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:13.379 19:11:13 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:44:13.379 19:11:13 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:44:13.379 19:11:13 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:44:13.379 19:11:13 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:44:13.379 19:11:13 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:44:13.379 19:11:13 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:44:13.379 19:11:13 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:44:13.379 19:11:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:44:13.379 19:11:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:44:13.637 19:11:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:44:13.637 19:11:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:44:13.896 19:11:14 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:44:13.896 19:11:14 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:44:13.896 19:11:14 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:44:13.896 19:11:14 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:44:13.896 19:11:14 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:13.896 19:11:14 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:44:13.896 19:11:14 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:44:13.896 19:11:14 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:44:13.896 19:11:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:44:13.896 19:11:14 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:13.896 19:11:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:44:13.896 19:11:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:44:13.896 19:11:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:14.155 19:11:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:44:14.155 19:11:14 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:44:14.155 19:11:14 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:44:14.155 19:11:14 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:44:14.155 19:11:14 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # 
/usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:44:14.155 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:44:14.155 fio-3.35 00:44:14.155 Starting 1 thread 00:44:17.441 00:44:17.441 test: (groupid=0, jobs=1): err= 0: pid=171043: Thu Jul 25 19:11:17 2024 00:44:17.441 read: IOPS=19.5k, BW=76.0MiB/s (79.7MB/s)(152MiB/2001msec) 00:44:17.441 slat (usec): min=3, max=126, avg= 5.17, stdev= 1.83 00:44:17.441 clat (usec): min=214, max=11122, avg=3271.06, stdev=481.33 00:44:17.441 lat (usec): min=219, max=11249, avg=3276.23, stdev=481.86 00:44:17.441 clat percentiles (usec): 00:44:17.441 | 1.00th=[ 2409], 5.00th=[ 2900], 10.00th=[ 2966], 20.00th=[ 3032], 00:44:17.441 | 30.00th=[ 3097], 40.00th=[ 3130], 50.00th=[ 3163], 60.00th=[ 3195], 00:44:17.441 | 70.00th=[ 3261], 80.00th=[ 3392], 90.00th=[ 3884], 95.00th=[ 3949], 00:44:17.441 | 99.00th=[ 4178], 99.50th=[ 5604], 99.90th=[ 8455], 99.95th=[ 9503], 00:44:17.441 | 99.99th=[11076] 00:44:17.441 bw ( KiB/s): min=73736, max=82184, per=100.00%, avg=79234.67, stdev=4766.18, samples=3 00:44:17.441 iops : min=18434, max=20546, avg=19808.67, stdev=1191.55, samples=3 00:44:17.441 write: IOPS=19.4k, BW=75.9MiB/s (79.6MB/s)(152MiB/2001msec); 0 zone resets 00:44:17.441 slat (nsec): min=3839, max=85852, avg=5325.16, stdev=1884.43 00:44:17.441 clat (usec): min=275, max=11058, avg=3277.94, stdev=482.08 00:44:17.441 lat (usec): min=280, max=11071, avg=3283.26, stdev=482.53 00:44:17.441 clat percentiles (usec): 00:44:17.441 | 1.00th=[ 2376], 5.00th=[ 2933], 10.00th=[ 2999], 20.00th=[ 3064], 00:44:17.441 | 30.00th=[ 3097], 40.00th=[ 3130], 50.00th=[ 3163], 60.00th=[ 3195], 00:44:17.441 | 70.00th=[ 3261], 80.00th=[ 3425], 90.00th=[ 3884], 95.00th=[ 3982], 00:44:17.441 | 99.00th=[ 4228], 99.50th=[ 5473], 99.90th=[ 8717], 99.95th=[ 9634], 00:44:17.441 | 99.99th=[10159] 00:44:17.441 bw ( KiB/s): min=73704, max=82152, per=100.00%, avg=79266.67, stdev=4818.53, samples=3 00:44:17.441 iops : min=18426, max=20538, avg=19816.67, stdev=1204.63, samples=3 00:44:17.441 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:44:17.441 lat (msec) : 2=0.44%, 4=95.96%, 10=3.53%, 20=0.02% 00:44:17.441 cpu : usr=99.85%, sys=0.05%, ctx=39, majf=0, minf=34 00:44:17.441 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:44:17.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:17.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:17.441 issued rwts: total=38957,38890,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:17.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:17.441 00:44:17.441 Run status group 0 (all jobs): 00:44:17.441 READ: bw=76.0MiB/s (79.7MB/s), 76.0MiB/s-76.0MiB/s (79.7MB/s-79.7MB/s), io=152MiB (160MB), run=2001-2001msec 00:44:17.441 WRITE: bw=75.9MiB/s (79.6MB/s), 75.9MiB/s-75.9MiB/s (79.6MB/s-79.6MB/s), io=152MiB (159MB), run=2001-2001msec 00:44:18.010 ----------------------------------------------------- 00:44:18.010 Suppressions used: 00:44:18.010 count bytes template 00:44:18.010 1 32 /usr/src/fio/parse.c 00:44:18.010 ----------------------------------------------------- 00:44:18.010 00:44:18.010 ************************************ 00:44:18.010 END TEST nvme_fio 00:44:18.010 ************************************ 00:44:18.010 19:11:18 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:44:18.010 19:11:18 nvme.nvme_fio -- 
nvme/nvme.sh@46 -- # true 00:44:18.010 00:44:18.010 real 0m4.563s 00:44:18.010 user 0m3.649s 00:44:18.010 sys 0m0.697s 00:44:18.010 19:11:18 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:18.010 19:11:18 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:44:18.010 ************************************ 00:44:18.010 END TEST nvme 00:44:18.010 ************************************ 00:44:18.010 00:44:18.010 real 0m49.238s 00:44:18.010 user 2m9.459s 00:44:18.010 sys 0m10.840s 00:44:18.010 19:11:18 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:18.010 19:11:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:18.010 19:11:18 -- spdk/autotest.sh@221 -- # [[ 0 -eq 1 ]] 00:44:18.010 19:11:18 -- spdk/autotest.sh@225 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:44:18.010 19:11:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:18.010 19:11:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:18.010 19:11:18 -- common/autotest_common.sh@10 -- # set +x 00:44:18.010 ************************************ 00:44:18.010 START TEST nvme_scc 00:44:18.010 ************************************ 00:44:18.010 19:11:18 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:44:18.270 * Looking for test storage... 00:44:18.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:44:18.270 19:11:18 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:44:18.270 19:11:18 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:44:18.270 19:11:18 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:44:18.270 19:11:18 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:44:18.270 19:11:18 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:18.270 19:11:18 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:18.270 19:11:18 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:18.270 19:11:18 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:18.270 19:11:18 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:18.270 19:11:18 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:18.270 19:11:18 nvme_scc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:18.270 19:11:18 nvme_scc -- paths/export.sh@5 -- # export PATH 00:44:18.270 19:11:18 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:18.270 19:11:18 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:44:18.270 19:11:18 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:44:18.270 19:11:18 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:44:18.270 19:11:18 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:44:18.270 19:11:18 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:44:18.270 19:11:18 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:44:18.270 19:11:18 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:44:18.270 19:11:18 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:44:18.270 19:11:18 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:44:18.270 19:11:18 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:18.270 19:11:18 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:44:18.270 19:11:18 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:44:18.270 19:11:18 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:44:18.270 19:11:18 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:44:18.530 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:44:18.530 Waiting for block devices as requested 00:44:18.790 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:44:18.790 19:11:19 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:44:18.790 19:11:19 nvme_scc -- scripts/common.sh@15 -- # local i 00:44:18.790 19:11:19 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:44:18.790 19:11:19 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:44:18.790 19:11:19 nvme_scc -- scripts/common.sh@24 -- # return 0 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.790 19:11:19 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:44:18.790 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:44:18.791 19:11:19 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:44:18.791 19:11:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:18.791 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:44:18.792 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:44:18.792 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.792 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.792 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:44:18.792 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:44:18.792 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:44:18.792 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:18.792 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:18.792 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:44:19.053 19:11:19 
nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.053 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:44:19.054 
19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 
00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.054 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:44:19.055 19:11:19 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme0n1[ncap]="0x140000"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:44:19.055 
19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.055 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:44:19.056 19:11:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:44:19.056 
19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:44:19.056 19:11:19 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@192 -- # local ctrl 
feature=scc 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:44:19.056 19:11:19 nvme_scc -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:44:19.057 19:11:19 nvme_scc -- nvme/functions.sh@206 -- # echo nvme0 00:44:19.057 19:11:19 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:44:19.057 19:11:19 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:44:19.057 19:11:19 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:44:19.057 19:11:19 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:44:19.626 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:44:19.626 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:44:21.006 19:11:21 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:44:21.006 19:11:21 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:44:21.006 19:11:21 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:21.006 19:11:21 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:44:21.006 ************************************ 00:44:21.006 START TEST nvme_simple_copy 00:44:21.006 ************************************ 00:44:21.006 19:11:21 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:44:21.006 Initializing NVMe Controllers 00:44:21.006 Attaching to 0000:00:10.0 00:44:21.006 Controller supports SCC. Attached to 0000:00:10.0 00:44:21.006 Namespace ID: 1 size: 5GB 00:44:21.006 Initialization complete. 
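[editorial note] The trace above shows how nvme/functions.sh builds its controller map: it walks /sys/class/nvme/nvme*, feeds each device to nvme-cli's id-ctrl, reads the "field : value" lines into a bash associative array (nvme0[oncs]=0x15d and friends), and then decides Simple Copy support by testing ONCS bit 8. Below is a minimal sketch of that pattern, not the SPDK helper itself; /dev/nvme0 is only an example device and the parsing assumes nvme-cli's default plain-text id-ctrl output.

#!/usr/bin/env bash
# Hedged sketch of the id-ctrl parsing loop traced above -- not the SPDK helper.
declare -A ctrl

while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}        # field names arrive padded with spaces
    [[ -n $reg && -n $val ]] || continue
    ctrl[$reg]=${val# }             # keep the value, minus the space after the colon
done < <(nvme id-ctrl /dev/nvme0)

# ONCS bit 8 (0x100) advertises the Simple Copy command; 0x15d above has it set.
if (( ctrl[oncs] & 1 << 8 )); then
    echo "controller supports Simple Copy (SCC)"
fi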
00:44:21.006 00:44:21.006 Controller QEMU NVMe Ctrl (12340 ) 00:44:21.006 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:44:21.006 Namespace Block Size:4096 00:44:21.006 Writing LBAs 0 to 63 with Random Data 00:44:21.006 Copied LBAs from 0 - 63 to the Destination LBA 256 00:44:21.006 LBAs matching Written Data: 64 00:44:21.265 00:44:21.265 real 0m0.334s 00:44:21.265 user 0m0.128s 00:44:21.265 sys 0m0.108s 00:44:21.265 19:11:21 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:21.265 19:11:21 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:44:21.265 ************************************ 00:44:21.265 END TEST nvme_simple_copy 00:44:21.265 ************************************ 00:44:21.265 ************************************ 00:44:21.265 END TEST nvme_scc 00:44:21.265 ************************************ 00:44:21.265 00:44:21.265 real 0m3.152s 00:44:21.265 user 0m0.820s 00:44:21.265 sys 0m2.183s 00:44:21.265 19:11:21 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:21.265 19:11:21 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:44:21.265 19:11:21 -- spdk/autotest.sh@227 -- # [[ 0 -eq 1 ]] 00:44:21.265 19:11:21 -- spdk/autotest.sh@230 -- # [[ 0 -eq 1 ]] 00:44:21.265 19:11:21 -- spdk/autotest.sh@233 -- # [[ '' -eq 1 ]] 00:44:21.266 19:11:21 -- spdk/autotest.sh@236 -- # [[ 0 -eq 1 ]] 00:44:21.266 19:11:21 -- spdk/autotest.sh@240 -- # [[ '' -eq 1 ]] 00:44:21.266 19:11:21 -- spdk/autotest.sh@244 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:44:21.266 19:11:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:21.266 19:11:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:21.266 19:11:21 -- common/autotest_common.sh@10 -- # set +x 00:44:21.266 ************************************ 00:44:21.266 START TEST nvme_rpc 00:44:21.266 ************************************ 00:44:21.266 19:11:21 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:44:21.266 * Looking for test storage... 
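[editorial note] The asterisk banners and real/user/sys lines above come from the autotest run_test wrapper: it prints a START TEST banner, times the command, and closes with an END TEST banner, which is why every sub-test in this log is bracketed the same way. The following is a rough re-creation for illustration only; the real helper in autotest_common.sh additionally manages xtrace and argument checks such as the '[' 2 -le 1 ']' guard seen above.

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                       # produces the real/user/sys summary lines
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh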
00:44:21.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:44:21.266 19:11:21 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:21.266 19:11:21 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:44:21.266 19:11:21 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:44:21.266 19:11:21 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:44:21.266 19:11:21 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:44:21.266 19:11:21 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:44:21.266 19:11:21 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:44:21.266 19:11:21 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:44:21.266 19:11:21 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:21.525 19:11:21 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:44:21.525 19:11:21 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:44:21.525 19:11:21 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:44:21.525 19:11:21 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:44:21.525 19:11:21 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:44:21.525 19:11:21 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:44:21.525 19:11:21 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=171534 00:44:21.525 19:11:21 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:44:21.525 19:11:21 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:44:21.525 19:11:21 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 171534 00:44:21.525 19:11:21 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 171534 ']' 00:44:21.525 19:11:21 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:21.525 19:11:21 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:21.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:21.525 19:11:21 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:21.525 19:11:21 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:21.525 19:11:21 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:44:21.525 [2024-07-25 19:11:21.975826] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
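The get_first_nvme_bdf lookup traced above amounts to reading the generated bdev config and taking the first PCI address; a rough sketch of that lookup, assuming the same scripts/gen_nvme.sh helper and paths shown in the trace:

    # Sketch of how nvme_rpc resolves its target BDF (first NVMe traddr):
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    bdf=${bdfs[0]}                   # 0000:00:10.0 in this run
    echo "$bdf"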
00:44:21.525 [2024-07-25 19:11:21.975982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171534 ] 00:44:21.784 [2024-07-25 19:11:22.136841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:21.784 [2024-07-25 19:11:22.357849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:21.784 [2024-07-25 19:11:22.357856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:22.722 19:11:23 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:22.722 19:11:23 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:44:22.722 19:11:23 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:44:22.982 Nvme0n1 00:44:22.982 19:11:23 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:44:22.982 19:11:23 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:44:23.240 request: 00:44:23.240 { 00:44:23.240 "bdev_name": "Nvme0n1", 00:44:23.240 "filename": "non_existing_file", 00:44:23.240 "method": "bdev_nvme_apply_firmware", 00:44:23.240 "req_id": 1 00:44:23.240 } 00:44:23.240 Got JSON-RPC error response 00:44:23.240 response: 00:44:23.240 { 00:44:23.240 "code": -32603, 00:44:23.240 "message": "open file failed." 00:44:23.240 } 00:44:23.240 19:11:23 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:44:23.240 19:11:23 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:44:23.240 19:11:23 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:44:23.499 19:11:23 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:44:23.499 19:11:23 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 171534 00:44:23.499 19:11:23 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 171534 ']' 00:44:23.499 19:11:23 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 171534 00:44:23.499 19:11:23 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:44:23.499 19:11:23 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:23.499 19:11:23 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 171534 00:44:23.499 19:11:23 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:23.499 19:11:23 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:23.499 19:11:23 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 171534' 00:44:23.499 killing process with pid 171534 00:44:23.499 19:11:23 nvme_rpc -- common/autotest_common.sh@969 -- # kill 171534 00:44:23.499 19:11:23 nvme_rpc -- common/autotest_common.sh@974 -- # wait 171534 00:44:26.032 ************************************ 00:44:26.032 END TEST nvme_rpc 00:44:26.032 ************************************ 00:44:26.032 00:44:26.032 real 0m4.808s 00:44:26.032 user 0m8.554s 00:44:26.032 sys 0m0.833s 00:44:26.032 19:11:26 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:26.032 19:11:26 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:44:26.032 19:11:26 -- spdk/autotest.sh@245 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:44:26.032 19:11:26 -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:44:26.032 19:11:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:26.032 19:11:26 -- common/autotest_common.sh@10 -- # set +x 00:44:26.032 ************************************ 00:44:26.032 START TEST nvme_rpc_timeouts 00:44:26.032 ************************************ 00:44:26.032 19:11:26 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:44:26.291 * Looking for test storage... 00:44:26.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:44:26.292 19:11:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:26.292 19:11:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_171623 00:44:26.292 19:11:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_171623 00:44:26.292 19:11:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=171649 00:44:26.292 19:11:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:44:26.292 19:11:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 171649 00:44:26.292 19:11:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:44:26.292 19:11:26 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 171649 ']' 00:44:26.292 19:11:26 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:26.292 19:11:26 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:26.292 19:11:26 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:26.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:26.292 19:11:26 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:26.292 19:11:26 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:44:26.292 [2024-07-25 19:11:26.821964] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
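The nvme_rpc run that just finished boils down to three rpc.py calls against the attached controller, with the deliberately missing firmware file producing the -32603 "open file failed." error seen above; a condensed recap (rpc_py path as set in that test, not a new test):

    # Condensed recap of the nvme_rpc flow above:
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # creates Nvme0n1
    $rpc_py bdev_nvme_apply_firmware non_existing_file Nvme0n1            # expected failure: "open file failed." (-32603)
    $rpc_py bdev_nvme_detach_controller Nvme0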
00:44:26.292 [2024-07-25 19:11:26.822205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171649 ] 00:44:26.550 [2024-07-25 19:11:27.006862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:26.809 [2024-07-25 19:11:27.224400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:26.809 [2024-07-25 19:11:27.224414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:27.747 19:11:28 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:27.747 Checking default timeout settings: 00:44:27.747 19:11:28 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:44:27.747 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:44:27.747 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:44:28.006 Making settings changes with rpc: 00:44:28.006 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:44:28.006 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:44:28.266 Check default vs. modified settings: 00:44:28.266 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:44:28.266 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_171623 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_171623 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:44:28.526 Setting action_on_timeout is changed as expected. 
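The default-vs-modified comparison traced above is a grep/awk/sed pipeline over the two saved config snapshots; a minimal sketch of that loop, with the tmpfile names and the three settings taken from the trace (exact failure handling omitted):

    # Sketch of the settings_to_check comparison loop above:
    default=/tmp/settings_default_171623
    modified=/tmp/settings_modified_171623
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" "$default"  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep  "$setting" "$modified" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ "$before" != "$after" ]] && echo "Setting $setting is changed as expected."
    done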
00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_171623 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_171623 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:44:28.526 Setting timeout_us is changed as expected. 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_171623 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_171623 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:44:28.526 Setting timeout_admin_us is changed as expected. 
00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_171623 /tmp/settings_modified_171623 00:44:28.526 19:11:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 171649 00:44:28.526 19:11:28 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 171649 ']' 00:44:28.526 19:11:28 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 171649 00:44:28.526 19:11:28 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:44:28.526 19:11:28 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:28.526 19:11:28 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 171649 00:44:28.526 19:11:28 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:28.526 19:11:28 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:28.526 19:11:28 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 171649' 00:44:28.526 killing process with pid 171649 00:44:28.526 19:11:28 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 171649 00:44:28.526 19:11:28 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 171649 00:44:31.820 RPC TIMEOUT SETTING TEST PASSED. 00:44:31.820 19:11:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:44:31.820 00:44:31.820 real 0m5.137s 00:44:31.820 user 0m9.331s 00:44:31.820 sys 0m0.881s 00:44:31.820 19:11:31 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:31.820 ************************************ 00:44:31.820 END TEST nvme_rpc_timeouts 00:44:31.820 ************************************ 00:44:31.820 19:11:31 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:44:31.820 19:11:31 -- spdk/autotest.sh@247 -- # uname -s 00:44:31.820 19:11:31 -- spdk/autotest.sh@247 -- # '[' Linux = Linux ']' 00:44:31.820 19:11:31 -- spdk/autotest.sh@248 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:44:31.820 19:11:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:31.820 19:11:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:31.820 19:11:31 -- common/autotest_common.sh@10 -- # set +x 00:44:31.820 ************************************ 00:44:31.820 START TEST sw_hotplug 00:44:31.820 ************************************ 00:44:31.821 19:11:31 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:44:31.821 * Looking for test storage... 
00:44:31.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:44:31.821 19:11:31 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:44:31.821 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:44:31.821 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:44:33.201 19:11:33 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:44:33.201 19:11:33 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:44:33.201 19:11:33 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:44:33.201 19:11:33 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@230 -- # local class 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@15 -- # local i 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@325 
-- # (( 1 )) 00:44:33.201 19:11:33 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:44:33.201 19:11:33 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=1 00:44:33.201 19:11:33 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:44:33.201 19:11:33 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:44:33.460 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:44:33.460 Waiting for block devices as requested 00:44:33.460 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:44:33.719 19:11:34 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED=0000:00:10.0 00:44:33.719 19:11:34 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:44:33.978 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:44:33.978 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:44:34.238 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:44:35.620 19:11:35 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:44:35.620 19:11:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:44:35.620 19:11:35 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:44:35.620 19:11:35 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:44:35.620 19:11:35 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=172243 00:44:35.620 19:11:35 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:44:35.620 19:11:35 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:44:35.620 19:11:35 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 3 -r 3 -l warning 00:44:35.620 19:11:35 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:44:35.620 19:11:35 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:44:35.620 19:11:35 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:44:35.620 19:11:35 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:44:35.620 19:11:35 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:44:35.620 19:11:35 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:44:35.620 19:11:35 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:44:35.620 19:11:35 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:44:35.620 19:11:35 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:44:35.620 19:11:35 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:44:35.620 19:11:35 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:44:35.620 Initializing NVMe Controllers 00:44:35.620 Attaching to 0000:00:10.0 00:44:35.620 Attached to 0000:00:10.0 00:44:35.620 Initialization complete. Starting I/O... 
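The nvme_in_userspace walk above filters the PCI bus for class 01, subclass 08, progif 02 (NVMe) devices via lspci; the same pipeline, written out as it appears in the xtrace:

    # Sketch of the NVMe discovery done by nvme_in_userspace above:
    # class 01 (mass storage), subclass 08 (NVM), progif 02 => cc "0108", -p02
    lspci -mm -n -D \
        | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'
    # prints 0000:00:10.0 on this VM, hence nvme_count=1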
00:44:35.620 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:44:35.620 00:44:37.000 QEMU NVMe Ctrl (12340 ): 2136 I/Os completed (+2136) 00:44:37.000 00:44:37.570 QEMU NVMe Ctrl (12340 ): 4980 I/Os completed (+2844) 00:44:37.570 00:44:38.951 QEMU NVMe Ctrl (12340 ): 8060 I/Os completed (+3080) 00:44:38.951 00:44:39.890 QEMU NVMe Ctrl (12340 ): 11176 I/Os completed (+3116) 00:44:39.890 00:44:40.828 QEMU NVMe Ctrl (12340 ): 14348 I/Os completed (+3172) 00:44:40.828 00:44:41.396 19:11:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:44:41.396 19:11:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:44:41.396 19:11:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:44:41.396 [2024-07-25 19:11:41.900198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:44:41.396 Controller removed: QEMU NVMe Ctrl (12340 ) 00:44:41.396 [2024-07-25 19:11:41.901618] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:41.396 [2024-07-25 19:11:41.902709] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:41.396 [2024-07-25 19:11:41.902849] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:41.396 [2024-07-25 19:11:41.902898] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:41.396 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:44:41.397 [2024-07-25 19:11:41.909015] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:41.397 [2024-07-25 19:11:41.909150] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:41.397 [2024-07-25 19:11:41.909209] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:41.397 [2024-07-25 19:11:41.909305] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:41.397 19:11:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:44:41.397 19:11:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:44:41.657 19:11:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:44:41.657 19:11:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:44:41.657 19:11:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:44:41.657 19:11:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:44:41.657 00:44:41.657 19:11:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:44:41.657 19:11:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:44:41.657 Attaching to 0000:00:10.0 00:44:41.657 Attached to 0000:00:10.0 00:44:42.625 QEMU NVMe Ctrl (12340 ): 3128 I/Os completed (+3128) 00:44:42.625 00:44:44.002 QEMU NVMe Ctrl (12340 ): 6296 I/Os completed (+3168) 00:44:44.002 00:44:44.570 QEMU NVMe Ctrl (12340 ): 9456 I/Os completed (+3160) 00:44:44.570 00:44:45.948 QEMU NVMe Ctrl (12340 ): 12596 I/Os completed (+3140) 00:44:45.948 00:44:46.887 QEMU NVMe Ctrl (12340 ): 15696 I/Os completed (+3100) 00:44:46.887 00:44:47.823 QEMU NVMe Ctrl (12340 ): 18876 I/Os completed (+3180) 00:44:47.823 00:44:47.823 19:11:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:44:47.823 19:11:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:44:47.823 19:11:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:44:47.823 19:11:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:44:47.823 [2024-07-25 
19:11:48.173155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:44:47.823 Controller removed: QEMU NVMe Ctrl (12340 ) 00:44:47.823 [2024-07-25 19:11:48.175863] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:47.823 [2024-07-25 19:11:48.176029] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:47.823 [2024-07-25 19:11:48.176081] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:47.823 [2024-07-25 19:11:48.176210] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:47.823 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:44:47.823 [2024-07-25 19:11:48.182280] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:47.823 [2024-07-25 19:11:48.182480] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:47.823 [2024-07-25 19:11:48.182572] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:47.823 [2024-07-25 19:11:48.182633] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:47.823 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:44:47.823 EAL: Scan for (pci) bus failed. 00:44:47.823 19:11:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:44:47.823 19:11:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:44:47.823 19:11:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:44:47.823 19:11:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:44:47.823 19:11:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:44:47.823 19:11:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:44:48.082 19:11:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:44:48.082 19:11:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:44:48.082 Attaching to 0000:00:10.0 00:44:48.082 Attached to 0000:00:10.0 00:44:48.649 QEMU NVMe Ctrl (12340 ): 2200 I/Os completed (+2200) 00:44:48.649 00:44:49.585 QEMU NVMe Ctrl (12340 ): 5356 I/Os completed (+3156) 00:44:49.585 00:44:50.963 QEMU NVMe Ctrl (12340 ): 8572 I/Os completed (+3216) 00:44:50.963 00:44:51.900 QEMU NVMe Ctrl (12340 ): 11796 I/Os completed (+3224) 00:44:51.900 00:44:52.837 QEMU NVMe Ctrl (12340 ): 14992 I/Os completed (+3196) 00:44:52.837 00:44:53.769 QEMU NVMe Ctrl (12340 ): 18132 I/Os completed (+3140) 00:44:53.769 00:44:54.028 19:11:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:44:54.028 19:11:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:44:54.028 19:11:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:44:54.028 19:11:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:44:54.028 [2024-07-25 19:11:54.446774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
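Each of the three hotplug events above follows the same remove / wait / re-attach rhythm (hotplug_events=3, hotplug_wait=6). The xtrace does not show the redirection targets of the bare echo commands, so the sysfs paths below are assumptions for illustration only; the rescan path is the one visible in the cleanup trap later in this log:

    # Very rough sketch of one hotplug iteration above; paths are assumed, not traced.
    for ((event = 0; event < 3; event++)); do
        echo 1 > /sys/bus/pci/devices/0000:00:10.0/remove   # assumed target of the 'echo 1' above
        sleep 6                                             # hotplug_wait from the trace
        echo 1 > /sys/bus/pci/rescan                        # rescan path shown in the cleanup trap
    done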
00:44:54.028 Controller removed: QEMU NVMe Ctrl (12340 ) 00:44:54.028 [2024-07-25 19:11:54.448261] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:54.028 [2024-07-25 19:11:54.448358] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:54.028 [2024-07-25 19:11:54.448412] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:54.028 [2024-07-25 19:11:54.448453] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:54.028 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:44:54.028 [2024-07-25 19:11:54.454541] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:54.028 [2024-07-25 19:11:54.454661] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:54.028 [2024-07-25 19:11:54.454708] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:54.028 [2024-07-25 19:11:54.454809] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:54.028 19:11:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:44:54.028 19:11:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:44:54.028 19:11:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:44:54.028 19:11:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:44:54.028 19:11:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:44:54.287 19:11:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:44:54.287 19:11:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:44:54.287 19:11:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:44:54.287 Attaching to 0000:00:10.0 00:44:54.287 Attached to 0000:00:10.0 00:44:54.287 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:44:54.287 [2024-07-25 19:11:54.741031] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:45:00.858 19:12:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:45:00.858 19:12:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:45:00.858 19:12:00 sw_hotplug -- common/autotest_common.sh@717 -- # time=24.84 00:45:00.858 19:12:00 sw_hotplug -- common/autotest_common.sh@718 -- # echo 24.84 00:45:00.858 19:12:00 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:45:00.858 19:12:00 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=24.84 00:45:00.858 19:12:00 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 24.84 1 00:45:00.858 remove_attach_helper took 24.84s to complete (handling 1 nvme drive(s)) 19:12:00 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:45:07.427 19:12:06 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 172243 00:45:07.427 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (172243) - No such process 00:45:07.427 19:12:06 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 172243 00:45:07.427 19:12:06 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:45:07.427 19:12:06 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:45:07.427 19:12:06 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:45:07.427 19:12:06 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=172590 00:45:07.427 19:12:06 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:45:07.427 19:12:06 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:45:07.427 19:12:06 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 172590 00:45:07.427 19:12:06 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 172590 ']' 00:45:07.427 19:12:06 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:07.427 19:12:06 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:07.427 19:12:06 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:07.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:07.427 19:12:06 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:07.427 19:12:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:07.427 [2024-07-25 19:12:06.847838] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:45:07.427 [2024-07-25 19:12:06.848106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172590 ] 00:45:07.427 [2024-07-25 19:12:07.036087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:07.427 [2024-07-25 19:12:07.311380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:07.686 19:12:08 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:07.686 19:12:08 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:45:07.686 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:45:07.686 19:12:08 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:07.686 19:12:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:07.686 19:12:08 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:07.686 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:45:07.686 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:45:07.686 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:45:07.686 19:12:08 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:45:07.686 19:12:08 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:45:07.686 19:12:08 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:45:07.686 19:12:08 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:45:07.686 19:12:08 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:45:07.686 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:45:07.686 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:45:07.686 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:45:07.686 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:45:07.686 19:12:08 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:45:14.256 19:12:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:45:14.256 19:12:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:45:14.256 19:12:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:45:14.256 19:12:14 
sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:45:14.256 19:12:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:14.256 19:12:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:14.256 19:12:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:14.256 19:12:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:14.256 19:12:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:14.256 19:12:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:14.256 19:12:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:14.256 19:12:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:14.256 [2024-07-25 19:12:14.325806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:45:14.256 [2024-07-25 19:12:14.327644] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:14.256 [2024-07-25 19:12:14.327810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:45:14.256 [2024-07-25 19:12:14.327912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:14.256 [2024-07-25 19:12:14.327988] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:14.256 [2024-07-25 19:12:14.328153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:45:14.256 [2024-07-25 19:12:14.328211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:14.256 [2024-07-25 19:12:14.328316] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:14.256 [2024-07-25 19:12:14.328404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:45:14.256 [2024-07-25 19:12:14.328481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:14.256 [2024-07-25 19:12:14.328532] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:14.256 [2024-07-25 19:12:14.328603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:45:14.256 [2024-07-25 19:12:14.328738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:14.256 19:12:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:14.256 19:12:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:14.516 19:12:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:14.516 19:12:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:14.516 19:12:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:14.516 19:12:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:14.516 19:12:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:14.516 19:12:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:14.516 19:12:14 sw_hotplug -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:45:14.516 19:12:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:14.516 19:12:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:14.516 19:12:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:45:14.516 19:12:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:45:14.516 19:12:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:45:14.516 19:12:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:45:14.516 19:12:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:45:14.516 19:12:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:45:14.775 19:12:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:45:14.775 19:12:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:45:21.345 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:45:21.345 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:45:21.345 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:45:21.345 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:21.345 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:21.345 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:21.345 19:12:21 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:21.345 19:12:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:21.345 19:12:21 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:21.345 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:45:21.345 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:45:21.345 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:45:21.345 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:45:21.345 [2024-07-25 19:12:21.225933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
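The bdev_bdfs helper polled repeatedly above is just the RPC bdev list filtered down to NVMe PCI addresses; a plausible reconstruction from the xtrace (assuming rpc_cmd dispatches to scripts/rpc.py as elsewhere in the suite, and using a plain pipe in place of the traced /dev/fd/63 substitution):

    # Plausible reconstruction of bdev_bdfs from the xtrace above:
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }
    # sw_hotplug loops on this until the BDF disappears (device removed)
    # and then reappears (device re-attached) before counting the event done.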
00:45:21.345 [2024-07-25 19:12:21.227937] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:21.345 [2024-07-25 19:12:21.228106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:45:21.346 [2024-07-25 19:12:21.228201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:21.346 [2024-07-25 19:12:21.228265] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:21.346 [2024-07-25 19:12:21.228355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:45:21.346 [2024-07-25 19:12:21.228422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:21.346 [2024-07-25 19:12:21.228508] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:21.346 [2024-07-25 19:12:21.228596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:45:21.346 [2024-07-25 19:12:21.228670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:21.346 [2024-07-25 19:12:21.228724] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:21.346 [2024-07-25 19:12:21.228802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:45:21.346 [2024-07-25 19:12:21.228866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:21.346 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:45:21.346 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:21.346 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:21.346 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:21.346 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:21.346 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:21.346 19:12:21 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:21.346 19:12:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:21.346 19:12:21 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:21.346 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:45:21.346 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:45:21.346 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:45:21.346 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:45:21.346 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:45:21.346 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:45:21.346 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:45:21.346 19:12:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:45:27.954 19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:45:27.955 19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:45:27.955 19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:45:27.955 
19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:27.955 19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:27.955 19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:27.955 19:12:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:27.955 19:12:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:27.955 19:12:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:27.955 19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:45:27.955 19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:45:27.955 19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:45:27.955 19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:45:27.955 [2024-07-25 19:12:27.626056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:45:27.955 [2024-07-25 19:12:27.628093] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:27.955 [2024-07-25 19:12:27.628255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:45:27.955 [2024-07-25 19:12:27.628366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:27.955 [2024-07-25 19:12:27.628422] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:27.955 [2024-07-25 19:12:27.628578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:45:27.955 [2024-07-25 19:12:27.628661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:27.955 [2024-07-25 19:12:27.628710] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:27.955 [2024-07-25 19:12:27.628794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:45:27.955 [2024-07-25 19:12:27.628877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:27.955 [2024-07-25 19:12:27.628940] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:27.955 [2024-07-25 19:12:27.629063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:45:27.955 [2024-07-25 19:12:27.629113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:27.955 19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:45:27.955 19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:27.955 19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:27.955 19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:27.955 19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:27.955 19:12:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:27.955 19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:27.955 19:12:27 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:45:27.955 19:12:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:27.955 19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:45:27.955 19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:45:27.955 19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:45:27.955 19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:45:27.955 19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:45:27.955 19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:45:27.955 19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:45:27.955 19:12:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:45:34.522 19:12:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:45:34.522 19:12:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:45:34.522 19:12:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:45:34.522 19:12:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:34.522 19:12:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:34.522 19:12:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:34.522 19:12:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:34.522 19:12:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:34.522 19:12:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:34.522 19:12:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:45:34.522 19:12:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:45:34.522 19:12:33 sw_hotplug -- common/autotest_common.sh@717 -- # time=25.74 00:45:34.522 19:12:33 sw_hotplug -- common/autotest_common.sh@718 -- # echo 25.74 00:45:34.522 19:12:33 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:45:34.522 19:12:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=25.74 00:45:34.522 19:12:33 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 25.74 1 00:45:34.522 remove_attach_helper took 25.74s to complete (handling 1 nvme drive(s)) 19:12:33 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:45:34.522 19:12:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:34.522 19:12:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:34.523 19:12:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:34.523 19:12:33 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:45:34.523 19:12:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:34.523 19:12:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:34.523 19:12:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:34.523 19:12:33 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:45:34.523 19:12:33 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:45:34.523 19:12:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:45:34.523 19:12:33 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:45:34.523 19:12:33 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:45:34.523 19:12:34 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:45:34.523 19:12:34 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 
TIMEFORMAT=%2R 00:45:34.523 19:12:34 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:45:34.523 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:45:34.523 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:45:34.523 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:45:34.523 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:45:34.523 19:12:34 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:45:39.795 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:45:39.796 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:45:39.796 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:45:39.796 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:45:39.796 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:39.796 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:39.796 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:39.796 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:39.796 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:39.796 19:12:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:39.796 19:12:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:39.796 19:12:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:39.796 [2024-07-25 19:12:40.096600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:45:39.796 [2024-07-25 19:12:40.098420] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:39.796 [2024-07-25 19:12:40.098593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:45:39.796 [2024-07-25 19:12:40.098706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:39.796 [2024-07-25 19:12:40.098802] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:39.796 [2024-07-25 19:12:40.098983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:45:39.796 [2024-07-25 19:12:40.099038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:39.796 [2024-07-25 19:12:40.099083] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:39.796 [2024-07-25 19:12:40.099237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:45:39.796 [2024-07-25 19:12:40.099280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:39.796 [2024-07-25 19:12:40.099365] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:39.796 [2024-07-25 19:12:40.099478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:45:39.796 [2024-07-25 19:12:40.099527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:45:39.796 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:39.796 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:40.055 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:40.055 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:40.055 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:40.055 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:40.055 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:40.055 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:40.055 19:12:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:40.055 19:12:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:40.055 19:12:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:40.314 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:45:40.314 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:45:40.314 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:45:40.314 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:45:40.314 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:45:40.314 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:45:40.314 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:45:40.314 19:12:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:45:46.884 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:45:46.884 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:45:46.884 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:45:46.884 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:46.884 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:46.884 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:46.884 19:12:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:46.884 19:12:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:46.884 19:12:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:46.884 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:45:46.884 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:45:46.884 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:45:46.884 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:45:46.884 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:45:46.884 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:46.884 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:46.884 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:46.884 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:46.884 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:46.884 19:12:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:46.884 19:12:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:46.884 19:12:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:46.884 [2024-07-25 19:12:46.996744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in 
failed state. 00:45:46.884 [2024-07-25 19:12:46.998520] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:46.884 [2024-07-25 19:12:46.998654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:45:46.884 [2024-07-25 19:12:46.998838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:46.884 [2024-07-25 19:12:46.998890] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:46.884 [2024-07-25 19:12:46.999000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:45:46.884 [2024-07-25 19:12:46.999043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:46.884 [2024-07-25 19:12:46.999131] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:46.884 [2024-07-25 19:12:46.999172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:45:46.884 [2024-07-25 19:12:46.999228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:46.884 [2024-07-25 19:12:46.999347] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:46.884 [2024-07-25 19:12:46.999393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:45:46.884 [2024-07-25 19:12:46.999467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:46.884 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:46.884 19:12:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:47.143 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:47.143 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:47.143 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:47.143 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:47.143 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:47.143 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:47.143 19:12:47 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:47.143 19:12:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:47.143 19:12:47 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:47.143 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:45:47.143 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:45:47.143 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:45:47.143 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:45:47.143 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:45:47.143 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:45:47.403 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:45:47.403 19:12:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:45:53.970 19:12:53 
sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:45:53.970 19:12:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:45:53.971 19:12:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:45:53.971 19:12:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:53.971 19:12:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:53.971 19:12:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:53.971 19:12:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:53.971 19:12:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:53.971 19:12:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:53.971 19:12:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:45:53.971 19:12:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:45:53.971 19:12:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:45:53.971 19:12:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:45:53.971 19:12:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:45:53.971 19:12:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:53.971 19:12:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:53.971 19:12:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:53.971 19:12:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:53.971 19:12:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:53.971 19:12:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:53.971 19:12:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:53.971 [2024-07-25 19:12:53.896900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
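The wait loop traced at sw_hotplug.sh lines 50-51 (the repeated '(( 1 > 0 ))', 'sleep 0.5' and 'Still waiting for ... to be gone' entries above) reads, from the xtrace ordering, like a single while-condition that re-lists bdevs, tests the count and paces itself, with the printf as the loop body. One plausible reconstruction; the exact control flow is an assumption:

    # Keep polling until the removed controller's BDF disappears from bdev_get_bdevs.
    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)) && sleep 0.5; do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    done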
00:45:53.971 [2024-07-25 19:12:53.898683] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:53.971 [2024-07-25 19:12:53.898858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:45:53.971 [2024-07-25 19:12:53.899014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:53.971 [2024-07-25 19:12:53.899117] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:53.971 [2024-07-25 19:12:53.899172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:45:53.971 [2024-07-25 19:12:53.899281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:53.971 [2024-07-25 19:12:53.899337] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:53.971 [2024-07-25 19:12:53.899442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:45:53.971 [2024-07-25 19:12:53.899491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:53.971 [2024-07-25 19:12:53.899540] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:53.971 [2024-07-25 19:12:53.899601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:45:53.971 [2024-07-25 19:12:53.899677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:53.971 19:12:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:53.971 19:12:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:53.971 19:12:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:53.971 19:12:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:53.971 19:12:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:53.971 19:12:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:53.971 19:12:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:53.971 19:12:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:53.971 19:12:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:53.971 19:12:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:53.971 19:12:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:53.971 19:12:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:53.971 19:12:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:45:53.971 19:12:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:45:53.971 19:12:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:45:53.971 19:12:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:45:53.971 19:12:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:45:54.229 19:12:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:45:54.229 19:12:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:45:54.229 19:12:54 
sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:46:00.795 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:46:00.795 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:46:00.795 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:46:00.795 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:00.795 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:00.795 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:00.795 19:13:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:00.795 19:13:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:00.795 19:13:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:00.795 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:46:00.795 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:46:00.795 19:13:00 sw_hotplug -- common/autotest_common.sh@717 -- # time=26.71 00:46:00.795 19:13:00 sw_hotplug -- common/autotest_common.sh@718 -- # echo 26.71 00:46:00.795 19:13:00 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:46:00.795 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=26.71 00:46:00.795 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 26.71 1 00:46:00.795 remove_attach_helper took 26.71s to complete (handling 1 nvme drive(s)) 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:46:00.795 19:13:00 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 172590 00:46:00.795 19:13:00 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 172590 ']' 00:46:00.795 19:13:00 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 172590 00:46:00.795 19:13:00 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:46:00.795 19:13:00 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:00.795 19:13:00 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 172590 00:46:00.795 19:13:00 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:46:00.795 19:13:00 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:00.795 19:13:00 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 172590' 00:46:00.795 killing process with pid 172590 00:46:00.795 19:13:00 sw_hotplug -- common/autotest_common.sh@969 -- # kill 172590 00:46:00.795 19:13:00 sw_hotplug -- common/autotest_common.sh@974 -- # wait 172590 00:46:03.332 19:13:03 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:46:03.332 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:46:03.332 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:46:04.267 ************************************ 00:46:04.267 END TEST sw_hotplug 00:46:04.267 ************************************ 00:46:04.267 00:46:04.267 real 1m32.971s 00:46:04.267 user 1m5.485s 00:46:04.267 sys 0m18.054s 00:46:04.267 19:13:04 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:04.267 19:13:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:04.267 19:13:04 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:46:04.267 19:13:04 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:46:04.267 19:13:04 -- spdk/autotest.sh@264 -- # 
timing_exit lib 00:46:04.267 19:13:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:04.267 19:13:04 -- common/autotest_common.sh@10 -- # set +x 00:46:04.526 19:13:04 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:46:04.526 19:13:04 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:46:04.526 19:13:04 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:46:04.526 19:13:04 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:46:04.526 19:13:04 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:46:04.526 19:13:04 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:46:04.526 19:13:04 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:46:04.526 19:13:04 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:46:04.526 19:13:04 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:46:04.526 19:13:04 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:46:04.526 19:13:04 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:46:04.526 19:13:04 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:46:04.526 19:13:04 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:46:04.526 19:13:04 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:46:04.526 19:13:04 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:46:04.526 19:13:04 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:46:04.526 19:13:04 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:46:04.526 19:13:04 -- spdk/autotest.sh@379 -- # [[ 1 -eq 1 ]] 00:46:04.526 19:13:04 -- spdk/autotest.sh@380 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:46:04.526 19:13:04 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:46:04.526 19:13:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:04.526 19:13:04 -- common/autotest_common.sh@10 -- # set +x 00:46:04.526 ************************************ 00:46:04.526 START TEST blockdev_raid5f 00:46:04.526 ************************************ 00:46:04.526 19:13:04 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:46:04.526 * Looking for test storage... 
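The START TEST / END TEST banners and the real/user/sys timings that bracket each suite in this log come from the run_test wrapper invoked at autotest.sh line 380. The actual helper lives in autotest_common.sh and is not shown in this log; the sketch below is only a minimal equivalent of the behaviour visible here, not the SPDK implementation:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"      # the suite itself, e.g. blockdev.sh raid5f as dispatched above
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }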
00:46:04.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:46:04.526 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:46:04.526 19:13:05 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:46:04.526 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:46:04.526 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:46:04.526 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:46:04.527 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:46:04.527 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:46:04.527 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:46:04.527 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:46:04.527 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:46:04.527 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:46:04.527 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:46:04.527 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:46:04.527 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:46:04.527 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:46:04.527 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:46:04.527 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:46:04.527 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:46:04.527 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:46:04.527 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:46:04.527 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:46:04.527 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:46:04.527 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:46:04.527 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:46:04.527 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=173490 00:46:04.527 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:46:04.527 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 173490 00:46:04.527 19:13:05 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 173490 ']' 00:46:04.527 19:13:05 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:04.527 19:13:05 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:04.527 19:13:05 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:04.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:04.527 19:13:05 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:46:04.527 19:13:05 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:04.527 19:13:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:46:04.787 [2024-07-25 19:13:05.115493] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
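start_spdk_tgt (blockdev.sh lines 46-49, traced above) launches the target binary, records its pid (173490 in this run), installs a cleanup trap, and blocks in waitforlisten until the /var/tmp/spdk.sock RPC socket answers. A hedged sketch of that sequence as it appears from the trace; the backgrounding and argument handling are assumptions:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' &      # blockdev.sh@46
    spdk_tgt_pid=$!                                               # 173490 in this run
    trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_tgt_pid"   # polls until /var/tmp/spdk.sock accepts RPCs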
00:46:04.787 [2024-07-25 19:13:05.116369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173490 ] 00:46:04.787 [2024-07-25 19:13:05.301884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:05.046 [2024-07-25 19:13:05.587386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:05.984 19:13:06 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:05.984 19:13:06 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:46:05.984 19:13:06 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:46:05.984 19:13:06 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:46:05.984 19:13:06 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:46:05.984 19:13:06 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:05.984 19:13:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:46:05.984 Malloc0 00:46:05.984 Malloc1 00:46:05.984 Malloc2 00:46:05.984 19:13:06 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:05.984 19:13:06 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:46:05.984 19:13:06 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:05.984 19:13:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:46:05.984 19:13:06 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:05.984 19:13:06 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:46:05.984 19:13:06 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:46:05.984 19:13:06 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:05.984 19:13:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:46:05.984 19:13:06 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:05.984 19:13:06 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:46:05.984 19:13:06 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:05.984 19:13:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:46:05.984 19:13:06 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:05.984 19:13:06 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:46:05.984 19:13:06 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:05.984 19:13:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:46:05.984 19:13:06 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:05.984 19:13:06 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:46:05.984 19:13:06 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:46:05.984 19:13:06 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:46:05.984 19:13:06 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:05.984 19:13:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:46:06.243 19:13:06 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:06.243 19:13:06 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:46:06.243 19:13:06 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:46:06.243 19:13:06 blockdev_raid5f -- 
bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "e5f65f50-d51c-4d49-915b-849fc4e06419"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e5f65f50-d51c-4d49-915b-849fc4e06419",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "e5f65f50-d51c-4d49-915b-849fc4e06419",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "c5945db0-2162-4938-bcdb-f548d72f4c23",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "2aebfcdf-1af1-4195-a079-bbc7c7aac33f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "11970e53-9eb5-4161-bacd-d7c3ad237a24",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:46:06.243 19:13:06 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:46:06.243 19:13:06 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:46:06.243 19:13:06 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:46:06.243 19:13:06 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 173490 00:46:06.243 19:13:06 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 173490 ']' 00:46:06.243 19:13:06 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 173490 00:46:06.243 19:13:06 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:46:06.243 19:13:06 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:06.243 19:13:06 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 173490 00:46:06.243 19:13:06 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:46:06.243 19:13:06 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:06.243 19:13:06 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 173490' 00:46:06.243 killing process with pid 173490 00:46:06.243 19:13:06 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 173490 00:46:06.243 19:13:06 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 173490 00:46:09.531 19:13:09 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:46:09.532 19:13:09 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:46:09.532 19:13:09 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:46:09.532 19:13:09 blockdev_raid5f -- common/autotest_common.sh@1107 
-- # xtrace_disable 00:46:09.532 19:13:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:46:09.532 ************************************ 00:46:09.532 START TEST bdev_hello_world 00:46:09.532 ************************************ 00:46:09.532 19:13:09 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:46:09.532 [2024-07-25 19:13:09.526967] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:46:09.532 [2024-07-25 19:13:09.527178] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173562 ] 00:46:09.532 [2024-07-25 19:13:09.709746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:09.532 [2024-07-25 19:13:09.900603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:10.100 [2024-07-25 19:13:10.448963] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:46:10.100 [2024-07-25 19:13:10.449222] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:46:10.100 [2024-07-25 19:13:10.449309] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:46:10.100 [2024-07-25 19:13:10.449935] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:46:10.100 [2024-07-25 19:13:10.450195] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:46:10.100 [2024-07-25 19:13:10.450323] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:46:10.100 [2024-07-25 19:13:10.450431] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
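The killprocess helper, used above to tear down both the hotplug target (pid 172590) and the blockdev target (pid 173490), can be pieced together from its xtrace at autotest_common.sh lines 950-974. A rough reconstruction; the sudo special case and the error paths visible in the trace are simplified here:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 0                        # nothing to do if already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
        fi
        # the real helper branches when process_name is sudo; omitted in this sketch
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }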
00:46:10.100 00:46:10.100 [2024-07-25 19:13:10.450728] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:46:12.007 00:46:12.007 real 0m2.685s 00:46:12.007 user 0m2.275s 00:46:12.007 sys 0m0.292s 00:46:12.007 19:13:12 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:12.007 19:13:12 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:46:12.007 ************************************ 00:46:12.007 END TEST bdev_hello_world 00:46:12.007 ************************************ 00:46:12.007 19:13:12 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:46:12.007 19:13:12 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:46:12.007 19:13:12 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:12.007 19:13:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:46:12.007 ************************************ 00:46:12.007 START TEST bdev_bounds 00:46:12.007 ************************************ 00:46:12.007 19:13:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:46:12.007 19:13:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=173619 00:46:12.007 19:13:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:46:12.007 Process bdevio pid: 173619 00:46:12.007 19:13:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 173619' 00:46:12.007 19:13:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 173619 00:46:12.007 19:13:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 173619 ']' 00:46:12.007 19:13:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:12.007 19:13:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:12.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:12.007 19:13:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:12.007 19:13:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:12.007 19:13:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:46:12.007 19:13:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:46:12.007 [2024-07-25 19:13:12.251552] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
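bdev_bounds (blockdev.sh lines 288-294, starting above) runs the bdevio app against the same bdev.json configuration and then drives its CUnit suite over the RPC socket with tests.py. The command lines below are taken from the trace; backgrounding the app and capturing its pid are assumptions:

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    bdevio_pid=$!                                  # 173619 in this run
    waitforlisten "$bdevio_pid"
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    killprocess "$bdevio_pid"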
00:46:12.007 [2024-07-25 19:13:12.251870] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173619 ] 00:46:12.007 [2024-07-25 19:13:12.419642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:46:12.267 [2024-07-25 19:13:12.617701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:46:12.267 [2024-07-25 19:13:12.617880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:46:12.267 [2024-07-25 19:13:12.618067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:12.835 19:13:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:12.835 19:13:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:46:12.835 19:13:13 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:46:12.835 I/O targets: 00:46:12.835 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:46:12.835 00:46:12.835 00:46:12.835 CUnit - A unit testing framework for C - Version 2.1-3 00:46:12.835 http://cunit.sourceforge.net/ 00:46:12.835 00:46:12.835 00:46:12.835 Suite: bdevio tests on: raid5f 00:46:12.835 Test: blockdev write read block ...passed 00:46:12.835 Test: blockdev write zeroes read block ...passed 00:46:12.835 Test: blockdev write zeroes read no split ...passed 00:46:12.835 Test: blockdev write zeroes read split ...passed 00:46:13.093 Test: blockdev write zeroes read split partial ...passed 00:46:13.093 Test: blockdev reset ...passed 00:46:13.093 Test: blockdev write read 8 blocks ...passed 00:46:13.093 Test: blockdev write read size > 128k ...passed 00:46:13.093 Test: blockdev write read invalid size ...passed 00:46:13.093 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:46:13.093 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:46:13.093 Test: blockdev write read max offset ...passed 00:46:13.093 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:46:13.093 Test: blockdev writev readv 8 blocks ...passed 00:46:13.093 Test: blockdev writev readv 30 x 1block ...passed 00:46:13.093 Test: blockdev writev readv block ...passed 00:46:13.093 Test: blockdev writev readv size > 128k ...passed 00:46:13.093 Test: blockdev writev readv size > 128k in two iovs ...passed 00:46:13.093 Test: blockdev comparev and writev ...passed 00:46:13.093 Test: blockdev nvme passthru rw ...passed 00:46:13.093 Test: blockdev nvme passthru vendor specific ...passed 00:46:13.093 Test: blockdev nvme admin passthru ...passed 00:46:13.093 Test: blockdev copy ...passed 00:46:13.093 00:46:13.093 Run Summary: Type Total Ran Passed Failed Inactive 00:46:13.093 suites 1 1 n/a 0 0 00:46:13.093 tests 23 23 23 0 0 00:46:13.093 asserts 130 130 130 0 n/a 00:46:13.093 00:46:13.093 Elapsed time = 0.519 seconds 00:46:13.093 0 00:46:13.093 19:13:13 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 173619 00:46:13.093 19:13:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 173619 ']' 00:46:13.093 19:13:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 173619 00:46:13.093 19:13:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:46:13.093 19:13:13 blockdev_raid5f.bdev_bounds -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:13.093 19:13:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 173619 00:46:13.093 19:13:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:46:13.093 19:13:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:13.093 19:13:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 173619' 00:46:13.093 killing process with pid 173619 00:46:13.093 19:13:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 173619 00:46:13.093 19:13:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 173619 00:46:14.520 19:13:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:46:14.520 00:46:14.520 real 0m2.823s 00:46:14.520 user 0m6.724s 00:46:14.520 sys 0m0.351s 00:46:14.520 19:13:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:14.520 19:13:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:46:14.520 ************************************ 00:46:14.520 END TEST bdev_bounds 00:46:14.520 ************************************ 00:46:14.520 19:13:15 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:46:14.520 19:13:15 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:46:14.520 19:13:15 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:14.520 19:13:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:46:14.520 ************************************ 00:46:14.520 START TEST bdev_nbd 00:46:14.520 ************************************ 00:46:14.520 19:13:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:46:14.520 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:46:14.520 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:46:14.520 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:14.520 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:46:14.520 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:46:14.520 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:46:14.520 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:46:14.520 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:46:14.521 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:46:14.521 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:46:14.521 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:46:14.521 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:46:14.521 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:46:14.521 19:13:15 blockdev_raid5f.bdev_nbd -- 
bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:46:14.521 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:46:14.521 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=173681 00:46:14.521 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:46:14.521 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 173681 /var/tmp/spdk-nbd.sock 00:46:14.521 19:13:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 173681 ']' 00:46:14.521 19:13:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:46:14.521 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:46:14.521 19:13:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:14.521 19:13:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:46:14.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:46:14.521 19:13:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:14.521 19:13:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:46:14.779 [2024-07-25 19:13:15.173858] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:46:14.779 [2024-07-25 19:13:15.174065] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:14.779 [2024-07-25 19:13:15.353663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:15.037 [2024-07-25 19:13:15.511417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:15.605 19:13:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:15.606 19:13:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:46:15.606 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:46:15.606 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:15.606 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:46:15.606 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:46:15.606 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:46:15.606 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:15.606 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:46:15.606 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:46:15.606 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:46:15.606 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:46:15.606 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:46:15.606 19:13:15 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:46:15.606 19:13:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:46:15.865 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:46:15.865 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:46:15.865 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:46:15.865 19:13:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:46:15.865 19:13:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:46:15.865 19:13:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:46:15.865 19:13:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:46:15.865 19:13:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:46:15.865 19:13:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:46:15.865 19:13:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:46:15.865 19:13:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:46:15.865 19:13:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:46:15.865 1+0 records in 00:46:15.865 1+0 records out 00:46:15.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033127 s, 12.4 MB/s 00:46:15.865 19:13:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:15.865 19:13:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:46:15.865 19:13:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:15.865 19:13:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:46:15.865 19:13:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:46:15.865 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:46:15.865 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:46:15.865 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:46:16.123 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:46:16.123 { 00:46:16.123 "nbd_device": "/dev/nbd0", 00:46:16.123 "bdev_name": "raid5f" 00:46:16.123 } 00:46:16.123 ]' 00:46:16.123 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:46:16.123 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:46:16.123 { 00:46:16.123 "nbd_device": "/dev/nbd0", 00:46:16.123 "bdev_name": "raid5f" 00:46:16.123 } 00:46:16.123 ]' 00:46:16.123 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:46:16.123 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:46:16.123 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:16.123 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:46:16.123 
19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:46:16.123 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:46:16.123 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:46:16.123 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:46:16.381 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:46:16.381 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:46:16.381 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:46:16.381 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:46:16.381 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:46:16.381 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:46:16.381 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:46:16.381 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:46:16.381 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:46:16.381 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:16.381 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:46:16.639 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:46:16.639 19:13:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
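The waitfornbd helper traced above (autotest_common.sh lines 868-889) is what decides that /dev/nbd0 is actually usable after nbd_start_disk: it polls /proc/partitions for the device and then forces a direct-I/O read through it. A hedged reconstruction; the retry pacing is assumed, and the second retry loop around the dd seen in the trace is collapsed into a single pass:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; the trace only shows the first pass succeeding
        done
        # Read one 4k block through the device to prove the nbd mapping works.
        dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
            bs=4096 count=1 iflag=direct
        size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        [ "$size" != 0 ]
    }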
00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:46:16.639 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:46:16.640 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:46:16.898 /dev/nbd0 00:46:16.898 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:46:16.898 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:46:16.898 19:13:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:46:16.898 19:13:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:46:16.898 19:13:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:46:16.898 19:13:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:46:16.898 19:13:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:46:16.898 19:13:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:46:16.898 19:13:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:46:16.898 19:13:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:46:16.898 19:13:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:46:16.898 1+0 records in 00:46:16.898 1+0 records out 00:46:16.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222487 s, 18.4 MB/s 00:46:16.898 19:13:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:16.898 19:13:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:46:16.898 19:13:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:16.898 19:13:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:46:16.898 19:13:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:46:16.898 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:46:16.898 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:46:16.898 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:46:16.898 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:16.898 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:46:17.157 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:46:17.157 { 00:46:17.157 "nbd_device": "/dev/nbd0", 00:46:17.157 "bdev_name": "raid5f" 00:46:17.157 } 00:46:17.157 ]' 00:46:17.157 19:13:17 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:46:17.157 { 00:46:17.157 "nbd_device": "/dev/nbd0", 00:46:17.157 "bdev_name": "raid5f" 00:46:17.157 } 00:46:17.157 ]' 00:46:17.157 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:46:17.157 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:46:17.157 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:46:17.157 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:46:17.157 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:46:17.157 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:46:17.157 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:46:17.157 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:46:17.157 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:46:17.157 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:46:17.157 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:46:17.158 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:46:17.158 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:46:17.158 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:46:17.158 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:46:17.158 256+0 records in 00:46:17.158 256+0 records out 00:46:17.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116997 s, 89.6 MB/s 00:46:17.158 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:46:17.158 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:46:17.158 256+0 records in 00:46:17.158 256+0 records out 00:46:17.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0304589 s, 34.4 MB/s 00:46:17.158 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:46:17.158 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:46:17.158 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:46:17.158 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:46:17.158 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:46:17.158 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:46:17.158 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:46:17.158 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:46:17.158 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:46:17.158 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:46:17.158 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:46:17.158 19:13:17 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:17.158 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:46:17.158 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:46:17.158 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:46:17.158 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:46:17.158 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:46:17.416 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:46:17.416 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:46:17.416 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:46:17.416 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:46:17.416 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:46:17.416 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:46:17.416 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:46:17.416 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:46:17.416 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:46:17.416 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:17.416 19:13:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:46:17.674 19:13:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:46:17.674 19:13:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:46:17.674 19:13:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:46:17.674 19:13:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:46:17.674 19:13:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:46:17.674 19:13:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:46:17.674 19:13:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:46:17.674 19:13:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:46:17.674 19:13:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:46:17.674 19:13:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:46:17.674 19:13:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:46:17.674 19:13:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:46:17.674 19:13:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:46:17.674 19:13:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:17.674 19:13:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:46:17.674 19:13:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:46:17.674 19:13:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:46:17.674 19:13:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
bdev_malloc_create -b malloc_lvol_verify 16 512 00:46:17.933 malloc_lvol_verify 00:46:17.933 19:13:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:46:18.192 7f111404-489f-4ac5-8975-40fa05217555 00:46:18.192 19:13:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:46:18.450 00e918ac-d3d6-45c5-9abd-f5e3f5a5eba0 00:46:18.450 19:13:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:46:18.709 /dev/nbd0 00:46:18.709 19:13:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:46:18.709 mke2fs 1.46.5 (30-Dec-2021) 00:46:18.709 00:46:18.709 Filesystem too small for a journal 00:46:18.709 Discarding device blocks: 0/1024 done 00:46:18.709 Creating filesystem with 1024 4k blocks and 1024 inodes 00:46:18.709 00:46:18.709 Allocating group tables: 0/1 done 00:46:18.709 Writing inode tables: 0/1 done 00:46:18.709 Writing superblocks and filesystem accounting information: 0/1 done 00:46:18.709 00:46:18.709 19:13:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:46:18.709 19:13:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:46:18.709 19:13:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:46:18.709 19:13:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:46:18.709 19:13:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:46:18.709 19:13:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:46:18.709 19:13:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:46:18.709 19:13:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:46:18.968 19:13:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:46:18.968 19:13:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:46:18.968 19:13:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:46:18.968 19:13:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:46:18.968 19:13:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:46:18.968 19:13:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:46:18.968 19:13:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:46:18.968 19:13:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:46:18.968 19:13:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:46:18.968 19:13:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:46:18.968 19:13:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 173681 00:46:18.968 19:13:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 173681 ']' 00:46:18.968 19:13:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 173681 00:46:18.968 19:13:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:46:18.968 19:13:19 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:18.968 19:13:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 173681 00:46:18.968 19:13:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:46:18.968 killing process with pid 173681 00:46:18.968 19:13:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:18.968 19:13:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 173681' 00:46:18.968 19:13:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 173681 00:46:18.968 19:13:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@974 -- # wait 173681 00:46:20.346 19:13:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:46:20.346 00:46:20.346 real 0m5.822s 00:46:20.346 user 0m7.735s 00:46:20.346 sys 0m1.501s 00:46:20.346 19:13:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:20.346 19:13:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:46:20.346 ************************************ 00:46:20.346 END TEST bdev_nbd 00:46:20.346 ************************************ 00:46:20.606 19:13:20 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:46:20.606 19:13:20 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:46:20.606 19:13:20 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:46:20.606 19:13:20 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:46:20.606 19:13:20 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:46:20.606 19:13:20 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:20.606 19:13:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:46:20.606 ************************************ 00:46:20.606 START TEST bdev_fio 00:46:20.606 ************************************ 00:46:20.606 19:13:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:46:20.606 19:13:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:46:20.606 19:13:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:46:20.606 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:46:20.606 19:13:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:46:20.606 19:13:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:46:20.606 19:13:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:46:20.606 19:13:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:46:20.606 19:13:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:46:20.606 19:13:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:46:20.606 19:13:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:46:20.606 19:13:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:46:20.606 19:13:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:46:20.606 19:13:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local 
fio_dir=/usr/src/fio 00:46:20.606 19:13:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:46:20.606 19:13:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:46:20.606 19:13:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:46:20.607 19:13:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:46:20.607 ************************************ 00:46:20.607 START TEST bdev_fio_rw_verify 00:46:20.607 ************************************ 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
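
For orientation, the fio stage being set up here amounts to a small generated job file plus a plugin-aware fio invocation. Only the fragments echoed in the trace above are known verbatim; the rest is a sketch of what fio_config_gen produces:

    # bdev.fio (sketch) - fio_config_gen writes the verify workload options;
    # only the lines echoed in the trace above are reproduced verbatim here
    serialize_overlap=1
    [job_raid5f]
    filename=raid5f

    # effective invocation assembled by fio_plugin (flags verbatim from the trace;
    # the ASan runtime path is resolved by the ldd/grep steps just below)
    LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
      /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
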
00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:46:20.607 19:13:21 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:46:20.866 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:46:20.866 fio-3.35 00:46:20.866 Starting 1 thread 00:46:33.078 00:46:33.078 job_raid5f: (groupid=0, jobs=1): err= 0: pid=173922: Thu Jul 25 19:13:32 2024 00:46:33.078 read: IOPS=12.5k, BW=49.0MiB/s (51.4MB/s)(490MiB/10001msec) 00:46:33.078 slat (usec): min=17, max=284, avg=19.12, stdev= 2.36 00:46:33.078 clat (usec): min=10, max=479, avg=130.32, stdev=46.09 00:46:33.078 lat (usec): min=29, max=498, avg=149.44, stdev=46.49 00:46:33.078 clat percentiles (usec): 00:46:33.078 | 50.000th=[ 135], 99.000th=[ 221], 99.900th=[ 326], 99.990th=[ 347], 00:46:33.078 | 99.999th=[ 461] 00:46:33.078 write: IOPS=13.1k, BW=51.2MiB/s (53.7MB/s)(506MiB/9874msec); 0 zone resets 00:46:33.078 slat (usec): min=7, max=279, avg=15.74, stdev= 2.85 00:46:33.078 clat (usec): min=57, max=1176, avg=292.54, stdev=39.93 00:46:33.078 lat (usec): min=72, max=1450, avg=308.29, stdev=40.92 00:46:33.078 clat percentiles (usec): 00:46:33.078 | 50.000th=[ 297], 99.000th=[ 408], 99.900th=[ 537], 99.990th=[ 840], 00:46:33.078 | 99.999th=[ 1172] 00:46:33.078 bw ( KiB/s): min=47960, max=54928, per=98.71%, avg=51794.11, stdev=2225.23, samples=19 00:46:33.078 iops : min=11990, max=13732, avg=12948.53, stdev=556.31, samples=19 00:46:33.078 lat (usec) : 
20=0.01%, 50=0.01%, 100=16.31%, 250=39.74%, 500=43.79% 00:46:33.078 lat (usec) : 750=0.15%, 1000=0.01% 00:46:33.078 lat (msec) : 2=0.01% 00:46:33.078 cpu : usr=99.68%, sys=0.27%, ctx=114, majf=0, minf=8895 00:46:33.078 IO depths : 1=7.6%, 2=19.8%, 4=55.2%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:33.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:33.078 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:33.078 issued rwts: total=125451,129522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:33.078 latency : target=0, window=0, percentile=100.00%, depth=8 00:46:33.078 00:46:33.078 Run status group 0 (all jobs): 00:46:33.078 READ: bw=49.0MiB/s (51.4MB/s), 49.0MiB/s-49.0MiB/s (51.4MB/s-51.4MB/s), io=490MiB (514MB), run=10001-10001msec 00:46:33.078 WRITE: bw=51.2MiB/s (53.7MB/s), 51.2MiB/s-51.2MiB/s (53.7MB/s-53.7MB/s), io=506MiB (531MB), run=9874-9874msec 00:46:33.645 ----------------------------------------------------- 00:46:33.645 Suppressions used: 00:46:33.645 count bytes template 00:46:33.645 1 7 /usr/src/fio/parse.c 00:46:33.645 185 17760 /usr/src/fio/iolog.c 00:46:33.646 1 904 libcrypto.so 00:46:33.646 ----------------------------------------------------- 00:46:33.646 00:46:33.646 00:46:33.646 real 0m13.012s 00:46:33.646 user 0m13.744s 00:46:33.646 sys 0m0.866s 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:46:33.646 ************************************ 00:46:33.646 END TEST bdev_fio_rw_verify 00:46:33.646 ************************************ 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 
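
The trim pass that follows only targets bdevs that advertise unmap support. The same check can be made against a live target with the jq filter used just below (a sketch, assuming the default RPC socket):

    scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'

raid5f reports "unmap": false in the dump below, so the filter returns nothing, no trim job is appended to bdev.fio, and the test proceeds straight to cleanup.
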
00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "e5f65f50-d51c-4d49-915b-849fc4e06419"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e5f65f50-d51c-4d49-915b-849fc4e06419",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "e5f65f50-d51c-4d49-915b-849fc4e06419",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "c5945db0-2162-4938-bcdb-f548d72f4c23",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "2aebfcdf-1af1-4195-a079-bbc7c7aac33f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "11970e53-9eb5-4161-bacd-d7c3ad237a24",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:46:33.646 /home/vagrant/spdk_repo/spdk 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:46:33.646 00:46:33.646 real 0m13.232s 00:46:33.646 user 0m13.876s 00:46:33.646 sys 0m0.961s 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:33.646 19:13:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:46:33.646 ************************************ 00:46:33.646 END TEST bdev_fio 00:46:33.646 ************************************ 00:46:33.905 19:13:34 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:46:33.905 19:13:34 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:46:33.905 19:13:34 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:46:33.905 19:13:34 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:33.905 19:13:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:46:33.905 ************************************ 00:46:33.905 START TEST bdev_verify 00:46:33.905 
************************************ 00:46:33.905 19:13:34 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:46:33.905 [2024-07-25 19:13:34.395206] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:46:33.905 [2024-07-25 19:13:34.395450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174092 ] 00:46:34.164 [2024-07-25 19:13:34.590221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:46:34.423 [2024-07-25 19:13:34.902059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:34.423 [2024-07-25 19:13:34.902059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:46:34.990 Running I/O for 5 seconds... 00:46:40.261 00:46:40.261 Latency(us) 00:46:40.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:40.261 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:46:40.261 Verification LBA range: start 0x0 length 0x2000 00:46:40.261 raid5f : 5.02 5674.40 22.17 0.00 0.00 33879.42 208.70 29959.31 00:46:40.261 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:46:40.261 Verification LBA range: start 0x2000 length 0x2000 00:46:40.261 raid5f : 5.02 6287.08 24.56 0.00 0.00 30613.09 213.58 27587.54 00:46:40.261 =================================================================================================================== 00:46:40.261 Total : 11961.48 46.72 0.00 0.00 32163.25 208.70 29959.31 00:46:42.174 00:46:42.174 real 0m8.009s 00:46:42.174 user 0m14.387s 00:46:42.174 sys 0m0.401s 00:46:42.174 19:13:42 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:42.174 19:13:42 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:46:42.174 ************************************ 00:46:42.174 END TEST bdev_verify 00:46:42.174 ************************************ 00:46:42.174 19:13:42 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:46:42.174 19:13:42 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:46:42.174 19:13:42 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:42.174 19:13:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:46:42.174 ************************************ 00:46:42.174 START TEST bdev_verify_big_io 00:46:42.174 ************************************ 00:46:42.174 19:13:42 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:46:42.174 [2024-07-25 19:13:42.466013] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
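
Both verify passes hand bdevperf the same --json config. Reconstructed from the raid5f dump printed during the fio stage (three 32 MiB malloc base bdevs, 512-byte blocks, 2 KiB strips), it is roughly the following; this is a sketch, not the verbatim test file, which is generated by setup steps outside this excerpt:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_malloc_create", "params": { "name": "Malloc0", "num_blocks": 65536, "block_size": 512 } },
            { "method": "bdev_malloc_create", "params": { "name": "Malloc1", "num_blocks": 65536, "block_size": 512 } },
            { "method": "bdev_malloc_create", "params": { "name": "Malloc2", "num_blocks": 65536, "block_size": 512 } },
            { "method": "bdev_raid_create",   "params": { "name": "raid5f", "raid_level": "raid5f",
                                                          "strip_size_kb": 2,
                                                          "base_bdevs": ["Malloc0", "Malloc1", "Malloc2"] } }
          ]
        }
      ]
    }
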
00:46:42.174 [2024-07-25 19:13:42.466230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174208 ] 00:46:42.174 [2024-07-25 19:13:42.649710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:46:42.431 [2024-07-25 19:13:42.899731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:46:42.431 [2024-07-25 19:13:42.899736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:42.997 Running I/O for 5 seconds... 00:46:49.563 00:46:49.563 Latency(us) 00:46:49.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:49.563 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:46:49.563 Verification LBA range: start 0x0 length 0x200 00:46:49.563 raid5f : 5.32 357.83 22.36 0.00 0.00 8710080.85 176.52 465368.02 00:46:49.563 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:46:49.563 Verification LBA range: start 0x200 length 0x200 00:46:49.563 raid5f : 5.31 382.81 23.93 0.00 0.00 8115061.48 296.47 445395.14 00:46:49.563 =================================================================================================================== 00:46:49.563 Total : 740.65 46.29 0.00 0.00 8402974.08 176.52 465368.02 00:46:50.131 00:46:50.131 real 0m8.289s 00:46:50.131 user 0m14.990s 00:46:50.131 sys 0m0.440s 00:46:50.131 19:13:50 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:50.131 19:13:50 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:46:50.131 ************************************ 00:46:50.131 END TEST bdev_verify_big_io 00:46:50.131 ************************************ 00:46:50.391 19:13:50 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:50.391 19:13:50 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:46:50.391 19:13:50 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:50.391 19:13:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:46:50.391 ************************************ 00:46:50.391 START TEST bdev_write_zeroes 00:46:50.391 ************************************ 00:46:50.391 19:13:50 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:50.391 [2024-07-25 19:13:50.823020] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:46:50.391 [2024-07-25 19:13:50.823242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174324 ] 00:46:50.650 [2024-07-25 19:13:51.003001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:50.909 [2024-07-25 19:13:51.245328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:51.478 Running I/O for 1 seconds... 
00:46:52.416 00:46:52.416 Latency(us) 00:46:52.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:52.416 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:46:52.416 raid5f : 1.00 29374.59 114.74 0.00 0.00 4343.60 1295.12 4837.18 00:46:52.416 =================================================================================================================== 00:46:52.416 Total : 29374.59 114.74 0.00 0.00 4343.60 1295.12 4837.18 00:46:54.338 00:46:54.338 real 0m3.916s 00:46:54.338 user 0m3.397s 00:46:54.338 sys 0m0.400s 00:46:54.338 19:13:54 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:54.338 19:13:54 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:46:54.338 ************************************ 00:46:54.338 END TEST bdev_write_zeroes 00:46:54.338 ************************************ 00:46:54.339 19:13:54 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:54.339 19:13:54 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:46:54.339 19:13:54 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:54.339 19:13:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:46:54.339 ************************************ 00:46:54.339 START TEST bdev_json_nonenclosed 00:46:54.339 ************************************ 00:46:54.339 19:13:54 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:54.339 [2024-07-25 19:13:54.813123] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:46:54.339 [2024-07-25 19:13:54.813339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174393 ] 00:46:54.599 [2024-07-25 19:13:54.998007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:54.915 [2024-07-25 19:13:55.235922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:54.915 [2024-07-25 19:13:55.236284] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:46:54.915 [2024-07-25 19:13:55.236430] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:46:54.915 [2024-07-25 19:13:55.236545] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:46:55.201 00:46:55.201 real 0m1.004s 00:46:55.201 user 0m0.699s 00:46:55.201 sys 0m0.205s 00:46:55.201 19:13:55 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:55.201 19:13:55 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:46:55.201 ************************************ 00:46:55.201 END TEST bdev_json_nonenclosed 00:46:55.201 ************************************ 00:46:55.459 19:13:55 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:55.459 19:13:55 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:46:55.459 19:13:55 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:55.459 19:13:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:46:55.459 ************************************ 00:46:55.459 START TEST bdev_json_nonarray 00:46:55.459 ************************************ 00:46:55.459 19:13:55 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:55.459 [2024-07-25 19:13:55.895320] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:46:55.459 [2024-07-25 19:13:55.895552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174425 ] 00:46:55.718 [2024-07-25 19:13:56.079374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:55.976 [2024-07-25 19:13:56.332336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:46:55.976 [2024-07-25 19:13:56.332724] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
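
The "not enclosed in {}" and "'subsystems' should be an array" errors above are the expected outcomes of these two tests: each feeds bdevperf a deliberately malformed config and passes only if loading fails. Illustratively (the actual fixture contents are not reproduced in this log):

    nonenclosed.json - top-level content not wrapped in an object, hence "not enclosed in {}":
        "subsystems": []

    nonarray.json - "subsystems" present but not an array, hence "'subsystems' should be an array":
        { "subsystems": { "subsystem": "bdev", "config": [] } }

    a well-formed config keeps both the outer object and the array value:
        { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
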
00:46:55.976 [2024-07-25 19:13:56.332886] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:46:55.976 [2024-07-25 19:13:56.332949] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:46:56.544 00:46:56.544 real 0m1.025s 00:46:56.544 user 0m0.732s 00:46:56.544 sys 0m0.192s 00:46:56.544 19:13:56 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:56.544 19:13:56 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:46:56.544 ************************************ 00:46:56.544 END TEST bdev_json_nonarray 00:46:56.544 ************************************ 00:46:56.544 19:13:56 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:46:56.544 19:13:56 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:46:56.544 19:13:56 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:46:56.544 19:13:56 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:46:56.544 19:13:56 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:46:56.544 19:13:56 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:46:56.544 19:13:56 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:46:56.544 19:13:56 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:46:56.544 19:13:56 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:46:56.544 19:13:56 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:46:56.544 19:13:56 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:46:56.544 00:46:56.544 real 0m52.000s 00:46:56.544 user 1m9.490s 00:46:56.544 sys 0m5.688s 00:46:56.544 19:13:56 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:56.544 19:13:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:46:56.544 ************************************ 00:46:56.544 END TEST blockdev_raid5f 00:46:56.544 ************************************ 00:46:56.544 19:13:56 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:46:56.544 19:13:56 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:46:56.544 19:13:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:56.544 19:13:56 -- common/autotest_common.sh@10 -- # set +x 00:46:56.544 19:13:56 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:46:56.544 19:13:56 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:46:56.544 19:13:56 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:46:56.544 19:13:56 -- common/autotest_common.sh@10 -- # set +x 00:46:59.077 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:46:59.077 Waiting for block devices as requested 00:46:59.077 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:46:59.337 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:46:59.337 Cleaning 00:46:59.337 Removing: /var/run/dpdk/spdk0/config 00:46:59.337 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:46:59.337 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:46:59.337 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:46:59.337 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:46:59.337 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:46:59.337 Removing: /var/run/dpdk/spdk0/hugepage_info 00:46:59.337 Removing: /dev/shm/spdk_tgt_trace.pid111945 
00:46:59.337 Removing: /var/run/dpdk/spdk0 00:46:59.337 Removing: /var/run/dpdk/spdk_pid111670 00:46:59.337 Removing: /var/run/dpdk/spdk_pid111945 00:46:59.337 Removing: /var/run/dpdk/spdk_pid112199 00:46:59.337 Removing: /var/run/dpdk/spdk_pid112323 00:46:59.337 Removing: /var/run/dpdk/spdk_pid112389 00:46:59.337 Removing: /var/run/dpdk/spdk_pid112543 00:46:59.337 Removing: /var/run/dpdk/spdk_pid112567 00:46:59.337 Removing: /var/run/dpdk/spdk_pid112735 00:46:59.337 Removing: /var/run/dpdk/spdk_pid113007 00:46:59.337 Removing: /var/run/dpdk/spdk_pid113193 00:46:59.337 Removing: /var/run/dpdk/spdk_pid113314 00:46:59.337 Removing: /var/run/dpdk/spdk_pid113428 00:46:59.337 Removing: /var/run/dpdk/spdk_pid113563 00:46:59.337 Removing: /var/run/dpdk/spdk_pid113677 00:46:59.337 Removing: /var/run/dpdk/spdk_pid113730 00:46:59.337 Removing: /var/run/dpdk/spdk_pid113784 00:46:59.337 Removing: /var/run/dpdk/spdk_pid113862 00:46:59.337 Removing: /var/run/dpdk/spdk_pid114005 00:46:59.596 Removing: /var/run/dpdk/spdk_pid114544 00:46:59.596 Removing: /var/run/dpdk/spdk_pid114636 00:46:59.596 Removing: /var/run/dpdk/spdk_pid114723 00:46:59.596 Removing: /var/run/dpdk/spdk_pid114751 00:46:59.596 Removing: /var/run/dpdk/spdk_pid114925 00:46:59.596 Removing: /var/run/dpdk/spdk_pid114946 00:46:59.596 Removing: /var/run/dpdk/spdk_pid115128 00:46:59.596 Removing: /var/run/dpdk/spdk_pid115160 00:46:59.596 Removing: /var/run/dpdk/spdk_pid115239 00:46:59.596 Removing: /var/run/dpdk/spdk_pid115267 00:46:59.596 Removing: /var/run/dpdk/spdk_pid115343 00:46:59.596 Removing: /var/run/dpdk/spdk_pid115366 00:46:59.596 Removing: /var/run/dpdk/spdk_pid115581 00:46:59.596 Removing: /var/run/dpdk/spdk_pid115631 00:46:59.596 Removing: /var/run/dpdk/spdk_pid115682 00:46:59.596 Removing: /var/run/dpdk/spdk_pid115768 00:46:59.596 Removing: /var/run/dpdk/spdk_pid115963 00:46:59.596 Removing: /var/run/dpdk/spdk_pid116075 00:46:59.596 Removing: /var/run/dpdk/spdk_pid116149 00:46:59.596 Removing: /var/run/dpdk/spdk_pid117403 00:46:59.596 Removing: /var/run/dpdk/spdk_pid117632 00:46:59.596 Removing: /var/run/dpdk/spdk_pid117847 00:46:59.596 Removing: /var/run/dpdk/spdk_pid117986 00:46:59.596 Removing: /var/run/dpdk/spdk_pid118149 00:46:59.596 Removing: /var/run/dpdk/spdk_pid118231 00:46:59.596 Removing: /var/run/dpdk/spdk_pid118269 00:46:59.596 Removing: /var/run/dpdk/spdk_pid118307 00:46:59.596 Removing: /var/run/dpdk/spdk_pid118789 00:46:59.596 Removing: /var/run/dpdk/spdk_pid118897 00:46:59.596 Removing: /var/run/dpdk/spdk_pid119019 00:46:59.596 Removing: /var/run/dpdk/spdk_pid119095 00:46:59.596 Removing: /var/run/dpdk/spdk_pid120821 00:46:59.596 Removing: /var/run/dpdk/spdk_pid121191 00:46:59.596 Removing: /var/run/dpdk/spdk_pid121389 00:46:59.596 Removing: /var/run/dpdk/spdk_pid122325 00:46:59.596 Removing: /var/run/dpdk/spdk_pid122697 00:46:59.596 Removing: /var/run/dpdk/spdk_pid122889 00:46:59.596 Removing: /var/run/dpdk/spdk_pid123826 00:46:59.596 Removing: /var/run/dpdk/spdk_pid124353 00:46:59.596 Removing: /var/run/dpdk/spdk_pid124554 00:46:59.596 Removing: /var/run/dpdk/spdk_pid126674 00:46:59.596 Removing: /var/run/dpdk/spdk_pid127148 00:46:59.596 Removing: /var/run/dpdk/spdk_pid127355 00:46:59.596 Removing: /var/run/dpdk/spdk_pid129500 00:46:59.596 Removing: /var/run/dpdk/spdk_pid130213 00:46:59.596 Removing: /var/run/dpdk/spdk_pid130420 00:46:59.596 Removing: /var/run/dpdk/spdk_pid132557 00:46:59.596 Removing: /var/run/dpdk/spdk_pid133288 00:46:59.596 Removing: /var/run/dpdk/spdk_pid133493 00:46:59.596 Removing: 
/var/run/dpdk/spdk_pid135855 00:46:59.596 Removing: /var/run/dpdk/spdk_pid136397 00:46:59.596 Removing: /var/run/dpdk/spdk_pid136619 00:46:59.596 Removing: /var/run/dpdk/spdk_pid138997 00:46:59.596 Removing: /var/run/dpdk/spdk_pid139532 00:46:59.596 Removing: /var/run/dpdk/spdk_pid139755 00:46:59.596 Removing: /var/run/dpdk/spdk_pid142112 00:46:59.596 Removing: /var/run/dpdk/spdk_pid142949 00:46:59.596 Removing: /var/run/dpdk/spdk_pid143169 00:46:59.596 Removing: /var/run/dpdk/spdk_pid143367 00:46:59.596 Removing: /var/run/dpdk/spdk_pid143907 00:46:59.596 Removing: /var/run/dpdk/spdk_pid144842 00:46:59.855 Removing: /var/run/dpdk/spdk_pid145333 00:46:59.855 Removing: /var/run/dpdk/spdk_pid146210 00:46:59.855 Removing: /var/run/dpdk/spdk_pid146772 00:46:59.855 Removing: /var/run/dpdk/spdk_pid147713 00:46:59.855 Removing: /var/run/dpdk/spdk_pid148243 00:46:59.855 Removing: /var/run/dpdk/spdk_pid151058 00:46:59.855 Removing: /var/run/dpdk/spdk_pid151768 00:46:59.855 Removing: /var/run/dpdk/spdk_pid152318 00:46:59.855 Removing: /var/run/dpdk/spdk_pid155381 00:46:59.855 Removing: /var/run/dpdk/spdk_pid156211 00:46:59.855 Removing: /var/run/dpdk/spdk_pid156836 00:46:59.855 Removing: /var/run/dpdk/spdk_pid158211 00:46:59.855 Removing: /var/run/dpdk/spdk_pid158722 00:46:59.855 Removing: /var/run/dpdk/spdk_pid159962 00:46:59.855 Removing: /var/run/dpdk/spdk_pid160470 00:46:59.855 Removing: /var/run/dpdk/spdk_pid161721 00:46:59.855 Removing: /var/run/dpdk/spdk_pid162229 00:46:59.855 Removing: /var/run/dpdk/spdk_pid163073 00:46:59.855 Removing: /var/run/dpdk/spdk_pid163129 00:46:59.855 Removing: /var/run/dpdk/spdk_pid163192 00:46:59.855 Removing: /var/run/dpdk/spdk_pid163261 00:46:59.855 Removing: /var/run/dpdk/spdk_pid163408 00:46:59.855 Removing: /var/run/dpdk/spdk_pid163564 00:46:59.855 Removing: /var/run/dpdk/spdk_pid163789 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164095 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164110 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164168 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164204 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164245 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164277 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164305 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164345 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164379 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164411 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164447 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164478 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164517 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164552 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164583 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164611 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164643 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164673 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164705 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164737 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164792 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164827 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164872 00:46:59.855 Removing: /var/run/dpdk/spdk_pid164955 00:46:59.855 Removing: /var/run/dpdk/spdk_pid165009 00:46:59.855 Removing: /var/run/dpdk/spdk_pid165036 00:46:59.855 Removing: /var/run/dpdk/spdk_pid165081 00:46:59.855 Removing: /var/run/dpdk/spdk_pid165117 00:46:59.855 Removing: /var/run/dpdk/spdk_pid165143 00:46:59.855 Removing: /var/run/dpdk/spdk_pid165215 00:46:59.855 Removing: /var/run/dpdk/spdk_pid165246 00:46:59.855 Removing: /var/run/dpdk/spdk_pid165293 00:46:59.855 Removing: 
/var/run/dpdk/spdk_pid165325 00:46:59.855 Removing: /var/run/dpdk/spdk_pid165357 00:46:59.855 Removing: /var/run/dpdk/spdk_pid165378 00:46:59.855 Removing: /var/run/dpdk/spdk_pid165407 00:46:59.855 Removing: /var/run/dpdk/spdk_pid165434 00:46:59.855 Removing: /var/run/dpdk/spdk_pid165463 00:46:59.855 Removing: /var/run/dpdk/spdk_pid165487 00:46:59.855 Removing: /var/run/dpdk/spdk_pid165537 00:47:00.114 Removing: /var/run/dpdk/spdk_pid165587 00:47:00.114 Removing: /var/run/dpdk/spdk_pid165620 00:47:00.114 Removing: /var/run/dpdk/spdk_pid165665 00:47:00.114 Removing: /var/run/dpdk/spdk_pid165698 00:47:00.114 Removing: /var/run/dpdk/spdk_pid165720 00:47:00.114 Removing: /var/run/dpdk/spdk_pid165787 00:47:00.114 Removing: /var/run/dpdk/spdk_pid165817 00:47:00.114 Removing: /var/run/dpdk/spdk_pid165864 00:47:00.114 Removing: /var/run/dpdk/spdk_pid165893 00:47:00.114 Removing: /var/run/dpdk/spdk_pid165917 00:47:00.114 Removing: /var/run/dpdk/spdk_pid165948 00:47:00.114 Removing: /var/run/dpdk/spdk_pid165972 00:47:00.114 Removing: /var/run/dpdk/spdk_pid166001 00:47:00.114 Removing: /var/run/dpdk/spdk_pid166025 00:47:00.114 Removing: /var/run/dpdk/spdk_pid166054 00:47:00.114 Removing: /var/run/dpdk/spdk_pid166155 00:47:00.114 Removing: /var/run/dpdk/spdk_pid166249 00:47:00.114 Removing: /var/run/dpdk/spdk_pid166406 00:47:00.114 Removing: /var/run/dpdk/spdk_pid166441 00:47:00.114 Removing: /var/run/dpdk/spdk_pid166502 00:47:00.114 Removing: /var/run/dpdk/spdk_pid166562 00:47:00.114 Removing: /var/run/dpdk/spdk_pid166599 00:47:00.114 Removing: /var/run/dpdk/spdk_pid166636 00:47:00.114 Removing: /var/run/dpdk/spdk_pid166672 00:47:00.114 Removing: /var/run/dpdk/spdk_pid166721 00:47:00.114 Removing: /var/run/dpdk/spdk_pid166754 00:47:00.114 Removing: /var/run/dpdk/spdk_pid166845 00:47:00.114 Removing: /var/run/dpdk/spdk_pid166910 00:47:00.114 Removing: /var/run/dpdk/spdk_pid166971 00:47:00.114 Removing: /var/run/dpdk/spdk_pid167245 00:47:00.114 Removing: /var/run/dpdk/spdk_pid167383 00:47:00.114 Removing: /var/run/dpdk/spdk_pid167432 00:47:00.114 Removing: /var/run/dpdk/spdk_pid167530 00:47:00.114 Removing: /var/run/dpdk/spdk_pid167620 00:47:00.114 Removing: /var/run/dpdk/spdk_pid167664 00:47:00.114 Removing: /var/run/dpdk/spdk_pid167922 00:47:00.114 Removing: /var/run/dpdk/spdk_pid168038 00:47:00.114 Removing: /var/run/dpdk/spdk_pid168140 00:47:00.114 Removing: /var/run/dpdk/spdk_pid168209 00:47:00.114 Removing: /var/run/dpdk/spdk_pid168248 00:47:00.114 Removing: /var/run/dpdk/spdk_pid168333 00:47:00.114 Removing: /var/run/dpdk/spdk_pid168780 00:47:00.114 Removing: /var/run/dpdk/spdk_pid168831 00:47:00.114 Removing: /var/run/dpdk/spdk_pid169150 00:47:00.114 Removing: /var/run/dpdk/spdk_pid169256 00:47:00.114 Removing: /var/run/dpdk/spdk_pid169373 00:47:00.114 Removing: /var/run/dpdk/spdk_pid169436 00:47:00.115 Removing: /var/run/dpdk/spdk_pid169475 00:47:00.115 Removing: /var/run/dpdk/spdk_pid169507 00:47:00.115 Removing: /var/run/dpdk/spdk_pid170863 00:47:00.115 Removing: /var/run/dpdk/spdk_pid171008 00:47:00.115 Removing: /var/run/dpdk/spdk_pid171022 00:47:00.115 Removing: /var/run/dpdk/spdk_pid171039 00:47:00.115 Removing: /var/run/dpdk/spdk_pid171534 00:47:00.115 Removing: /var/run/dpdk/spdk_pid171649 00:47:00.115 Removing: /var/run/dpdk/spdk_pid172590 00:47:00.115 Removing: /var/run/dpdk/spdk_pid173490 00:47:00.115 Removing: /var/run/dpdk/spdk_pid173562 00:47:00.115 Removing: /var/run/dpdk/spdk_pid173619 00:47:00.115 Removing: /var/run/dpdk/spdk_pid173902 00:47:00.115 Removing: 
/var/run/dpdk/spdk_pid174092 00:47:00.115 Removing: /var/run/dpdk/spdk_pid174208 00:47:00.115 Removing: /var/run/dpdk/spdk_pid174324 00:47:00.374 Removing: /var/run/dpdk/spdk_pid174393 00:47:00.374 Removing: /var/run/dpdk/spdk_pid174425 00:47:00.374 Clean 00:47:00.374 19:14:00 -- common/autotest_common.sh@1451 -- # return 0 00:47:00.374 19:14:00 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:47:00.374 19:14:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:00.374 19:14:00 -- common/autotest_common.sh@10 -- # set +x 00:47:00.374 19:14:00 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:47:00.374 19:14:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:00.374 19:14:00 -- common/autotest_common.sh@10 -- # set +x 00:47:00.634 19:14:00 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:47:00.634 19:14:00 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:47:00.634 19:14:00 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:47:00.634 19:14:00 -- spdk/autotest.sh@395 -- # hash lcov 00:47:00.634 19:14:00 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:47:00.634 19:14:00 -- spdk/autotest.sh@397 -- # hostname 00:47:00.634 19:14:00 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:47:00.634 geninfo: WARNING: invalid characters removed from testname! 00:47:47.311 19:14:40 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:47:47.311 19:14:45 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:47:47.877 19:14:48 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:47:51.164 19:14:51 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:47:53.699 19:14:53 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info 
'*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:47:56.231 19:14:56 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:47:59.516 19:14:59 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:47:59.516 19:14:59 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:59.516 19:14:59 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:47:59.516 19:14:59 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:59.516 19:14:59 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:59.516 19:14:59 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:47:59.516 19:14:59 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:47:59.516 19:14:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:47:59.516 19:14:59 -- paths/export.sh@5 -- $ export PATH 00:47:59.516 19:14:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:47:59.516 19:14:59 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:47:59.516 19:14:59 -- common/autobuild_common.sh@447 -- $ date +%s 00:47:59.516 19:14:59 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721934899.XXXXXX 00:47:59.516 19:14:59 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721934899.VHJXAF 00:47:59.516 19:14:59 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:47:59.516 19:14:59 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:47:59.516 19:14:59 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:47:59.516 19:14:59 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:47:59.516 19:14:59 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:47:59.516 19:14:59 -- 
common/autobuild_common.sh@463 -- $ get_config_params 00:47:59.516 19:14:59 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:47:59.516 19:14:59 -- common/autotest_common.sh@10 -- $ set +x 00:47:59.516 19:14:59 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:47:59.516 19:14:59 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:47:59.516 19:14:59 -- pm/common@17 -- $ local monitor 00:47:59.516 19:14:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:59.516 19:14:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:59.516 19:14:59 -- pm/common@25 -- $ sleep 1 00:47:59.516 19:14:59 -- pm/common@21 -- $ date +%s 00:47:59.516 19:14:59 -- pm/common@21 -- $ date +%s 00:47:59.516 19:14:59 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721934899 00:47:59.516 19:14:59 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721934899 00:47:59.516 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721934899_collect-vmstat.pm.log 00:47:59.516 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721934899_collect-cpu-load.pm.log 00:48:00.084 19:15:00 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:48:00.084 19:15:00 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:48:00.084 19:15:00 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:48:00.084 19:15:00 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:48:00.084 19:15:00 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:48:00.084 19:15:00 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:48:00.084 19:15:00 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:48:00.084 19:15:00 -- common/autotest_common.sh@724 -- $ xtrace_disable 00:48:00.084 19:15:00 -- common/autotest_common.sh@10 -- $ set +x 00:48:00.084 19:15:00 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:48:00.084 19:15:00 -- spdk/autopackage.sh@36 -- $ [[ -n '' ]] 00:48:00.084 19:15:00 -- spdk/autopackage.sh@40 -- $ get_config_params 00:48:00.084 19:15:00 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:48:00.084 19:15:00 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:48:00.084 19:15:00 -- common/autotest_common.sh@10 -- $ set +x 00:48:00.084 19:15:00 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:48:00.084 19:15:00 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --enable-lto --disable-unit-tests 00:48:00.084 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:48:00.084 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:48:00.652 Using 'verbs' RDMA provider 00:48:16.481 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:48:28.759 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 
00:48:28.759 Creating mk/config.mk...done. 00:48:28.759 Creating mk/cc.flags.mk...done. 00:48:28.759 Type 'make' to build. 00:48:28.759 19:15:28 -- spdk/autopackage.sh@43 -- $ make -j10 00:48:28.759 make[1]: Nothing to be done for 'all'. 00:48:34.041 The Meson build system 00:48:34.041 Version: 1.4.0 00:48:34.041 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:48:34.041 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:48:34.041 Build type: native build 00:48:34.041 Program cat found: YES (/usr/bin/cat) 00:48:34.041 Project name: DPDK 00:48:34.041 Project version: 24.03.0 00:48:34.041 C compiler for the host machine: cc (gcc 11.4.0 "cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0") 00:48:34.041 C linker for the host machine: cc ld.bfd 2.38 00:48:34.041 Host machine cpu family: x86_64 00:48:34.041 Host machine cpu: x86_64 00:48:34.041 Message: ## Building in Developer Mode ## 00:48:34.041 Program pkg-config found: YES (/usr/bin/pkg-config) 00:48:34.041 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:48:34.041 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:48:34.041 Program python3 found: YES (/usr/bin/python3) 00:48:34.041 Program cat found: YES (/usr/bin/cat) 00:48:34.041 Compiler for C supports arguments -march=native: YES 00:48:34.041 Checking for size of "void *" : 8 00:48:34.041 Checking for size of "void *" : 8 (cached) 00:48:34.041 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:48:34.041 Library m found: YES 00:48:34.041 Library numa found: YES 00:48:34.041 Has header "numaif.h" : YES 00:48:34.041 Library fdt found: NO 00:48:34.041 Library execinfo found: NO 00:48:34.041 Has header "execinfo.h" : YES 00:48:34.041 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2 00:48:34.041 Run-time dependency libarchive found: NO (tried pkgconfig) 00:48:34.041 Run-time dependency libbsd found: NO (tried pkgconfig) 00:48:34.041 Run-time dependency jansson found: NO (tried pkgconfig) 00:48:34.041 Run-time dependency openssl found: YES 3.0.2 00:48:34.041 Run-time dependency libpcap found: NO (tried pkgconfig) 00:48:34.041 Library pcap found: NO 00:48:34.041 Compiler for C supports arguments -Wcast-qual: YES 00:48:34.041 Compiler for C supports arguments -Wdeprecated: YES 00:48:34.041 Compiler for C supports arguments -Wformat: YES 00:48:34.041 Compiler for C supports arguments -Wformat-nonliteral: YES 00:48:34.041 Compiler for C supports arguments -Wformat-security: YES 00:48:34.041 Compiler for C supports arguments -Wmissing-declarations: YES 00:48:34.041 Compiler for C supports arguments -Wmissing-prototypes: YES 00:48:34.041 Compiler for C supports arguments -Wnested-externs: YES 00:48:34.041 Compiler for C supports arguments -Wold-style-definition: YES 00:48:34.041 Compiler for C supports arguments -Wpointer-arith: YES 00:48:34.041 Compiler for C supports arguments -Wsign-compare: YES 00:48:34.041 Compiler for C supports arguments -Wstrict-prototypes: YES 00:48:34.041 Compiler for C supports arguments -Wundef: YES 00:48:34.041 Compiler for C supports arguments -Wwrite-strings: YES 00:48:34.041 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:48:34.041 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:48:34.041 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:48:34.041 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:48:34.041 Program objdump found: YES 
(/usr/bin/objdump) 00:48:34.041 Compiler for C supports arguments -mavx512f: YES 00:48:34.041 Checking if "AVX512 checking" compiles: YES 00:48:34.041 Fetching value of define "__SSE4_2__" : 1 00:48:34.041 Fetching value of define "__AES__" : 1 00:48:34.041 Fetching value of define "__AVX__" : 1 00:48:34.041 Fetching value of define "__AVX2__" : 1 00:48:34.041 Fetching value of define "__AVX512BW__" : 1 00:48:34.041 Fetching value of define "__AVX512CD__" : 1 00:48:34.041 Fetching value of define "__AVX512DQ__" : 1 00:48:34.041 Fetching value of define "__AVX512F__" : 1 00:48:34.041 Fetching value of define "__AVX512VL__" : 1 00:48:34.041 Fetching value of define "__PCLMUL__" : 1 00:48:34.041 Fetching value of define "__RDRND__" : 1 00:48:34.041 Fetching value of define "__RDSEED__" : 1 00:48:34.041 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:48:34.041 Fetching value of define "__znver1__" : (undefined) 00:48:34.041 Fetching value of define "__znver2__" : (undefined) 00:48:34.041 Fetching value of define "__znver3__" : (undefined) 00:48:34.041 Fetching value of define "__znver4__" : (undefined) 00:48:34.041 Compiler for C supports arguments -ffat-lto-objects: YES 00:48:34.041 Library asan found: YES 00:48:34.042 Compiler for C supports arguments -Wno-format-truncation: YES 00:48:34.042 Message: lib/log: Defining dependency "log" 00:48:34.042 Message: lib/kvargs: Defining dependency "kvargs" 00:48:34.042 Message: lib/telemetry: Defining dependency "telemetry" 00:48:34.042 Library rt found: YES 00:48:34.042 Checking for function "getentropy" : NO 00:48:34.042 Message: lib/eal: Defining dependency "eal" 00:48:34.042 Message: lib/ring: Defining dependency "ring" 00:48:34.042 Message: lib/rcu: Defining dependency "rcu" 00:48:34.042 Message: lib/mempool: Defining dependency "mempool" 00:48:34.042 Message: lib/mbuf: Defining dependency "mbuf" 00:48:34.042 Fetching value of define "__PCLMUL__" : 1 (cached) 00:48:34.042 Fetching value of define "__AVX512F__" : 1 (cached) 00:48:34.042 Fetching value of define "__AVX512BW__" : 1 (cached) 00:48:34.042 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:48:34.042 Fetching value of define "__AVX512VL__" : 1 (cached) 00:48:34.042 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:48:34.042 Compiler for C supports arguments -mpclmul: YES 00:48:34.042 Compiler for C supports arguments -maes: YES 00:48:34.042 Compiler for C supports arguments -mavx512f: YES (cached) 00:48:34.042 Compiler for C supports arguments -mavx512bw: YES 00:48:34.042 Compiler for C supports arguments -mavx512dq: YES 00:48:34.042 Compiler for C supports arguments -mavx512vl: YES 00:48:34.042 Compiler for C supports arguments -mvpclmulqdq: YES 00:48:34.042 Compiler for C supports arguments -mavx2: YES 00:48:34.042 Compiler for C supports arguments -mavx: YES 00:48:34.042 Message: lib/net: Defining dependency "net" 00:48:34.042 Message: lib/meter: Defining dependency "meter" 00:48:34.042 Message: lib/ethdev: Defining dependency "ethdev" 00:48:34.042 Message: lib/pci: Defining dependency "pci" 00:48:34.042 Message: lib/cmdline: Defining dependency "cmdline" 00:48:34.042 Message: lib/hash: Defining dependency "hash" 00:48:34.042 Message: lib/timer: Defining dependency "timer" 00:48:34.042 Message: lib/compressdev: Defining dependency "compressdev" 00:48:34.042 Message: lib/cryptodev: Defining dependency "cryptodev" 00:48:34.042 Message: lib/dmadev: Defining dependency "dmadev" 00:48:34.042 Compiler for C supports arguments -Wno-cast-qual: YES 
00:48:34.042 Message: lib/power: Defining dependency "power" 00:48:34.042 Message: lib/reorder: Defining dependency "reorder" 00:48:34.042 Message: lib/security: Defining dependency "security" 00:48:34.042 Has header "linux/userfaultfd.h" : YES 00:48:34.042 Has header "linux/vduse.h" : YES 00:48:34.042 Message: lib/vhost: Defining dependency "vhost" 00:48:34.042 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:48:34.042 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:48:34.042 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:48:34.042 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:48:34.042 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:48:34.042 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:48:34.042 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:48:34.042 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:48:34.042 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:48:34.042 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:48:34.042 Program doxygen found: YES (/usr/bin/doxygen) 00:48:34.042 Configuring doxy-api-html.conf using configuration 00:48:34.042 Configuring doxy-api-man.conf using configuration 00:48:34.042 Program mandb found: YES (/usr/bin/mandb) 00:48:34.042 Program sphinx-build found: NO 00:48:34.042 Configuring rte_build_config.h using configuration 00:48:34.042 Message: 00:48:34.042 ================= 00:48:34.042 Applications Enabled 00:48:34.042 ================= 00:48:34.042 00:48:34.042 apps: 00:48:34.042 00:48:34.042 00:48:34.042 Message: 00:48:34.042 ================= 00:48:34.042 Libraries Enabled 00:48:34.042 ================= 00:48:34.042 00:48:34.042 libs: 00:48:34.042 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:48:34.042 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:48:34.042 cryptodev, dmadev, power, reorder, security, vhost, 00:48:34.042 00:48:34.042 Message: 00:48:34.042 =============== 00:48:34.042 Drivers Enabled 00:48:34.042 =============== 00:48:34.042 00:48:34.042 common: 00:48:34.042 00:48:34.042 bus: 00:48:34.042 pci, vdev, 00:48:34.042 mempool: 00:48:34.042 ring, 00:48:34.042 dma: 00:48:34.042 00:48:34.042 net: 00:48:34.042 00:48:34.042 crypto: 00:48:34.042 00:48:34.042 compress: 00:48:34.042 00:48:34.042 vdpa: 00:48:34.042 00:48:34.042 00:48:34.042 Message: 00:48:34.042 ================= 00:48:34.042 Content Skipped 00:48:34.042 ================= 00:48:34.042 00:48:34.042 apps: 00:48:34.042 dumpcap: explicitly disabled via build config 00:48:34.042 graph: explicitly disabled via build config 00:48:34.042 pdump: explicitly disabled via build config 00:48:34.042 proc-info: explicitly disabled via build config 00:48:34.042 test-acl: explicitly disabled via build config 00:48:34.042 test-bbdev: explicitly disabled via build config 00:48:34.042 test-cmdline: explicitly disabled via build config 00:48:34.042 test-compress-perf: explicitly disabled via build config 00:48:34.042 test-crypto-perf: explicitly disabled via build config 00:48:34.042 test-dma-perf: explicitly disabled via build config 00:48:34.042 test-eventdev: explicitly disabled via build config 00:48:34.042 test-fib: explicitly disabled via build config 00:48:34.042 test-flow-perf: explicitly disabled via build config 00:48:34.042 test-gpudev: explicitly disabled via build config 00:48:34.042 test-mldev: explicitly disabled via build 
config 00:48:34.042 test-pipeline: explicitly disabled via build config 00:48:34.042 test-pmd: explicitly disabled via build config 00:48:34.042 test-regex: explicitly disabled via build config 00:48:34.042 test-sad: explicitly disabled via build config 00:48:34.042 test-security-perf: explicitly disabled via build config 00:48:34.042 00:48:34.042 libs: 00:48:34.042 argparse: explicitly disabled via build config 00:48:34.042 metrics: explicitly disabled via build config 00:48:34.042 acl: explicitly disabled via build config 00:48:34.042 bbdev: explicitly disabled via build config 00:48:34.042 bitratestats: explicitly disabled via build config 00:48:34.042 bpf: explicitly disabled via build config 00:48:34.042 cfgfile: explicitly disabled via build config 00:48:34.042 distributor: explicitly disabled via build config 00:48:34.042 efd: explicitly disabled via build config 00:48:34.042 eventdev: explicitly disabled via build config 00:48:34.042 dispatcher: explicitly disabled via build config 00:48:34.042 gpudev: explicitly disabled via build config 00:48:34.042 gro: explicitly disabled via build config 00:48:34.042 gso: explicitly disabled via build config 00:48:34.042 ip_frag: explicitly disabled via build config 00:48:34.042 jobstats: explicitly disabled via build config 00:48:34.042 latencystats: explicitly disabled via build config 00:48:34.042 lpm: explicitly disabled via build config 00:48:34.042 member: explicitly disabled via build config 00:48:34.042 pcapng: explicitly disabled via build config 00:48:34.042 rawdev: explicitly disabled via build config 00:48:34.042 regexdev: explicitly disabled via build config 00:48:34.042 mldev: explicitly disabled via build config 00:48:34.042 rib: explicitly disabled via build config 00:48:34.042 sched: explicitly disabled via build config 00:48:34.042 stack: explicitly disabled via build config 00:48:34.042 ipsec: explicitly disabled via build config 00:48:34.042 pdcp: explicitly disabled via build config 00:48:34.042 fib: explicitly disabled via build config 00:48:34.042 port: explicitly disabled via build config 00:48:34.042 pdump: explicitly disabled via build config 00:48:34.042 table: explicitly disabled via build config 00:48:34.042 pipeline: explicitly disabled via build config 00:48:34.042 graph: explicitly disabled via build config 00:48:34.042 node: explicitly disabled via build config 00:48:34.042 00:48:34.042 drivers: 00:48:34.042 common/cpt: not in enabled drivers build config 00:48:34.042 common/dpaax: not in enabled drivers build config 00:48:34.042 common/iavf: not in enabled drivers build config 00:48:34.042 common/idpf: not in enabled drivers build config 00:48:34.043 common/ionic: not in enabled drivers build config 00:48:34.043 common/mvep: not in enabled drivers build config 00:48:34.043 common/octeontx: not in enabled drivers build config 00:48:34.043 bus/auxiliary: not in enabled drivers build config 00:48:34.043 bus/cdx: not in enabled drivers build config 00:48:34.043 bus/dpaa: not in enabled drivers build config 00:48:34.043 bus/fslmc: not in enabled drivers build config 00:48:34.043 bus/ifpga: not in enabled drivers build config 00:48:34.043 bus/platform: not in enabled drivers build config 00:48:34.043 bus/uacce: not in enabled drivers build config 00:48:34.043 bus/vmbus: not in enabled drivers build config 00:48:34.043 common/cnxk: not in enabled drivers build config 00:48:34.043 common/mlx5: not in enabled drivers build config 00:48:34.043 common/nfp: not in enabled drivers build config 00:48:34.043 common/nitrox: 
not in enabled drivers build config 00:48:34.043 common/qat: not in enabled drivers build config 00:48:34.043 common/sfc_efx: not in enabled drivers build config 00:48:34.043 mempool/bucket: not in enabled drivers build config 00:48:34.043 mempool/cnxk: not in enabled drivers build config 00:48:34.043 mempool/dpaa: not in enabled drivers build config 00:48:34.043 mempool/dpaa2: not in enabled drivers build config 00:48:34.043 mempool/octeontx: not in enabled drivers build config 00:48:34.043 mempool/stack: not in enabled drivers build config 00:48:34.043 dma/cnxk: not in enabled drivers build config 00:48:34.043 dma/dpaa: not in enabled drivers build config 00:48:34.043 dma/dpaa2: not in enabled drivers build config 00:48:34.043 dma/hisilicon: not in enabled drivers build config 00:48:34.043 dma/idxd: not in enabled drivers build config 00:48:34.043 dma/ioat: not in enabled drivers build config 00:48:34.043 dma/skeleton: not in enabled drivers build config 00:48:34.043 net/af_packet: not in enabled drivers build config 00:48:34.043 net/af_xdp: not in enabled drivers build config 00:48:34.043 net/ark: not in enabled drivers build config 00:48:34.043 net/atlantic: not in enabled drivers build config 00:48:34.043 net/avp: not in enabled drivers build config 00:48:34.043 net/axgbe: not in enabled drivers build config 00:48:34.043 net/bnx2x: not in enabled drivers build config 00:48:34.043 net/bnxt: not in enabled drivers build config 00:48:34.043 net/bonding: not in enabled drivers build config 00:48:34.043 net/cnxk: not in enabled drivers build config 00:48:34.043 net/cpfl: not in enabled drivers build config 00:48:34.043 net/cxgbe: not in enabled drivers build config 00:48:34.043 net/dpaa: not in enabled drivers build config 00:48:34.043 net/dpaa2: not in enabled drivers build config 00:48:34.043 net/e1000: not in enabled drivers build config 00:48:34.043 net/ena: not in enabled drivers build config 00:48:34.043 net/enetc: not in enabled drivers build config 00:48:34.043 net/enetfec: not in enabled drivers build config 00:48:34.043 net/enic: not in enabled drivers build config 00:48:34.043 net/failsafe: not in enabled drivers build config 00:48:34.043 net/fm10k: not in enabled drivers build config 00:48:34.043 net/gve: not in enabled drivers build config 00:48:34.043 net/hinic: not in enabled drivers build config 00:48:34.043 net/hns3: not in enabled drivers build config 00:48:34.043 net/i40e: not in enabled drivers build config 00:48:34.043 net/iavf: not in enabled drivers build config 00:48:34.043 net/ice: not in enabled drivers build config 00:48:34.043 net/idpf: not in enabled drivers build config 00:48:34.043 net/igc: not in enabled drivers build config 00:48:34.043 net/ionic: not in enabled drivers build config 00:48:34.043 net/ipn3ke: not in enabled drivers build config 00:48:34.043 net/ixgbe: not in enabled drivers build config 00:48:34.043 net/mana: not in enabled drivers build config 00:48:34.043 net/memif: not in enabled drivers build config 00:48:34.043 net/mlx4: not in enabled drivers build config 00:48:34.043 net/mlx5: not in enabled drivers build config 00:48:34.043 net/mvneta: not in enabled drivers build config 00:48:34.043 net/mvpp2: not in enabled drivers build config 00:48:34.043 net/netvsc: not in enabled drivers build config 00:48:34.043 net/nfb: not in enabled drivers build config 00:48:34.043 net/nfp: not in enabled drivers build config 00:48:34.043 net/ngbe: not in enabled drivers build config 00:48:34.043 net/null: not in enabled drivers build config 00:48:34.043 
net/octeontx: not in enabled drivers build config 00:48:34.043 net/octeon_ep: not in enabled drivers build config 00:48:34.043 net/pcap: not in enabled drivers build config 00:48:34.043 net/pfe: not in enabled drivers build config 00:48:34.043 net/qede: not in enabled drivers build config 00:48:34.043 net/ring: not in enabled drivers build config 00:48:34.043 net/sfc: not in enabled drivers build config 00:48:34.043 net/softnic: not in enabled drivers build config 00:48:34.043 net/tap: not in enabled drivers build config 00:48:34.043 net/thunderx: not in enabled drivers build config 00:48:34.043 net/txgbe: not in enabled drivers build config 00:48:34.043 net/vdev_netvsc: not in enabled drivers build config 00:48:34.043 net/vhost: not in enabled drivers build config 00:48:34.043 net/virtio: not in enabled drivers build config 00:48:34.043 net/vmxnet3: not in enabled drivers build config 00:48:34.043 raw/*: missing internal dependency, "rawdev" 00:48:34.043 crypto/armv8: not in enabled drivers build config 00:48:34.043 crypto/bcmfs: not in enabled drivers build config 00:48:34.043 crypto/caam_jr: not in enabled drivers build config 00:48:34.043 crypto/ccp: not in enabled drivers build config 00:48:34.043 crypto/cnxk: not in enabled drivers build config 00:48:34.043 crypto/dpaa_sec: not in enabled drivers build config 00:48:34.043 crypto/dpaa2_sec: not in enabled drivers build config 00:48:34.043 crypto/ipsec_mb: not in enabled drivers build config 00:48:34.043 crypto/mlx5: not in enabled drivers build config 00:48:34.043 crypto/mvsam: not in enabled drivers build config 00:48:34.043 crypto/nitrox: not in enabled drivers build config 00:48:34.043 crypto/null: not in enabled drivers build config 00:48:34.043 crypto/octeontx: not in enabled drivers build config 00:48:34.043 crypto/openssl: not in enabled drivers build config 00:48:34.043 crypto/scheduler: not in enabled drivers build config 00:48:34.043 crypto/uadk: not in enabled drivers build config 00:48:34.043 crypto/virtio: not in enabled drivers build config 00:48:34.043 compress/isal: not in enabled drivers build config 00:48:34.043 compress/mlx5: not in enabled drivers build config 00:48:34.043 compress/nitrox: not in enabled drivers build config 00:48:34.043 compress/octeontx: not in enabled drivers build config 00:48:34.043 compress/zlib: not in enabled drivers build config 00:48:34.043 regex/*: missing internal dependency, "regexdev" 00:48:34.043 ml/*: missing internal dependency, "mldev" 00:48:34.043 vdpa/ifc: not in enabled drivers build config 00:48:34.043 vdpa/mlx5: not in enabled drivers build config 00:48:34.043 vdpa/nfp: not in enabled drivers build config 00:48:34.043 vdpa/sfc: not in enabled drivers build config 00:48:34.043 event/*: missing internal dependency, "eventdev" 00:48:34.043 baseband/*: missing internal dependency, "bbdev" 00:48:34.043 gpu/*: missing internal dependency, "gpudev" 00:48:34.043 00:48:34.043 00:48:34.303 Build targets in project: 85 00:48:34.303 00:48:34.303 DPDK 24.03.0 00:48:34.303 00:48:34.303 User defined options 00:48:34.303 default_library : static 00:48:34.303 libdir : lib 00:48:34.303 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:48:34.303 b_lto : true 00:48:34.303 b_sanitize : address 00:48:34.303 c_args : -Wno-stringop-overflow -fcommon -fPIC -Werror 00:48:34.303 c_link_args : -Wno-stringop-overflow -fcommon 00:48:34.303 cpu_instruction_set: native 00:48:34.303 disable_apps : 
test-eventdev,test-compress-perf,pdump,test-crypto-perf,test-pmd,test-flow-perf,test-acl,test-sad,graph,proc-info,test-bbdev,test-mldev,test-gpudev,test-fib,test-cmdline,test-security-perf,dumpcap,test-pipeline,test,test-regex,test-dma-perf 00:48:34.303 disable_libs : node,lpm,acl,pdump,cfgfile,efd,latencystats,distributor,bbdev,eventdev,port,bitratestats,pdcp,bpf,argparse,graph,member,mldev,stack,pcapng,gro,fib,table,regexdev,dispatcher,sched,ipsec,metrics,gso,jobstats,pipeline,rib,ip_frag,rawdev,gpudev 00:48:34.303 enable_docs : false 00:48:34.303 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:48:34.303 enable_kmods : false 00:48:34.303 max_lcores : 128 00:48:34.303 tests : false 00:48:34.303 00:48:34.303 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:48:34.871 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:48:34.871 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:48:34.871 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:48:34.871 [3/268] Linking static target lib/librte_kvargs.a 00:48:35.130 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:48:35.130 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:48:35.130 [6/268] Linking static target lib/librte_log.a 00:48:35.130 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:48:35.130 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:48:35.130 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:48:35.130 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:48:35.130 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:48:35.130 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:48:35.130 [13/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:48:35.130 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:48:35.389 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:48:35.389 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:48:35.389 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:48:35.648 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:48:35.648 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:48:35.648 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:48:35.648 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:48:35.648 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:48:35.648 [23/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:48:35.648 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:48:35.907 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:48:35.907 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:48:35.907 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:48:35.907 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:48:35.908 [29/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:48:35.908 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:48:35.908 [31/268] Linking static target lib/librte_telemetry.a 00:48:35.908 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:48:35.908 [33/268] Linking target lib/librte_log.so.24.1 00:48:35.908 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:48:36.166 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:48:36.166 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:48:36.166 [37/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:48:36.166 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:48:36.167 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:48:36.167 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:48:36.167 [41/268] Linking target lib/librte_kvargs.so.24.1 00:48:36.167 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:48:36.425 [43/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:48:36.425 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:48:36.425 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:48:36.425 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:48:36.425 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:48:36.685 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:48:36.685 [49/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:48:36.685 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:48:36.685 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:48:36.685 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:48:36.685 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:48:36.685 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:48:36.685 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:48:36.685 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:48:36.685 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:48:36.944 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:48:36.944 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:48:36.944 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:48:36.944 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:48:36.944 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:48:36.944 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:48:36.944 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:48:37.204 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:48:37.204 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:48:37.204 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:48:37.204 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:48:37.464 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:48:37.464 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:48:37.464 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:48:37.464 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:48:37.464 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:48:37.464 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:48:37.464 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:48:37.464 [76/268] Linking target lib/librte_telemetry.so.24.1 00:48:37.464 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:48:37.723 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:48:37.723 [79/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:48:37.723 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:48:37.723 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:48:37.723 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:48:37.723 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:48:37.723 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:48:37.723 [85/268] Linking static target lib/librte_ring.a 00:48:37.983 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:48:37.983 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:48:37.983 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:48:37.983 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:48:37.983 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:48:37.983 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:48:37.983 [92/268] Linking static target lib/librte_eal.a 00:48:38.242 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:48:38.242 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:48:38.242 [95/268] Linking static target lib/librte_mempool.a 00:48:38.502 [96/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:48:38.502 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:48:38.502 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:48:38.502 [99/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:48:38.502 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:48:38.502 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:48:38.502 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:48:38.502 [103/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:48:38.502 [104/268] Linking static target lib/librte_rcu.a 00:48:38.761 [105/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:48:38.761 [106/268] Linking static target lib/librte_meter.a 00:48:38.761 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:48:38.761 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:48:38.761 [109/268] Linking static target lib/librte_net.a 00:48:38.761 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:48:38.761 [111/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:48:39.020 [112/268] Generating lib/meter.sym_chk with a custom command (wrapped by 
meson to capture output) 00:48:39.020 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:48:39.020 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:48:39.020 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:48:39.020 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:48:39.279 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:48:39.279 [118/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:48:39.279 [119/268] Linking static target lib/librte_mbuf.a 00:48:39.539 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:48:39.539 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:48:39.798 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:48:39.798 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:48:39.798 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:48:39.798 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:48:39.798 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:48:39.798 [127/268] Linking static target lib/librte_pci.a 00:48:39.798 [128/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:48:40.057 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:48:40.057 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:48:40.057 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:48:40.057 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:48:40.057 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:48:40.057 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:48:40.057 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:48:40.057 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:48:40.057 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:48:40.316 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:48:40.316 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:48:40.316 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:48:40.316 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:48:40.316 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:48:40.316 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:48:40.316 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:48:40.316 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:48:40.576 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:48:40.576 [147/268] Linking static target lib/librte_cmdline.a 00:48:40.576 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:48:40.576 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:48:40.835 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:48:40.835 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 
00:48:40.835 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:48:40.835 [153/268] Linking static target lib/librte_timer.a 00:48:40.835 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:48:40.835 [155/268] Linking static target lib/librte_compressdev.a 00:48:41.095 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:48:41.095 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:48:41.095 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:48:41.355 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:48:41.355 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:48:41.355 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:48:41.355 [162/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:48:41.355 [163/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:48:41.355 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:48:41.614 [165/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:48:41.614 [166/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:48:41.872 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:48:41.872 [168/268] Linking static target lib/librte_dmadev.a 00:48:41.872 [169/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:48:41.872 [170/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:48:42.131 [171/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:48:42.131 [172/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:48:42.131 [173/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:48:42.131 [174/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:48:42.389 [175/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:48:42.389 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:48:42.389 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:48:42.389 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:48:42.389 [179/268] Linking static target lib/librte_power.a 00:48:42.648 [180/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:48:42.648 [181/268] Linking static target lib/librte_reorder.a 00:48:42.906 [182/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:48:42.906 [183/268] Linking static target lib/librte_security.a 00:48:42.906 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:48:42.906 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:48:42.906 [186/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:48:43.164 [187/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:48:43.164 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:48:43.164 [189/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:48:43.164 [190/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:48:43.164 [191/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:48:43.164 [192/268] Linking static target lib/librte_cryptodev.a 00:48:43.423 [193/268] Linking static target lib/librte_ethdev.a 00:48:43.682 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:48:43.942 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:48:43.942 [196/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:48:43.942 [197/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:48:43.942 [198/268] Linking static target lib/librte_hash.a 00:48:44.201 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:48:44.201 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:48:44.201 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:48:44.769 [202/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:48:44.769 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:48:44.769 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:48:44.769 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:48:44.769 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:48:44.769 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:48:45.028 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:48:45.028 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:48:45.028 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:48:45.028 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:48:45.288 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:48:45.288 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:48:45.288 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:48:45.288 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:48:45.288 [216/268] Linking static target drivers/librte_bus_pci.a 00:48:45.288 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:48:45.288 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:48:45.288 [219/268] Linking static target drivers/librte_bus_vdev.a 00:48:45.288 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:48:45.288 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:48:45.547 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:48:45.547 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:48:45.547 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:48:45.547 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:48:45.547 [226/268] Linking static target drivers/librte_mempool_ring.a 00:48:45.806 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:48:46.066 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:48:51.340 [229/268] Generating 
lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:48:55.527 [230/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:48:55.527 [231/268] Linking target lib/librte_eal.so.24.1 00:48:55.527 lto-wrapper: warning: using serial compilation of 5 LTRANS jobs 00:48:55.786 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:48:55.786 [233/268] Linking target lib/librte_meter.so.24.1 00:48:56.044 [234/268] Linking target lib/librte_ring.so.24.1 00:48:56.044 [235/268] Linking target lib/librte_pci.so.24.1 00:48:56.044 [236/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:48:56.044 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:48:56.044 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:48:56.046 [239/268] Linking target lib/librte_timer.so.24.1 00:48:56.046 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:48:56.305 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:48:56.305 [242/268] Linking target lib/librte_dmadev.so.24.1 00:48:56.563 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:48:56.822 [244/268] Linking target lib/librte_mempool.so.24.1 00:48:56.822 [245/268] Linking target lib/librte_rcu.so.24.1 00:48:57.080 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:48:57.080 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:48:57.342 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:48:57.342 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:48:58.783 [250/268] Linking target lib/librte_mbuf.so.24.1 00:48:58.783 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:48:59.042 [252/268] Linking target lib/librte_reorder.so.24.1 00:48:59.302 [253/268] Linking target lib/librte_compressdev.so.24.1 00:48:59.560 [254/268] Linking target lib/librte_net.so.24.1 00:48:59.560 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:49:00.498 [256/268] Linking target lib/librte_cmdline.so.24.1 00:49:00.758 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:49:00.758 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:49:01.326 [259/268] Linking target lib/librte_security.so.24.1 00:49:03.231 [260/268] Linking target lib/librte_hash.so.24.1 00:49:03.489 [261/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:49:10.062 [262/268] Linking target lib/librte_ethdev.so.24.1 00:49:10.062 lto-wrapper: warning: using serial compilation of 6 LTRANS jobs 00:49:10.062 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:49:11.968 [264/268] Linking target lib/librte_power.so.24.1 00:49:13.349 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:49:13.349 [266/268] Linking static target lib/librte_vhost.a 00:49:15.881 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:50:02.565 [268/268] Linking target lib/librte_vhost.so.24.1 00:50:02.565 lto-wrapper: warning: using serial compilation of 8 LTRANS jobs 00:50:02.565 INFO: autodetecting backend as ninja 00:50:02.565 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:50:02.565 CC lib/log/log.o 00:50:02.565 CC lib/log/log_deprecated.o 00:50:02.565 CC lib/log/log_flags.o 00:50:02.565 CC lib/ut/ut.o 00:50:02.565 CC lib/ut_mock/mock.o 00:50:02.565 LIB libspdk_log.a 00:50:02.565 LIB libspdk_ut.a 00:50:02.565 LIB libspdk_ut_mock.a 00:50:02.565 CC lib/dma/dma.o 00:50:02.565 CXX lib/trace_parser/trace.o 00:50:02.565 CC lib/util/base64.o 00:50:02.565 CC lib/util/bit_array.o 00:50:02.565 CC lib/util/crc32.o 00:50:02.565 CC lib/util/crc16.o 00:50:02.565 CC lib/util/crc32c.o 00:50:02.565 CC lib/util/cpuset.o 00:50:02.565 CC lib/ioat/ioat.o 00:50:02.565 CC lib/util/crc32_ieee.o 00:50:02.565 CC lib/vfio_user/host/vfio_user_pci.o 00:50:02.565 CC lib/util/crc64.o 00:50:02.565 CC lib/vfio_user/host/vfio_user.o 00:50:02.565 CC lib/util/dif.o 00:50:02.565 CC lib/util/fd.o 00:50:02.565 CC lib/util/fd_group.o 00:50:02.565 LIB libspdk_ioat.a 00:50:02.565 LIB libspdk_dma.a 00:50:02.565 CC lib/util/file.o 00:50:02.565 CC lib/util/hexlify.o 00:50:02.565 CC lib/util/iov.o 00:50:02.565 CC lib/util/math.o 00:50:02.565 CC lib/util/net.o 00:50:02.565 LIB libspdk_vfio_user.a 00:50:02.565 CC lib/util/pipe.o 00:50:02.565 CC lib/util/strerror_tls.o 00:50:02.565 CC lib/util/string.o 00:50:02.565 CC lib/util/uuid.o 00:50:02.565 CC lib/util/xor.o 00:50:02.565 CC lib/util/zipf.o 00:50:02.565 LIB libspdk_util.a 00:50:02.565 LIB libspdk_trace_parser.a 00:50:02.565 CC lib/conf/conf.o 00:50:02.565 CC lib/rdma_provider/common.o 00:50:02.565 CC lib/rdma_provider/rdma_provider_verbs.o 00:50:02.565 CC lib/idxd/idxd.o 00:50:02.565 CC lib/vmd/vmd.o 00:50:02.565 CC lib/idxd/idxd_user.o 00:50:02.565 CC lib/json/json_parse.o 00:50:02.565 CC lib/env_dpdk/env.o 00:50:02.565 CC lib/rdma_utils/rdma_utils.o 00:50:02.565 CC lib/env_dpdk/memory.o 00:50:02.565 CC lib/env_dpdk/pci.o 00:50:02.565 LIB libspdk_rdma_provider.a 00:50:02.565 LIB libspdk_conf.a 00:50:02.565 CC lib/json/json_util.o 00:50:02.565 CC lib/json/json_write.o 00:50:02.565 CC lib/env_dpdk/init.o 00:50:02.565 LIB libspdk_rdma_utils.a 00:50:02.565 CC lib/env_dpdk/threads.o 00:50:02.565 CC lib/env_dpdk/pci_ioat.o 00:50:02.565 LIB libspdk_idxd.a 00:50:02.565 CC lib/vmd/led.o 00:50:02.565 CC lib/env_dpdk/pci_virtio.o 00:50:02.565 CC lib/env_dpdk/pci_vmd.o 00:50:02.565 CC lib/env_dpdk/pci_idxd.o 00:50:02.565 LIB libspdk_json.a 00:50:02.565 CC lib/env_dpdk/pci_event.o 00:50:02.565 CC lib/env_dpdk/sigbus_handler.o 00:50:02.565 CC lib/env_dpdk/pci_dpdk.o 00:50:02.565 CC lib/env_dpdk/pci_dpdk_2207.o 00:50:02.565 CC lib/env_dpdk/pci_dpdk_2211.o 00:50:02.565 LIB libspdk_vmd.a 00:50:02.565 CC lib/jsonrpc/jsonrpc_server.o 00:50:02.565 CC lib/jsonrpc/jsonrpc_client.o 00:50:02.565 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:50:02.565 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:50:02.825 LIB libspdk_jsonrpc.a 00:50:02.825 LIB libspdk_env_dpdk.a 00:50:03.084 CC lib/rpc/rpc.o 00:50:03.344 LIB libspdk_rpc.a 00:50:03.344 CC lib/trace/trace.o 00:50:03.344 CC lib/trace/trace_flags.o 00:50:03.344 CC lib/trace/trace_rpc.o 00:50:03.603 CC lib/notify/notify.o 00:50:03.603 CC lib/notify/notify_rpc.o 00:50:03.603 CC lib/keyring/keyring_rpc.o 00:50:03.603 CC lib/keyring/keyring.o 00:50:03.603 LIB libspdk_keyring.a 00:50:03.603 LIB libspdk_notify.a 00:50:03.603 LIB libspdk_trace.a 00:50:03.863 CC lib/sock/sock.o 00:50:03.863 CC lib/sock/sock_rpc.o 00:50:03.863 CC lib/thread/thread.o 00:50:03.863 CC lib/thread/iobuf.o 00:50:04.432 LIB libspdk_sock.a 00:50:04.691 CC lib/nvme/nvme_ctrlr_cmd.o 00:50:04.691 CC lib/nvme/nvme_fabric.o 
00:50:04.691 CC lib/nvme/nvme_ctrlr.o 00:50:04.691 CC lib/nvme/nvme_ns_cmd.o 00:50:04.691 CC lib/nvme/nvme_ns.o 00:50:04.691 CC lib/nvme/nvme_pcie_common.o 00:50:04.691 CC lib/nvme/nvme_qpair.o 00:50:04.691 CC lib/nvme/nvme.o 00:50:04.691 CC lib/nvme/nvme_pcie.o 00:50:04.691 LIB libspdk_thread.a 00:50:04.691 CC lib/nvme/nvme_quirks.o 00:50:04.950 CC lib/nvme/nvme_transport.o 00:50:05.209 CC lib/nvme/nvme_discovery.o 00:50:05.209 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:50:05.209 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:50:05.209 CC lib/nvme/nvme_tcp.o 00:50:05.209 CC lib/nvme/nvme_opal.o 00:50:05.209 CC lib/nvme/nvme_io_msg.o 00:50:05.209 CC lib/nvme/nvme_poll_group.o 00:50:05.209 CC lib/accel/accel.o 00:50:05.468 CC lib/nvme/nvme_zns.o 00:50:05.468 CC lib/accel/accel_rpc.o 00:50:05.468 CC lib/nvme/nvme_stubs.o 00:50:05.727 CC lib/accel/accel_sw.o 00:50:05.727 CC lib/nvme/nvme_auth.o 00:50:05.727 CC lib/nvme/nvme_cuse.o 00:50:05.727 CC lib/init/json_config.o 00:50:05.727 CC lib/blob/blobstore.o 00:50:05.727 CC lib/virtio/virtio.o 00:50:05.727 CC lib/virtio/virtio_vhost_user.o 00:50:05.727 LIB libspdk_accel.a 00:50:05.727 CC lib/nvme/nvme_rdma.o 00:50:05.986 CC lib/init/subsystem.o 00:50:05.986 CC lib/init/subsystem_rpc.o 00:50:05.986 CC lib/init/rpc.o 00:50:05.986 CC lib/virtio/virtio_vfio_user.o 00:50:05.986 CC lib/virtio/virtio_pci.o 00:50:05.986 CC lib/blob/request.o 00:50:05.986 CC lib/blob/zeroes.o 00:50:05.986 CC lib/bdev/bdev.o 00:50:05.986 LIB libspdk_init.a 00:50:05.986 CC lib/bdev/bdev_rpc.o 00:50:05.986 LIB libspdk_virtio.a 00:50:05.986 CC lib/blob/blob_bs_dev.o 00:50:06.245 CC lib/bdev/bdev_zone.o 00:50:06.245 CC lib/bdev/part.o 00:50:06.245 CC lib/bdev/scsi_nvme.o 00:50:06.245 CC lib/event/app.o 00:50:06.245 CC lib/event/reactor.o 00:50:06.245 CC lib/event/log_rpc.o 00:50:06.245 CC lib/event/app_rpc.o 00:50:06.245 CC lib/event/scheduler_static.o 00:50:06.525 LIB libspdk_event.a 00:50:06.525 LIB libspdk_nvme.a 00:50:06.793 LIB libspdk_blob.a 00:50:07.052 LIB libspdk_bdev.a 00:50:07.052 CC lib/blobfs/blobfs.o 00:50:07.052 CC lib/blobfs/tree.o 00:50:07.052 CC lib/lvol/lvol.o 00:50:07.052 CC lib/nbd/nbd.o 00:50:07.052 CC lib/nbd/nbd_rpc.o 00:50:07.052 CC lib/nvmf/ctrlr.o 00:50:07.052 CC lib/nvmf/ctrlr_discovery.o 00:50:07.052 CC lib/nvmf/ctrlr_bdev.o 00:50:07.052 CC lib/scsi/dev.o 00:50:07.052 CC lib/ftl/ftl_core.o 00:50:07.052 CC lib/ftl/ftl_init.o 00:50:07.311 CC lib/ftl/ftl_layout.o 00:50:07.311 CC lib/scsi/lun.o 00:50:07.311 CC lib/scsi/port.o 00:50:07.311 LIB libspdk_nbd.a 00:50:07.311 CC lib/scsi/scsi.o 00:50:07.311 CC lib/nvmf/subsystem.o 00:50:07.311 CC lib/nvmf/nvmf.o 00:50:07.311 CC lib/scsi/scsi_bdev.o 00:50:07.311 LIB libspdk_blobfs.a 00:50:07.311 CC lib/nvmf/nvmf_rpc.o 00:50:07.311 CC lib/nvmf/transport.o 00:50:07.311 CC lib/ftl/ftl_debug.o 00:50:07.311 CC lib/ftl/ftl_io.o 00:50:07.311 CC lib/nvmf/tcp.o 00:50:07.312 LIB libspdk_lvol.a 00:50:07.571 CC lib/nvmf/stubs.o 00:50:07.571 CC lib/nvmf/mdns_server.o 00:50:07.571 CC lib/scsi/scsi_pr.o 00:50:07.571 CC lib/ftl/ftl_sb.o 00:50:07.571 CC lib/ftl/ftl_l2p.o 00:50:07.571 CC lib/nvmf/rdma.o 00:50:07.571 CC lib/scsi/scsi_rpc.o 00:50:07.571 CC lib/scsi/task.o 00:50:07.571 CC lib/ftl/ftl_l2p_flat.o 00:50:07.571 CC lib/nvmf/auth.o 00:50:07.830 CC lib/ftl/ftl_nv_cache.o 00:50:07.830 CC lib/ftl/ftl_band.o 00:50:07.830 CC lib/ftl/ftl_band_ops.o 00:50:07.830 CC lib/ftl/ftl_writer.o 00:50:07.830 LIB libspdk_scsi.a 00:50:07.830 CC lib/ftl/ftl_rq.o 00:50:07.830 CC lib/ftl/ftl_reloc.o 00:50:07.830 CC lib/iscsi/conn.o 00:50:07.830 CC 
lib/ftl/ftl_l2p_cache.o 00:50:07.830 CC lib/vhost/vhost.o 00:50:07.830 CC lib/vhost/vhost_rpc.o 00:50:07.830 CC lib/vhost/vhost_scsi.o 00:50:07.830 CC lib/vhost/vhost_blk.o 00:50:07.830 CC lib/vhost/rte_vhost_user.o 00:50:08.090 CC lib/ftl/ftl_p2l.o 00:50:08.090 CC lib/ftl/mngt/ftl_mngt.o 00:50:08.090 CC lib/iscsi/init_grp.o 00:50:08.090 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:50:08.348 LIB libspdk_nvmf.a 00:50:08.348 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:50:08.348 CC lib/ftl/mngt/ftl_mngt_startup.o 00:50:08.348 CC lib/ftl/mngt/ftl_mngt_md.o 00:50:08.348 CC lib/ftl/mngt/ftl_mngt_misc.o 00:50:08.348 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:50:08.348 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:50:08.348 CC lib/iscsi/iscsi.o 00:50:08.348 CC lib/iscsi/md5.o 00:50:08.348 CC lib/iscsi/param.o 00:50:08.348 CC lib/ftl/mngt/ftl_mngt_band.o 00:50:08.608 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:50:08.608 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:50:08.608 CC lib/iscsi/portal_grp.o 00:50:08.608 CC lib/iscsi/tgt_node.o 00:50:08.608 CC lib/iscsi/iscsi_subsystem.o 00:50:08.608 LIB libspdk_vhost.a 00:50:08.608 CC lib/iscsi/iscsi_rpc.o 00:50:08.608 CC lib/iscsi/task.o 00:50:08.608 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:50:08.608 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:50:08.608 CC lib/ftl/utils/ftl_conf.o 00:50:08.608 CC lib/ftl/utils/ftl_md.o 00:50:08.608 CC lib/ftl/utils/ftl_mempool.o 00:50:08.608 CC lib/ftl/utils/ftl_bitmap.o 00:50:08.868 CC lib/ftl/utils/ftl_property.o 00:50:08.868 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:50:08.868 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:50:08.868 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:50:08.868 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:50:08.868 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:50:08.868 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:50:08.868 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:50:08.868 CC lib/ftl/upgrade/ftl_sb_v3.o 00:50:08.868 CC lib/ftl/upgrade/ftl_sb_v5.o 00:50:08.868 CC lib/ftl/nvc/ftl_nvc_dev.o 00:50:08.868 LIB libspdk_iscsi.a 00:50:08.868 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:50:08.868 CC lib/ftl/base/ftl_base_dev.o 00:50:08.868 CC lib/ftl/base/ftl_base_bdev.o 00:50:09.127 LIB libspdk_ftl.a 00:50:09.386 CC module/env_dpdk/env_dpdk_rpc.o 00:50:09.646 CC module/accel/ioat/accel_ioat.o 00:50:09.646 CC module/blob/bdev/blob_bdev.o 00:50:09.646 CC module/scheduler/gscheduler/gscheduler.o 00:50:09.646 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:50:09.646 CC module/sock/posix/posix.o 00:50:09.646 CC module/keyring/file/keyring.o 00:50:09.646 CC module/scheduler/dynamic/scheduler_dynamic.o 00:50:09.646 CC module/accel/error/accel_error.o 00:50:09.646 CC module/accel/dsa/accel_dsa.o 00:50:09.646 LIB libspdk_env_dpdk_rpc.a 00:50:09.646 CC module/keyring/file/keyring_rpc.o 00:50:09.646 LIB libspdk_scheduler_dpdk_governor.a 00:50:09.646 LIB libspdk_scheduler_gscheduler.a 00:50:09.646 CC module/accel/ioat/accel_ioat_rpc.o 00:50:09.646 CC module/accel/error/accel_error_rpc.o 00:50:09.646 CC module/accel/dsa/accel_dsa_rpc.o 00:50:09.646 LIB libspdk_blob_bdev.a 00:50:09.646 LIB libspdk_scheduler_dynamic.a 00:50:09.646 LIB libspdk_keyring_file.a 00:50:09.646 LIB libspdk_accel_ioat.a 00:50:09.905 LIB libspdk_accel_error.a 00:50:09.905 LIB libspdk_accel_dsa.a 00:50:09.905 CC module/keyring/linux/keyring.o 00:50:09.905 CC module/accel/iaa/accel_iaa.o 00:50:09.905 CC module/keyring/linux/keyring_rpc.o 00:50:09.905 CC module/bdev/error/vbdev_error.o 00:50:09.905 CC module/bdev/delay/vbdev_delay.o 00:50:09.905 LIB libspdk_sock_posix.a 00:50:09.905 LIB libspdk_keyring_linux.a 00:50:09.905 
CC module/bdev/delay/vbdev_delay_rpc.o 00:50:09.905 CC module/bdev/lvol/vbdev_lvol.o 00:50:09.905 CC module/bdev/gpt/gpt.o 00:50:09.905 CC module/blobfs/bdev/blobfs_bdev.o 00:50:09.905 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:50:09.905 CC module/bdev/malloc/bdev_malloc.o 00:50:09.905 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:50:09.905 CC module/accel/iaa/accel_iaa_rpc.o 00:50:09.905 CC module/bdev/gpt/vbdev_gpt.o 00:50:10.164 LIB libspdk_accel_iaa.a 00:50:10.164 CC module/bdev/malloc/bdev_malloc_rpc.o 00:50:10.164 CC module/bdev/error/vbdev_error_rpc.o 00:50:10.164 LIB libspdk_blobfs_bdev.a 00:50:10.164 LIB libspdk_bdev_delay.a 00:50:10.164 LIB libspdk_bdev_lvol.a 00:50:10.164 LIB libspdk_bdev_malloc.a 00:50:10.164 CC module/bdev/nvme/bdev_nvme.o 00:50:10.164 LIB libspdk_bdev_gpt.a 00:50:10.164 CC module/bdev/null/bdev_null.o 00:50:10.164 LIB libspdk_bdev_error.a 00:50:10.164 CC module/bdev/passthru/vbdev_passthru.o 00:50:10.164 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:50:10.164 CC module/bdev/null/bdev_null_rpc.o 00:50:10.164 CC module/bdev/raid/bdev_raid.o 00:50:10.164 CC module/bdev/split/vbdev_split.o 00:50:10.423 CC module/bdev/zone_block/vbdev_zone_block.o 00:50:10.423 CC module/bdev/aio/bdev_aio.o 00:50:10.423 CC module/bdev/aio/bdev_aio_rpc.o 00:50:10.423 CC module/bdev/split/vbdev_split_rpc.o 00:50:10.423 CC module/bdev/ftl/bdev_ftl.o 00:50:10.423 LIB libspdk_bdev_null.a 00:50:10.423 LIB libspdk_bdev_passthru.a 00:50:10.424 CC module/bdev/ftl/bdev_ftl_rpc.o 00:50:10.424 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:50:10.424 LIB libspdk_bdev_split.a 00:50:10.424 CC module/bdev/raid/bdev_raid_rpc.o 00:50:10.424 CC module/bdev/raid/bdev_raid_sb.o 00:50:10.424 CC module/bdev/nvme/bdev_nvme_rpc.o 00:50:10.424 LIB libspdk_bdev_aio.a 00:50:10.424 LIB libspdk_bdev_ftl.a 00:50:10.682 LIB libspdk_bdev_zone_block.a 00:50:10.682 CC module/bdev/iscsi/bdev_iscsi.o 00:50:10.682 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:50:10.682 CC module/bdev/nvme/nvme_rpc.o 00:50:10.682 CC module/bdev/nvme/bdev_mdns_client.o 00:50:10.682 CC module/bdev/virtio/bdev_virtio_scsi.o 00:50:10.683 CC module/bdev/virtio/bdev_virtio_blk.o 00:50:10.683 CC module/bdev/raid/raid0.o 00:50:10.683 CC module/bdev/virtio/bdev_virtio_rpc.o 00:50:10.683 CC module/bdev/raid/raid1.o 00:50:10.683 CC module/bdev/nvme/vbdev_opal.o 00:50:10.683 LIB libspdk_bdev_iscsi.a 00:50:10.683 CC module/bdev/nvme/vbdev_opal_rpc.o 00:50:10.683 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:50:10.941 CC module/bdev/raid/concat.o 00:50:10.941 CC module/bdev/raid/raid5f.o 00:50:10.941 LIB libspdk_bdev_virtio.a 00:50:10.941 LIB libspdk_bdev_nvme.a 00:50:10.941 LIB libspdk_bdev_raid.a 00:50:11.509 CC module/event/subsystems/keyring/keyring.o 00:50:11.509 CC module/event/subsystems/vmd/vmd.o 00:50:11.509 CC module/event/subsystems/vmd/vmd_rpc.o 00:50:11.509 CC module/event/subsystems/iobuf/iobuf.o 00:50:11.509 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:50:11.509 CC module/event/subsystems/sock/sock.o 00:50:11.509 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:50:11.509 CC module/event/subsystems/scheduler/scheduler.o 00:50:11.509 LIB libspdk_event_keyring.a 00:50:11.509 LIB libspdk_event_vmd.a 00:50:11.509 LIB libspdk_event_scheduler.a 00:50:11.509 LIB libspdk_event_vhost_blk.a 00:50:11.509 LIB libspdk_event_sock.a 00:50:11.509 LIB libspdk_event_iobuf.a 00:50:11.768 CC module/event/subsystems/accel/accel.o 00:50:12.027 LIB libspdk_event_accel.a 00:50:12.286 CC module/event/subsystems/bdev/bdev.o 00:50:12.545 LIB libspdk_event_bdev.a 
00:50:12.805 CC module/event/subsystems/nbd/nbd.o 00:50:12.805 CC module/event/subsystems/scsi/scsi.o 00:50:12.805 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:50:12.805 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:50:12.805 LIB libspdk_event_nbd.a 00:50:12.805 LIB libspdk_event_scsi.a 00:50:13.064 LIB libspdk_event_nvmf.a 00:50:13.064 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:50:13.323 CC module/event/subsystems/iscsi/iscsi.o 00:50:13.323 LIB libspdk_event_vhost_scsi.a 00:50:13.323 LIB libspdk_event_iscsi.a 00:50:13.582 CXX app/trace/trace.o 00:50:13.582 CC app/spdk_nvme_perf/perf.o 00:50:13.582 CC app/trace_record/trace_record.o 00:50:13.582 CC app/spdk_lspci/spdk_lspci.o 00:50:13.582 CC app/nvmf_tgt/nvmf_main.o 00:50:13.842 CC app/iscsi_tgt/iscsi_tgt.o 00:50:13.842 CC app/spdk_tgt/spdk_tgt.o 00:50:13.842 CC examples/util/zipf/zipf.o 00:50:13.842 CC test/thread/poller_perf/poller_perf.o 00:50:13.842 CC test/dma/test_dma/test_dma.o 00:50:13.842 LINK spdk_lspci 00:50:13.842 LINK spdk_trace_record 00:50:13.842 LINK nvmf_tgt 00:50:13.842 LINK zipf 00:50:14.101 LINK iscsi_tgt 00:50:14.101 LINK poller_perf 00:50:14.101 LINK spdk_tgt 00:50:14.101 LINK spdk_trace 00:50:14.101 LINK test_dma 00:50:14.101 LINK spdk_nvme_perf 00:50:24.089 CC test/thread/lock/spdk_lock.o 00:50:25.467 CC examples/ioat/perf/perf.o 00:50:26.035 CC app/spdk_nvme_identify/identify.o 00:50:26.035 LINK spdk_lock 00:50:26.295 LINK ioat_perf 00:50:28.836 LINK spdk_nvme_identify 00:50:43.723 CC app/spdk_nvme_discover/discovery_aer.o 00:50:43.723 LINK spdk_nvme_discover 00:50:56.003 CC examples/vmd/lsvmd/lsvmd.o 00:50:56.003 LINK lsvmd 00:50:56.940 CC examples/vmd/led/led.o 00:50:57.509 LINK led 00:51:00.047 CC examples/ioat/verify/verify.o 00:51:00.983 LINK verify 00:51:09.102 CC test/app/bdev_svc/bdev_svc.o 00:51:09.102 LINK bdev_svc 00:51:35.654 TEST_HEADER include/spdk/config.h 00:51:35.654 CXX test/cpp_headers/accel.o 00:51:35.654 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:51:35.654 CXX test/cpp_headers/accel_module.o 00:51:35.654 LINK nvme_fuzz 00:51:35.913 CXX test/cpp_headers/assert.o 00:51:37.290 CXX test/cpp_headers/barrier.o 00:51:38.227 CXX test/cpp_headers/base64.o 00:51:39.603 CXX test/cpp_headers/bdev.o 00:51:40.540 CXX test/cpp_headers/bdev_module.o 00:51:41.477 CXX test/cpp_headers/bdev_zone.o 00:51:41.736 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:51:42.672 CXX test/cpp_headers/bit_array.o 00:51:43.608 CXX test/cpp_headers/bit_pool.o 00:51:44.176 CXX test/cpp_headers/blob.o 00:51:45.110 CXX test/cpp_headers/blob_bdev.o 00:51:46.046 LINK iscsi_fuzz 00:51:46.046 CXX test/cpp_headers/blobfs.o 00:51:46.984 CXX test/cpp_headers/blobfs_bdev.o 00:51:47.963 CXX test/cpp_headers/conf.o 00:51:48.904 CXX test/cpp_headers/config.o 00:51:49.163 CXX test/cpp_headers/cpuset.o 00:51:50.101 CXX test/cpp_headers/crc16.o 00:51:51.039 CC app/spdk_top/spdk_top.o 00:51:51.039 CXX test/cpp_headers/crc32.o 00:51:51.977 CC examples/idxd/perf/perf.o 00:51:51.977 CXX test/cpp_headers/crc64.o 00:51:53.355 CXX test/cpp_headers/dif.o 00:51:53.355 LINK idxd_perf 00:51:53.921 CXX test/cpp_headers/dma.o 00:51:54.179 LINK spdk_top 00:51:54.745 CXX test/cpp_headers/endian.o 00:51:56.121 CXX test/cpp_headers/env.o 00:51:56.689 CXX test/cpp_headers/env_dpdk.o 00:51:57.256 CXX test/cpp_headers/event.o 00:51:58.190 CXX test/cpp_headers/fd.o 00:51:59.568 CXX test/cpp_headers/fd_group.o 00:52:00.945 CXX test/cpp_headers/file.o 00:52:01.883 CXX test/cpp_headers/ftl.o 00:52:03.261 CXX test/cpp_headers/gpt_spec.o 00:52:04.638 CXX 
test/cpp_headers/hexlify.o 00:52:06.544 CXX test/cpp_headers/histogram_data.o 00:52:07.921 CXX test/cpp_headers/idxd.o 00:52:09.297 CXX test/cpp_headers/idxd_spec.o 00:52:10.672 CXX test/cpp_headers/init.o 00:52:12.046 CXX test/cpp_headers/ioat.o 00:52:13.432 CXX test/cpp_headers/ioat_spec.o 00:52:14.834 CXX test/cpp_headers/iscsi_spec.o 00:52:16.227 CXX test/cpp_headers/json.o 00:52:17.604 CXX test/cpp_headers/jsonrpc.o 00:52:18.979 CXX test/cpp_headers/keyring.o 00:52:20.357 CXX test/cpp_headers/keyring_module.o 00:52:21.733 CXX test/cpp_headers/likely.o 00:52:22.669 CXX test/cpp_headers/log.o 00:52:23.619 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:52:23.878 CXX test/cpp_headers/lvol.o 00:52:24.444 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:52:24.703 CXX test/cpp_headers/memory.o 00:52:25.638 LINK vhost_fuzz 00:52:25.638 CXX test/cpp_headers/mmio.o 00:52:26.573 CXX test/cpp_headers/nbd.o 00:52:26.573 CXX test/cpp_headers/net.o 00:52:27.947 CXX test/cpp_headers/notify.o 00:52:28.205 CXX test/cpp_headers/nvme.o 00:52:28.205 CC examples/interrupt_tgt/interrupt_tgt.o 00:52:29.139 CXX test/cpp_headers/nvme_intel.o 00:52:29.139 LINK interrupt_tgt 00:52:29.705 CXX test/cpp_headers/nvme_ocssd.o 00:52:30.643 CXX test/cpp_headers/nvme_ocssd_spec.o 00:52:31.210 CXX test/cpp_headers/nvme_spec.o 00:52:32.147 CC app/vhost/vhost.o 00:52:32.147 CXX test/cpp_headers/nvme_zns.o 00:52:32.714 LINK vhost 00:52:32.714 CXX test/cpp_headers/nvmf.o 00:52:33.648 CXX test/cpp_headers/nvmf_cmd.o 00:52:34.584 CXX test/cpp_headers/nvmf_fc_spec.o 00:52:34.584 CC app/spdk_dd/spdk_dd.o 00:52:35.152 CXX test/cpp_headers/nvmf_spec.o 00:52:35.410 LINK spdk_dd 00:52:35.669 CXX test/cpp_headers/nvmf_transport.o 00:52:36.236 CXX test/cpp_headers/opal.o 00:52:37.173 CC test/app/histogram_perf/histogram_perf.o 00:52:37.173 CXX test/cpp_headers/opal_spec.o 00:52:37.432 LINK histogram_perf 00:52:37.689 CXX test/cpp_headers/pci_ids.o 00:52:38.256 CXX test/cpp_headers/pipe.o 00:52:38.515 CXX test/cpp_headers/queue.o 00:52:39.082 CXX test/cpp_headers/reduce.o 00:52:39.341 CXX test/cpp_headers/rpc.o 00:52:39.599 CC app/fio/nvme/fio_plugin.o 00:52:39.865 CXX test/cpp_headers/scheduler.o 00:52:40.838 CXX test/cpp_headers/scsi.o 00:52:41.406 LINK spdk_nvme 00:52:41.406 CXX test/cpp_headers/scsi_spec.o 00:52:41.973 CC test/env/mem_callbacks/mem_callbacks.o 00:52:42.231 CXX test/cpp_headers/sock.o 00:52:43.166 CXX test/cpp_headers/stdinc.o 00:52:43.733 CXX test/cpp_headers/string.o 00:52:43.733 LINK mem_callbacks 00:52:44.300 CXX test/cpp_headers/thread.o 00:52:44.867 CXX test/cpp_headers/trace.o 00:52:45.804 CXX test/cpp_headers/trace_parser.o 00:52:46.372 CXX test/cpp_headers/tree.o 00:52:46.372 CXX test/cpp_headers/ublk.o 00:52:47.308 CXX test/cpp_headers/util.o 00:52:47.876 CXX test/cpp_headers/uuid.o 00:52:48.814 CXX test/cpp_headers/version.o 00:52:48.814 CXX test/cpp_headers/vfio_user_pci.o 00:52:49.382 CXX test/cpp_headers/vfio_user_spec.o 00:52:49.382 CXX test/cpp_headers/vhost.o 00:52:50.759 CXX test/cpp_headers/vmd.o 00:52:51.697 CXX test/cpp_headers/xor.o 00:52:51.697 CC test/event/event_perf/event_perf.o 00:52:52.634 CXX test/cpp_headers/zipf.o 00:52:52.634 LINK event_perf 00:52:54.009 CC test/event/reactor/reactor.o 00:52:54.943 LINK reactor 00:52:56.321 CC test/event/reactor_perf/reactor_perf.o 00:52:56.887 LINK reactor_perf 00:52:58.791 CC test/event/app_repeat/app_repeat.o 00:52:59.358 LINK app_repeat 00:53:17.447 CC test/env/vtophys/vtophys.o 00:53:17.447 LINK vtophys 00:53:22.800 CC test/app/jsoncat/jsoncat.o 
00:53:23.736 LINK jsoncat 00:53:35.943 CC test/app/stub/stub.o 00:53:35.943 LINK stub 00:53:37.849 CC test/nvme/aer/aer.o 00:53:38.417 CC test/nvme/reset/reset.o 00:53:38.986 LINK aer 00:53:39.553 LINK reset 00:53:46.121 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:53:46.379 CC test/env/memory/memory_ut.o 00:53:46.637 LINK env_dpdk_post_init 00:53:50.825 LINK memory_ut 00:53:51.083 CC examples/thread/thread/thread_ex.o 00:53:52.460 LINK thread 00:53:57.730 CC test/event/scheduler/scheduler.o 00:53:59.107 LINK scheduler 00:54:00.485 CC examples/sock/hello_world/hello_sock.o 00:54:01.865 LINK hello_sock 00:54:40.617 CC test/env/pci/pci_ut.o 00:54:40.617 LINK pci_ut 00:54:40.617 CC test/nvme/sgl/sgl.o 00:54:40.617 CC test/nvme/e2edp/nvme_dp.o 00:54:40.617 LINK sgl 00:54:40.617 LINK nvme_dp 00:54:40.617 CC test/nvme/overhead/overhead.o 00:54:40.617 LINK overhead 00:54:43.150 CC test/nvme/err_injection/err_injection.o 00:54:43.717 LINK err_injection 00:54:43.976 CC test/nvme/startup/startup.o 00:54:44.543 LINK startup 00:54:54.517 CC test/nvme/reserve/reserve.o 00:54:55.452 LINK reserve 00:54:57.982 CC test/nvme/simple_copy/simple_copy.o 00:54:59.354 LINK simple_copy 00:55:31.425 CC test/rpc_client/rpc_client_test.o 00:55:31.425 LINK rpc_client_test 00:55:31.425 CC test/nvme/connect_stress/connect_stress.o 00:55:31.425 LINK connect_stress 00:55:31.425 CC test/nvme/boot_partition/boot_partition.o 00:55:31.994 LINK boot_partition 00:55:36.195 CC test/nvme/compliance/nvme_compliance.o 00:55:36.816 CC test/nvme/fused_ordering/fused_ordering.o 00:55:36.816 LINK nvme_compliance 00:55:37.752 LINK fused_ordering 00:55:47.727 CC test/nvme/doorbell_aers/doorbell_aers.o 00:55:47.727 LINK doorbell_aers 00:55:49.633 CC app/fio/bdev/fio_plugin.o 00:55:51.011 LINK spdk_bdev 00:55:51.579 CC test/nvme/fdp/fdp.o 00:55:52.955 LINK fdp 00:55:53.213 CC test/nvme/cuse/cuse.o 00:55:56.507 LINK cuse 00:55:59.047 CC examples/nvme/hello_world/hello_world.o 00:55:59.616 CC test/accel/dif/dif.o 00:55:59.616 LINK hello_world 00:56:00.998 LINK dif 00:56:05.191 CC examples/nvme/reconnect/reconnect.o 00:56:06.128 LINK reconnect 00:56:10.324 CC examples/nvme/nvme_manage/nvme_manage.o 00:56:11.706 LINK nvme_manage 00:56:13.085 CC examples/nvme/arbitration/arbitration.o 00:56:14.475 LINK arbitration 00:56:15.872 CC test/blobfs/mkfs/mkfs.o 00:56:16.810 LINK mkfs 00:56:17.380 CC test/lvol/esnap/esnap.o 00:56:29.601 CC examples/nvme/hotplug/hotplug.o 00:56:29.601 LINK hotplug 00:56:30.169 LINK esnap 00:56:42.381 CC examples/nvme/cmb_copy/cmb_copy.o 00:56:43.315 LINK cmb_copy 00:56:43.574 CC examples/nvme/abort/abort.o 00:56:46.107 LINK abort 00:56:58.317 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:56:58.317 LINK pmr_persistence 00:57:44.999 CC examples/accel/perf/accel_perf.o 00:57:44.999 CC examples/blob/hello_world/hello_blob.o 00:57:44.999 CC examples/blob/cli/blobcli.o 00:57:44.999 LINK hello_blob 00:57:44.999 LINK accel_perf 00:57:44.999 LINK blobcli 00:57:51.611 CC test/bdev/bdevio/bdevio.o 00:57:51.611 LINK bdevio 00:58:03.818 CC examples/bdev/bdevperf/bdevperf.o 00:58:03.818 CC examples/bdev/hello_world/hello_bdev.o 00:58:04.077 LINK hello_bdev 00:58:04.336 LINK bdevperf 00:59:12.016 CC examples/nvmf/nvmf/nvmf.o 00:59:12.017 LINK nvmf 00:59:30.100 19:26:28 -- spdk/autopackage.sh@44 -- $ make -j10 clean 00:59:30.100 make[1]: Nothing to be done for 'clean'. 
00:59:34.292 19:26:34 -- spdk/autopackage.sh@46 -- $ timing_exit build_release 00:59:34.292 19:26:34 -- common/autotest_common.sh@730 -- $ xtrace_disable 00:59:34.292 19:26:34 -- common/autotest_common.sh@10 -- $ set +x 00:59:34.292 19:26:34 -- spdk/autopackage.sh@48 -- $ timing_finish 00:59:34.292 19:26:34 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:59:34.292 19:26:34 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:59:34.292 19:26:34 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:59:34.292 19:26:34 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:59:34.292 19:26:34 -- pm/common@29 -- $ signal_monitor_resources TERM 00:59:34.292 19:26:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:59:34.292 19:26:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:59:34.292 19:26:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:59:34.292 19:26:34 -- pm/common@44 -- $ pid=175958 00:59:34.292 19:26:34 -- pm/common@50 -- $ kill -TERM 175958 00:59:34.292 19:26:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:59:34.292 19:26:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:59:34.292 19:26:34 -- pm/common@44 -- $ pid=175960 00:59:34.292 19:26:34 -- pm/common@50 -- $ kill -TERM 175960 00:59:34.292 + [[ -n 2151 ]] 00:59:34.292 + sudo kill 2151 00:59:34.304 [Pipeline] } 00:59:34.323 [Pipeline] // timeout 00:59:34.328 [Pipeline] } 00:59:34.345 [Pipeline] // stage 00:59:34.350 [Pipeline] } 00:59:34.367 [Pipeline] // catchError 00:59:34.376 [Pipeline] stage 00:59:34.379 [Pipeline] { (Stop VM) 00:59:34.393 [Pipeline] sh 00:59:34.678 + vagrant halt 00:59:37.967 ==> default: Halting domain... 00:59:47.980 [Pipeline] sh 00:59:48.259 + vagrant destroy -f 00:59:50.795 ==> default: Removing domain... 00:59:51.744 [Pipeline] sh 00:59:52.026 + mv output /var/jenkins/workspace/ubuntu22-vg-autotest_2/output 00:59:52.035 [Pipeline] } 00:59:52.051 [Pipeline] // stage 00:59:52.057 [Pipeline] } 00:59:52.073 [Pipeline] // dir 00:59:52.079 [Pipeline] } 00:59:52.097 [Pipeline] // wrap 00:59:52.103 [Pipeline] } 00:59:52.118 [Pipeline] // catchError 00:59:52.128 [Pipeline] stage 00:59:52.130 [Pipeline] { (Epilogue) 00:59:52.144 [Pipeline] sh 00:59:52.425 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 01:00:10.521 [Pipeline] catchError 01:00:10.523 [Pipeline] { 01:00:10.539 [Pipeline] sh 01:00:10.823 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 01:00:11.082 Artifacts sizes are good 01:00:11.091 [Pipeline] } 01:00:11.108 [Pipeline] // catchError 01:00:11.119 [Pipeline] archiveArtifacts 01:00:11.126 Archiving artifacts 01:00:11.490 [Pipeline] cleanWs 01:00:11.504 [WS-CLEANUP] Deleting project workspace... 01:00:11.504 [WS-CLEANUP] Deferred wipeout is used... 01:00:11.531 [WS-CLEANUP] done 01:00:11.532 [Pipeline] } 01:00:11.549 [Pipeline] // stage 01:00:11.554 [Pipeline] } 01:00:11.570 [Pipeline] // node 01:00:11.575 [Pipeline] End of Pipeline 01:00:11.610 Finished: SUCCESS